Ryan Rix Fights for the Users

About Me

I care deeply about technology that respects its users. Whether through business models, privacy practices, or licensing, the power and safety of my end users drives the work I take on: protecting and expanding Uber users' privacy within the platform, extensive work in libre/free software communities, and experience hosting and building decentralized infrastructure and web services.

Throughout my career I've learned a healthy distrust of distributed systems: from hardening and scaling the payments systems and marketplace platform at Uber, to scaling an iOS app backend from pre-alpha through launch, to running a home lab on top of Kubernetes.

Current Work

Uber, Privacy Engineering

I integrate a privacy layer between our query/analytics tooling and the databases, ensuring that operations teams and data analysts can work with the data they need to keep the system functioning, while protecting individual users' privacy through private-data scrubbing and novel differential privacy algorithms developed by researchers at UC Berkeley.
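The specific algorithms are the UC Berkeley researchers' work and aren't reproduced here, but the core primitive behind differentially private analytics queries can be sketched with the textbook Laplace mechanism (function names and parameters below are illustrative, not Uber's actual API):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sample from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: adding or removing one user changes a count by at
    # most `sensitivity`, so noise with scale sensitivity/epsilon yields an
    # epsilon-differentially-private answer for that count.
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller epsilon means more noise and stronger privacy; the analyst sees an answer that is useful in aggregate but hides any single user's contribution.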

I'm also designing a data access service that lets users request a snapshot of the information Uber holds about them, empowering them to take ownership of their data and explore it themselves.

Technologies and Skills acquired:

  • MVCS service development in Go and Java
  • Algorithmic security e.g. Differential Privacy

Melchior, Side Project

Melchior is an attempt to build a storage and search engine for the data I generate and consume in my daily life: a system that lets me ask "What photos did I take the last time I was in Seattle?" or "What was the article about differential privacy I was reading on the ACM last month?"

Melchior is implemented as a content-addressable store backed by the filesystem, with PostgreSQL providing a metadata/object store and full-text search capabilities. The backend is currently a Python 3 Flask application, with the frontend written in ClojureScript. The final version will have a backend written in Clojure, allowing server-side rendering of the ClojureScript frontend and sharing of code and data between frontend and backend.
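The content-addressable core of such a store is small enough to sketch. This is not Melchior's actual code, just a minimal illustration in Python (the store path and names are invented), with metadata and search left to PostgreSQL:

```python
import hashlib
import tempfile
from pathlib import Path

# Hypothetical store root; a real system would point at persistent storage.
STORE = Path(tempfile.gettempdir()) / "cas-demo"

def put(content: bytes) -> str:
    """Write a blob under its SHA-256 digest; identical content dedupes itself."""
    digest = hashlib.sha256(content).hexdigest()
    path = STORE / digest[:2] / digest  # fan out on the first byte, git-style
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():
        path.write_bytes(content)
    return digest

def get(digest: str) -> bytes:
    """Fetch a blob by the digest a metadata row would reference."""
    return (STORE / digest[:2] / digest).read_bytes()
```

Because the digest is derived from the content, the same photo or article is stored exactly once, and metadata rows in PostgreSQL can reference blobs by digest without caring where they physically live.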

Technologies and Skills acquired:

  • Reactive programming using ClojureScript's React.JS wrapper
  • SQL/pgSQL query development
  • Data modelling, graph search and content-addressable storage

Complete Computing Environment, Side Project

Complete Computing Environment (CCE, for short) is the operating system I use – a heavily customized Emacs running on top of Fedora Linux. It is the toolchest that keeps me a productive and happy engineer, encompassing everything from smart e-mail handling, to a well-honed workflow of task tracking, calendaring, and note taking, to a simple and intuitive programming environment for the languages I work in. All of this is documented and exposed as a web site, and embodies my approach to engineering craftsmanship – the perfect tool for the job, well honed and well held.

Previous Work

Uber, Payments SRE, Business Infrastructure

As a reliability engineer supporting the Payments organization, I built a culture of reliability through close working relationships with developers, engineering managers, and business/product managers within the organization, as well as with external payment service providers.

Our team was responsible for ensuring the production system could safely collect millions of dollars a week and disburse that money to Uber partners and restaurants, protecting the livelihood of millions of small business owners around the world. We implemented and institutionalized a rigorous monthly scalability test through "floodgating", and worked directly with service teams to alleviate bottlenecks within the system. We also worked with external payment service providers like Braintree and Paytm to ensure their platforms could scale with our business's unique usage patterns.

We were embedded in the architecture and design team of our next-gen collection/disbursement pipeline. This allowed us to ensure that the system was able to scale safely while still allowing development of new features such as batched billing, direct pay through "Instant Pay" vendors, tipping and the UberEATS business and future product innovations.

I also acted as the primary architect and technical lead for the infrastructure of a Secure Storage environment designed to hold sensitive customer data such as driver's license photos, bank account numbers, and payment tokens.

This environment operated completely apart from Uber's production infrastructure, which allowed us to make design decisions that were untenable within the limits of production. I architected a design and MVP for a service-oriented architecture that could automatically scale services horizontally – aiming for a system that would require as little human interaction as possible once feature-complete – and eventually handed it off to others on my team. We also built an automated deploy pipeline with an end-to-end latency of 15 minutes from code commit to code running in the development cluster.

We worked with the Infra Security team to ensure the entire stack was secure from intrusion: starting with the AWS network architecture and security groups, moving to the hardening of service configurations, and finishing with the secure configuration of the distribution software. Finally, we worked with internal and external compliance auditors to certify the environment for storing various classes of user data.

Technologies and Skills acquired:

  • Secure cloud architecture and compliance certification
  • Automation of Cloud environments using Spinnaker, Terraform and AWS
  • Linux Kernel and User Space hardening
  • Designing immutable systems using Ansible, Packer, and Debian Jessie
  • Scalability and Load testing through fault injection and floodgate testing

Uber, Realtime Management and Operations

The Realtime Management and Operations team was formed to fill a need for infrastructure engineers specialized in the unique constraints of operating a highly available global distributed system written in Node.js.

As the second member of the team, I worked with engineering teams within the Realtime architecture on the resiliency and scalability of their systems: automating tooling, developing training materials and best practices for on-call and incident-response scenarios, and contributing stability work directly within services.

Along with three other developers, I designed and implemented a unique datacenter failover mechanism whereby we would serialize a user's state onto their device, gaining the ability to restore that state in a different datacenter in the event of a catastrophic failure, with minimal network and storage overhead.
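The actual mechanism is Uber-internal, but the shape of the idea – a compact, integrity-checked state blob the client carries across datacenters – can be sketched as follows (the key, field names, and framing are all invented for illustration):

```python
import base64
import hashlib
import hmac
import json
import zlib

SECRET = b"dc-shared-key"  # hypothetical key shared across datacenters

def seal_state(state: dict) -> str:
    """Compress and sign user state so the client can carry it across a failover."""
    blob = zlib.compress(json.dumps(state, separators=(",", ":")).encode())
    tag = hmac.new(SECRET, blob, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(tag + blob).decode()

def restore_state(token: str) -> dict:
    """Verify the signature in the new datacenter, then rehydrate the state."""
    raw = base64.urlsafe_b64decode(token)
    tag, blob = raw[:32], raw[32:]
    if not hmac.compare_digest(tag, hmac.new(SECRET, blob, hashlib.sha256).digest()):
        raise ValueError("tampered failover token")
    return json.loads(zlib.decompress(blob))
```

Because the device holds the sealed token, no cross-datacenter replication is needed: any datacenter holding the key can verify and restore the session.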

As an early member of a core infrastructure team I was also heavily involved in hiring and interviewing, and developed a keen eye for strong talent with a healthy passion for their work and the field as a whole.

Finally, I was involved to varying degrees in four datacenter turnup events, as we scaled out of a leased hosting provider onto our own hardware and network, and eventually into the Chinese market with a completely isolated datacenter topology and its unique security concerns. My involvement ranged from early QA of realtime services on fresh hardware to coordinating a large-scale project to bring an evolved microservice architecture into the China datacenters while maintaining the legacy architecture's stability and scaling needs within that same set of hosts.

Technologies and Skills acquired:

  • Automation using Python and Fabric
  • Debian Jessie and Ubuntu Linux administration
  • Distributed Systems engineering with Node.js and Docker
  • Datacenter Turnup, provisioning of thousands of servers and project management

Backend Engineer, Storehouse Media

At Storehouse I did feature development and backend engineering for a multimedia iOS application that was featured in the App Store and various media outlets on day one of its launch. Scaling a simple Ruby on Rails application from a single Heroku dyno onto an AWS EC2-based architecture, I maintained the agility of the product's evolving features while working on cost-saving and scaling measures in preparation for that launch. I worked directly with frontend engineers and designers on API design, debugging, and test infrastructure.

Hacker, MadeSolid

I was brought on at MadeSolid to prototype and eventually implement a large-scale 3D printing service bureau focused on simple prototyping and alpha-test runs for hardware startups. This spanned the full stack, from printer construction and R&D, to render pipelines, to web frontends for uploading, pricing, and submitting build orders.

Board Member and Hacker, HeatSync Labs

I was an early member of the Mesa, Arizona-based hackerspace HeatSync Labs, a non-profit focused on knowledge sharing between members through a pool of shared tools and resources. I served as secretary of the board of directors and was an influential member of the community, focusing on its health and maintaining the 3D printing work area.

I also built out automation for the space, including a door-access control system which ran as a progressive web application embedded on various devices within the space, and a membership and inventory management portal written in Ruby on Rails.

Project Contributor, Fedora Project

I worked in various roles as a contributor within the Fedora Project's KDE, Packaging, News, Ambassadors, and Marketing Special Interest Groups (SIGs). I integrated third-party software and provided ways to beta-test KDE software within Fedora, and provided updates for the wider Fedora and open source communities through the News, Ambassadors, and Marketing SIGs. While interning with Red Hat, Inc., Fedora's sponsor, I was part of a team that planned, organized, and executed Fedora's User/Developer Conference at my local university. I also worked with a group attempting to get a Fedora release named Beefy Miracle, after the semi-official Fedora mascot.

More Work

I maintain a separate portfolio of individual works.

Author: Ryan Rix <ryan@whatthefuck.computer>

Created: 2017-07-06 Thu 20:41