What you will do
As a DevOps Engineer at Current, you will work as part of our engineering and product team to build and maintain the infrastructure and development tooling that keeps our product running. You will be involved in every stage of the product life cycle, deploying it into the wild and seeing its positive impact on real people. You will work on new features as well as on our existing codebase.
As a DevOps specialist, you will work on the nuts and bolts of our infrastructure and platform. Experience across the stack will be crucial for tackling some very interesting scalability and high-availability problems. You will work closely with the rest of the engineering team to provide the tooling and reassurance they need for their day-to-day work. Last but not least, you will work on infrastructure and platform regulatory compliance from a security and audit point of view.
About you
- You are flexible and can learn quickly on the job
- You enjoy solving problems and making a difference
- You can pragmatically balance quality with a fast-paced schedule
- You are a good team player, ready to help, debate, compromise and work together
- You are comfortable prototyping and delivering incrementally, adapting to customer needs and technical constraints, always with the user in mind
- You have an eye for detail without losing sight of the big picture
We would like you to...
- Have a degree in Computer Science or a related field, or equivalent training or work experience
- Have solid experience with Linux systems and core AWS technologies
- Have experience with Infrastructure as Code and Configuration as Code tooling
- Be comfortable reviewing and troubleshooting your own and other people's code
- Have experience with Microservice architecture patterns and platform configurations
- Have experience working with RESTful APIs and JSON
- Have experience writing tests and testable code
- Be comfortable in codebases spanning multiple languages
- Have a keen eye for automating the software delivery pipeline
- Be able to research and learn the right tool or technology for the job
Bonus points for...
- Experience with Chef/Chef Automate
- Exposure to DevOps in a regulated environment; our particular focus is HIPAA compliance
- Experience with the HashiCorp ecosystem (Vault, Nomad, Consul, Terraform)
- Experience deploying and managing big data tools such as Hadoop, MapReduce and Spark, as well as distributed data technologies such as Kafka and Kafka Streams
- Experience running highly available distributed systems to a customer SLA
Technologies we use
- Backend: Java (Spring), Python, .NET
- Databases: PostgreSQL (RDS), Couchbase and others
- Infrastructure: Linux, RabbitMQ, AWS via Terraform, Chef, Nomad, Consul and Fabio
- Monitoring: DataDog and ELK
In just two years, we have built our product, monitored 1,000 patients, assembled a phenomenal team of 21 and gained EU regulatory approval. We raised one of the largest seed rounds in UK history, and we're now bringing our product to some of the top healthcare providers in the world.
We offer a flexible work environment where you'll have the autonomy and freedom to do what you do best. We are hugely ambitious and focussed, but we have fun. As a company we are supportive, trusting and transparent, and we're early-stage enough that you'll have the chance to build the company you want to work for.
Something to think about: you can go home after a day's work knowing that you may well have directly contributed to saving someone's life. This gets everyone in the company very excited.
On top of that we provide:
- Competitive salary
- Stock options – our company is your company. We want to build it together
- Spec your own dev environment
- Free lunch every Wednesday
- Remote friendly with support for flexible working