ByteIQ: Building an AWS Platform for Digital Health companies
Posted by DNX on 16 February 2020

About ByteIQ
ByteIQ is a digital health startup focused on big data. The company planned to run all of its products on AWS and leverage AWS security and compliance standards. ByteIQ worked with DNX Solutions to design, implement, and support an efficient AWS platform, achieving 30% cost savings compared to the projected costs.
The Business Challenge
ByteIQ asked DNX to provide a solid design and implementation of a brand-new AWS platform to operate the new product. The environment was required to be secure by default, following the high compliance standards for medical applications in Australia. The application consists of three parts:
- Application Portal
- Application Data Storage/Processing
- Application Client Module
The platform had well-defined requirements covering security, cost-efficiency, and high availability. DNX uses the AWS Well-Architected Framework to address application migration projects.
The Solution
Following the AWS Well-Architected Framework, DNX proposed building an AWS foundation and modernising the applications with Docker containers, applying a blue/green deployment strategy on top: essential features of a DevOps-oriented culture.
The Project
DNX proposed a two-phase project to address the requirements:
Phase 1 — AWS Foundation: a multi-account AWS platform using infrastructure as code and CI/CD pipelines, built on the following AWS services:
- AWS Organisations / Consolidated Billing
- Single Sign-On (SSO) using GSuite
- Client VPN to connect to private resources
- Multi-tier VPC (public DMZ, private, and secure subnets)
- S3 bucket for staging data
- KMS strategy to apply encryption at rest
- AWS GuardDuty / CloudTrail / SNS topics for alerting
- VPC Peering
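To illustrate the encryption-at-rest strategy above: with KMS, services typically use envelope encryption, requesting a data key, encrypting locally with the plaintext copy, and storing only the KMS-wrapped copy alongside the data. A minimal sketch of that request (the client is injected so the pattern runs without AWS; the key alias is hypothetical, not taken from the project):

```python
def request_data_key(kms_client, key_id):
    """Ask KMS for a fresh data key.

    Returns (plaintext_key, wrapped_key). The plaintext key encrypts data
    locally and is then discarded; only the wrapped copy is stored at rest.
    """
    resp = kms_client.generate_data_key(KeyId=key_id, KeySpec="AES_256")
    return resp["Plaintext"], resp["CiphertextBlob"]


class StubKMS:
    # Stand-in for a boto3 KMS client so the sketch is self-contained.
    def generate_data_key(self, KeyId, KeySpec):
        return {"KeyId": KeyId, "Plaintext": b"\x00" * 32, "CiphertextBlob": b"wrapped"}


plain, wrapped = request_data_key(StubKMS(), "alias/app-data")
```

In a real deployment the stub would be replaced by `boto3.client("kms")`, and the wrapped key would be stored next to the ciphertext it protects.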
Phase 2 — Application migration: All applications were migrated to Docker containers and deployed using AWS ECS with Spot instances:
- AWS ALB / CloudFront / WAF for the application portal
- AWS ECS workers for data processing using CloudWatch Events
- Auto Scaling for containers using CPU metrics
- Auto Scaling for EC2 using memory metrics
- Zero-downtime blue/green deployments using CodeDeploy
- CI/CD pipelines using GitLab
- DynamoDB to store NoSQL data from medical clinics
- RDS Aurora MySQL to store portal metadata
During the first phase, the DNX team worked to deliver the AWS platform supporting the project; in the second phase, the DNX and ByteIQ teams worked together to identify the best patterns for deploying the application stack.
The client application was written in Java, and we designed an integration that pushes daily data from medical clinics to S3 with encryption in transit and at rest.
Once a group of files is saved in S3, a CloudWatch Event triggers the data-processing pipeline on ECS; this process parses the unstructured data and saves the results to DynamoDB.
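The shape of that parsing step can be sketched as a pure function that turns one raw, delimited record into the attribute-value format DynamoDB's `put_item` expects. The field names and delimiter here are hypothetical, not taken from the actual clinic data:

```python
def to_dynamodb_item(raw_line):
    """Parse one pipe-delimited clinic record into put_item format."""
    clinic_id, date, metric, value = raw_line.strip().split("|")
    return {
        "pk": {"S": f"CLINIC#{clinic_id}"},
        "sk": {"S": f"DATE#{date}"},
        "metric": {"S": metric},
        "value": {"N": value},  # DynamoDB numbers travel as strings
    }


item = to_dynamodb_item("c-042|2020-02-16|consultations|17\n")
```

A real ECS worker would then call `dynamodb.put_item(TableName=..., Item=item)` for each parsed record, or batch them with `batch_write_item`.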
The application portal was migrated to Docker (PHP) and deployed behind an Application Load Balancer (ALB).
AWS CloudFront and Web Application Firewall (WAF) were added to ensure a better experience for users and at the same time enhance the application security.
The following high-level design diagram summarises the used AWS stack:

Deployment strategy
DNX designed and implemented a zero-downtime CI/CD pipeline applying the blue/green deployment architecture proposed initially. We used GitLab as the DevOps lifecycle tool and its CI/CD pipelines, achieving the customer’s goals and promoting a value stream to our customers.

Conclusion
By migrating its workloads to AWS, ByteIQ obtained a reliable, robust, secure, and cost-effective cloud platform, allowing the team to rapidly experiment with a breadth of new services and build a more competitive platform for their customers using continuous delivery concepts.
The main concept that drives data-related projects is how to transform data into insights that impact the business.

By using DNX services, ByteIQ could focus on their core business and leave the cloud platform challenges with us.
At DNX Brasil, we work to bring a better cloud and application experience to digital-native companies. We focus on AWS, Well-Architected Solutions, Containers, ECS, Kubernetes, Continuous Integration/Continuous Delivery, and Service Mesh. We are always looking for experienced cloud-computing professionals for our team, with a focus on cloud-native concepts. Check out our open-source projects at https://github.com/DNXLabs and follow us on Twitter, LinkedIn, or YouTube.
Reinventing myDNA Business with Data Analytics
Posted by DNX
About myDNA
myDNA is a health tech company bringing technology to healthcare with a mission to improve health worldwide. They developed a personalised wellness myDNA test that lets you discover how your body is likely to respond to food, exercise, sleep, vitamins, medications, and more, according to your genome.
It is a life changer for those who want to skip the lengthy trial-and-error process and achieve their desired fitness goals sooner. Moreover, myDNA is a reliable way of assisting practitioners in selecting safe and effective medications for their patients based on their unique genetic makeup. For example, doctors can prescribe antidepressants and post-surgery painkillers that are more likely to be successful in the first instance.
The most exciting part is that this technology, which has historically been so expensive, is now available at an affordable price for normal people like you and me! Not to mention, finding out you have relatives on the other side of the world through a family matching DNA test is pretty cool!
Providing life health services based on accurate data
After replatforming myDNA IT systems from a distributed monolithic database to a microservice architecture, the team needed assistance in delivering automated tools and meaningful insights throughout the business. This would give them an understanding of potential areas and markets to expand their services, the agility to move and change fast as a business, and an advantage over competitors by delivering the services, products, and customer experience their customers seek. This is all based on data rather than assumptions.
myDNA was seeking a cloud consultant that could assist them in exploring and understanding events by expanding their data and analytics capabilities. In addition, the business planned to increase their data skills so their in-house IT team would be able to maintain and continue building the new applications in a safe and effective environment.
AWS performed a Data Lab with myDNA stakeholders where they co-designed a technical architecture and built a Pilot to start the journey. This gave the myDNA team an understanding of all the AWS cloud data and analytics solutions available. However, they required a personalised and well-designed technology roadmap taking their IT skills and myDNA business goals into consideration, as opposed to a ‘one solution fits all’ strategy. This is exactly what DNX Solutions delivered!
How did DNX Solutions help myDNA establish a modern security data strategy in just one month?
The project started with DNX’s effective and interactive discovery, where our team identified the company’s needs and gained a complete picture of the existing company data, the architecture in use, and potential technological and team challenges. With that, our team created a clear roadmap where outcomes were evident even before the conclusion of the project.

In the initial phase, DNX built the MVP using the AWS Console and general-purpose roles, connected the data sources, and built simple reports and dashboards presenting basic metrics.
After that, our data cloud experts built a more robust solution fit for production, with a focus on resilience, performance, reliability, security, and cost optimisation, using DevOps methodology, CI/CD pipelines, automation, and serverless architecture wherever possible.
Once the core platform was established, we brought more data sources, integrating them into the solution, and helped to build more complex and advanced solutions such as Machine Learning.
AWS Services Used
S3 Datalakes
Raw: hosts the extracted data, allowing governance, auditability, durability, and security controls
DynamoDB / SSM
Stores configuration tables, parameters, and secrets used by the pipeline and ETL Jobs to automate the data process
Crawlers
Crawlers can scan the files in the datalake or databases, infer the schema and add the tables on the data catalogues
Glue ETL
Serverless Spark solution for high performance ETL jobs within AWS
Data Catalogues
Stores the metadata and metrics regarding Databases, Connections, Jobs, partitions, etc. It can grant/deny access up to the table level
Quicksight
Can consume data from multiple sources within AWS and allow user-friendly development of reports, analytics and dashboards integrated with AWS platform
Lake Formation
Low code solution to govern and administer the Data Lake. An additional layer of security including row/column level controls
Lambdas
Wildcards that help tie the solution together across a variety of roles and use cases
Athena
Athena can query data stored in S3 using simple SQL. It allows access segregation to metadata and history via workgroups, which can be compounded with IAM roles
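As a rough sketch of how such a query might be submitted programmatically (the client is injected so the example runs without AWS; the SQL, database, and output location are hypothetical, not taken from the myDNA project):

```python
def run_athena_query(athena_client, sql, database, output_s3):
    """Submit an Athena query and return its execution id.

    Results are written to the given S3 output location; callers poll
    get_query_execution until the state is SUCCEEDED before reading them.
    """
    resp = athena_client.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return resp["QueryExecutionId"]


class StubAthena:
    # Stand-in for a boto3 Athena client so the sketch is self-contained.
    def start_query_execution(self, QueryString, QueryExecutionContext, ResultConfiguration):
        return {"QueryExecutionId": "qe-123"}


qid = run_athena_query(
    StubAthena(),
    "SELECT test_type, count(*) FROM results GROUP BY test_type",
    "datalake_raw",
    "s3://example-athena-results/",
)
```

With a real `boto3.client("athena")`, workgroup-level settings can enforce the output location and per-query data-scanned limits, which complements the IAM-based access segregation mentioned above.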
myDNA now provides real insights at the click of a button
There is no doubt that DNX Solutions delivered value to myDNA. The team reported they were able to deliver another data transformation that depended directly on the result of DNX’s work.
Before engaging DNX, the myDNA team could take three to five days to deliver a few manual reports in response to business queries. The company is now able to deliver different reports based on live data with just a click of a button. Not only does the business have accurate, insightful data to decide what, when, and where to invest, but it also has the agility to make these decisions.
The myDNA team can now focus on what they do best rather than spending days merging unreliable information from various sources to produce a handful of outdated reports.
The next step for myDNA is to adopt AWS machine learning to unveil predictions, achieving far better real-world results.

Working with DNX has been by far the best engagement that I have ever had with any consultant company in my life!
An exceptional Agile engagement gave us a lot of flexibility and allowed us to evolve and deal with new requirements that we didn’t even know we had as we went along the process.
It was a pleasure and an exceptional experience to work with DNX team. They provided a very well-considered solution taking the organisation and skills we have into consideration, setting us up for future success.
Effective leadership depends on using data to make important decisions; a broad view with accurate information is needed for meaningful action. That is how a modern data strategy is built: delivering insights to the people and applications that need them, securely and at any scale. DNX Brasil helps your company apply data analytics to its most business-critical use cases, with complete solutions that require data expertise. Discover the value of your data.
Scalamed: Building a HIPAA compliance environment while migrating from Heroku to AWS
Posted by DNX

About Scalamed
Scalamed is an Aussie startup that allows patients to receive prescriptions directly from their clinician to their mobile phones.
Taking a patient-centred approach, Scalamed believes the company must empower patients with the right information at their fingertips to make health personalised for them.
Combining the experience of patients, care-givers, doctors, pharmacists, and geeks in a single solution, Scalamed aims to provide a friendly, personal, intuitive, secure, and caring healthcare solution.
For Dr Tal Rapske, Scalamed Founder, the journey to helping patients manage their health simply, conveniently, and on-the-go starts with medication management. As Rapske explained it, ScalaMed is in-effect a ‘digital prescription inbox’, secured by blockchain technology, which patients can access from their smartphone and share with their treating doctors and pharmacists.
“We identified a gap where a next-generation technology could improve the experience of medication management and increase adherence. By allowing patients to securely store their prescriptions digitally, doing away with paper, we can reduce medication errors, allergy mix-ups, and unnecessary hospitalisations, while giving patients their prescription history and information, and improving the convenience and ease of managing and purchasing one’s prescriptions,” Rapske explained.
The Business Challenge
While uncovering the market’s needs, Scalamed identified that the main concerns and questions about the solution revolve around security, ease of use, and administration burden. In response to the security concern, Scalamed decided to prepare the application to comply with HIPAA standards for sensitive patient-data protection.
Another challenge: Scalamed was scaling the business globally and looking to improve resource usage, grow more dynamically, remain light on infrastructure operations, and gain more control in the long run. However, with Heroku as its current cloud platform, Scalamed could not achieve this due to Heroku platform limits.
So Scalamed needed a partner that could solve both challenges: building a HIPAA-compliant environment and preparing the business for future growth. DNX Solutions was engaged to address these challenges using AWS as the cloud solutions provider.
The 5-step Solution
Step 1: Identifying issues, risks, and opportunities
DNX started by assessing the current state of the application infrastructure, delivering a Well-Architected Framework review in which DNX identified risks and opportunities against the operational excellence, security, reliability, performance efficiency, and cost optimisation pillars. HIPAA best practices were also considered while assessing the workloads.

About 39 items were classified as high risk. Security and reliability were the main focuses for the business, followed by performance efficiency. These items included identity and permissions management, network resources, networking configuration, security events, workload service architecture design for adaptability and performance, and data protection.
With a clear understanding of both business and technical needs in-hand, DNX and Scalamed determined that an Application Transformation would be the best path to solve those challenges.
A Transformation journey was defined as a deliverable scope, with security as a main topic to be covered in order to achieve the desired outcome.
Step 2: Enhancing security through DNX.One Well-Architected Foundation
The project started by deploying DNX.One Well-Architected Foundation (aka DNX.One) – an automated platform built with simplicity in-mind, Infrastructure as Code (IaC), open source technologies, and designed for AWS with well-architected principles. It enables the application to thrive while the business can remain focused on customer solutions.
DNX.One is a ready-to-go solution that aims to solve the most common business needs regarding cloud infrastructure as it fits different application architectures (including containers), has flexibility and automation for distinct platforms, and enhances security and management to keep business under control.
Some high-level security best practices that were leveraged while building Scalamed’s infrastructure were:
- Networking using security best practices for VPC
- Multiple Availability Zones
- Security groups and network Access Control List as an optional layer of security for VPC
- IAM policies to control access
- AWS tools to monitor VPC components and VPC connections such as CloudWatch
- A secure dedicated and isolated subnet for the database which is not accessible to the public internet
- A Centralised CloudTrail to monitor events history
- GuardDuty to provide continuous monitoring of AWS accounts
- AWS Key Management Service (KMS) to create and manage cryptographic keys and control their use across AWS services
While building a HIPAA-compliant environment for Scalamed, DNX made substantial changes to DNX.One that are now defaults for any new customer, such as account-level separation to isolate distinct environments, granular access control for each workload, and explicit grant-list permissions.
Having a separate, audit-only account was another crucial topic to be covered, enabling the HIPAA audit team to access everything with integrity.

Figure 1- IAM – single sign-on

Figure 2 – Networking

Figure 3: account management and separation
Step 3: Application Transformation Strategy
With minimal infrastructure operations in mind, DNX started the application transformation strategy. A migration from Heroku to AWS using an Elastic Container Service (ECS) cluster on EC2 instances was proposed, as it enhances performance and resource usage. Notably, DNX used Spot Instances for the ECS cluster, focusing on availability while reducing AWS costs.
Upon deployment of DNX.One, we migrated Scalamed deployment to Docker containers using Elastic Container Service (ECS) bringing together both the existing automated tests and database migration scripts to its CI/CD pipeline.

An internal Application Load Balancer was used to control internal access through Network Access Control List (NACLs) and/or Security Groups.
As a security best practice, environment variables were used to pass secret or sensitive data securely to containers. SSM Parameter Store held the secret keys and variables rather than keeping values in plaintext, enabling only authorised services to access and change them when convenient.
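A minimal sketch of that pattern, loading decrypted parameters at container start-up (the client is injected so the sketch runs without AWS; the parameter name is hypothetical, not taken from Scalamed's configuration):

```python
def load_secrets(ssm_client, names):
    """Fetch SecureString parameters decrypted, as a {name: value} dict."""
    resp = ssm_client.get_parameters(Names=names, WithDecryption=True)
    if resp.get("InvalidParameters"):
        # Fail fast at start-up rather than at first use of a missing secret.
        raise KeyError(f"missing parameters: {resp['InvalidParameters']}")
    return {p["Name"]: p["Value"] for p in resp["Parameters"]}


class StubSSM:
    # Stand-in for a boto3 SSM client so the sketch is self-contained.
    def get_parameters(self, Names, WithDecryption):
        return {
            "Parameters": [{"Name": n, "Value": f"secret-for-{n}"} for n in Names],
            "InvalidParameters": [],
        }


secrets = load_secrets(StubSSM(), ["/app/db_password"])
```

With a real `boto3.client("ssm")`, `WithDecryption=True` makes SSM use the associated KMS key to decrypt SecureString values, so the task role needs both `ssm:GetParameters` and `kms:Decrypt` on that key.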
AWS Key Management Service (AWS KMS) customer master keys (CMKs) were used to encrypt data at rest.
To enhance security in this phase, the environments were separated into accounts (non-prod and prod), allowing better access control for the Scalamed team to the environments through roles and policies. VPNs were also implemented in each environment (non-prod and prod), so that access to resources such as databases were only carried out through VPN, allowing authenticity, confidentiality, and integrity of data in transit.

Step 4: Building secure CI/CD pipelines
We used AWS EC2 instances to run complex CI/CD pipelines using spot instances, optimising steps such as database migration and automated tests running in parallel steps via Gitlab. Hundreds of pipelines are triggered daily at minimal operational cost. Moreover, this reduced the number of production incidents, increased their current test capacity, and enhanced security while running the pipeline in a private instance, avoiding public or shared instances.
DNX uses its own runners to execute the pipelines. In summary, instances are created in AWS to execute the pipelines without needing to configure secrets within the CI/CD SaaS platform. The instances created for this purpose already carry the specific policies and roles to execute pipelines with only the necessary permissions, without exposing pipeline execution to third-party runners.

AWS stack:
- AWS Identity and Access Management (IAM)
- AWS Key Management Service (AWS KMS)
- Network ACLs + Security Groups
- AWS Systems Manager
- AWS CloudTrail
- AWS Organisations Service Control Policy
- AWS Secrets Manager
- Amazon CloudWatch
- AWS CloudWatch Events
- Amazon GuardDuty
- AWS Certificate Manager (ACM)
- AWS Single Sign-On
- AWS Consolidated Billing
Step 5: Knowledge Transfer
DNX works closely with companies to spread the AWS Well-Architected Framework pillars, bring teams together, and focus on delivery. As part of the DNX Transformation Journey, a showcase was delivered at the end of the project to upskill the Scalamed team on what was delivered.
Conclusion
From conception to conclusion, the migration project from Heroku to AWS was completed in approximately one month. Scalamed now has an environment that is both HIPAA-compliant and Well-Architected. To address the first challenge, the critical issues identified in the earlier assessment (under the security and reliability pillars) were fixed while delivering a resilient, secure, and reliable foundation.
The new Docker+AWS environment implementation allowed Scalamed to improve performance and efficacy as compared to their previous Heroku environment. Their production quality and their ability to release more products frequently have increased. Furthermore, developer and QA productivity has improved significantly.
Building a HIPAA-compliant environment, improving the security of application components, automating security components and CI/CD, and applying AWS cloud-based products have enhanced the environment that hosts customer data. This enables the Scalamed team to focus on delivering Dr Tal Rapske’s passion: to reorient healthcare towards the patient and empower patients with their data seamlessly, while addressing the quadruple aim of health – improved health outcomes, reduced cost, improved patient experience, and reduced paperwork for providers.
Verifier: Building a compliant CDR data environment and supporting Verifier on their Accredited Data Recipient journey
Posted by DNX

Introduction to CDR Australia
Consumer Data Right (CDR) is well underway across Australia. Under the CDR framework, Data Holders (Australian banks and credit unions) will enable consented sharing of consumers’ data through standardised open Application Programming Interfaces (APIs) with Accredited Data Recipients (ADRs). By streamlining and securing the transfer of personal data, the CDR framework will completely transform the way that consumers interact with financial services in numerous sectors. With a variety of customer-centric use cases already proven in the United Kingdom and across Europe, Australian consumers now stand to benefit immensely from this next stage in the democratisation of data.
About Verifier
Verifier is a leading-edge, multi-source, multi-access-method solution for consumer data sharing, using the secure courier method of accessing data. This approach respects the information security and privacy needs of both consumers and income data providers, while delivering ease of online processing for lenders.
Verifier has been a pioneer in consent-driven data sharing even before the rollout of CDR, providing frictionless proof of income across the Australian market using the existing data access rights within Australia’s Privacy Act (Privacy Principle 12), and has been working towards ADR status for some time. With privacy-by-design at its heart, the Regtech leader has advocated for a non-screen-scraping approach to accessing consumer data since 2014 and has actively supported the ACCC since CDR inception, becoming one of the 10 ADRs to test the framework with the four big banks. Verifier’s CEO Lisa Schutz also serves on the Consumer Data Right Advisory Committees that support standards setting in the banking and energy sectors.
The Business Challenge
To receive consumer data directly from other financial institutions through the CDR regime, an organisation must become an Accredited Data Recipient (ADR). ADRs must meet rigorous, ongoing regulatory requirements (in areas such as consent management, infrastructure, and compliance reporting) to achieve and maintain their accreditation.
When planning their participation in CDR, Verifier recognised a need for a highly automated solution that would allow the company to scale up their offerings, and constantly adapt to the evolving standards of CDR. Rather than burden their own software engineers, who were instead building Verifier’s own use cases, they turned to a community of experienced partners to help them prepare their business for becoming an ADR.
In October 2019, Verifier selected Adatree and its single API Data Recipient platform to be the CDR ‘rails’ to help them access CDR data. DNX was engaged to act as a key support for their DevOps team in tailoring a compliant CDR data environment, which is a key requirement for accreditation.
The DNX Solution
Gaining accreditation in Australia is a complex challenge, so working with experienced partners is a worthy consideration to accelerate the CDR journey. To build and deliver a compliant CDR environment, DNX (in addition to its work with Adatree) worked closely with Trend Micro Cloud One Conformity, which provides tools for mapping and automating regulatory controls, and with RSM as the auditor responsible for assessing the compliance of what DNX delivered.
DNX.One Foundation
We started by assessing the existing Verifier infrastructure against the five pillars of the AWS Well-Architected Framework. This enabled DNX to understand the customer’s environment and identify gaps against best practices, then provide a remediation plan and roadmap to resolve issues across Security, Operational Excellence, Performance Efficiency, Cost Optimisation, and Reliability.

The following illustrates an example of the IAM topology implemented for Verifier. As AWS IAM policies are version-controlled and securely managed, accomplishing high-standard compliance with the CDR was possible. Access to AWS accounts is role-based: users assume one or multiple roles across accounts and environments.
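Role-based access of this kind is typically implemented with AWS STS: a user or pipeline assumes a role in the target account and receives short-lived credentials. A hedged sketch of that exchange (the client is injected so the example runs without AWS; the role ARN and session name are hypothetical):

```python
def assume_role(sts_client, role_arn, session_name):
    """Exchange the caller's identity for temporary credentials in another account."""
    resp = sts_client.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
    creds = resp["Credentials"]
    # Keys named so they can be passed straight to a boto3 Session.
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }


class StubSTS:
    # Stand-in for a boto3 STS client so the sketch is self-contained.
    def assume_role(self, RoleArn, RoleSessionName):
        return {"Credentials": {
            "AccessKeyId": "AKIAEXAMPLE",
            "SecretAccessKey": "secret",
            "SessionToken": "token",
        }}


creds = assume_role(StubSTS(), "arn:aws:iam::123456789012:role/ops", "audit-session")
```

Because the credentials expire automatically and every `AssumeRole` call is recorded in CloudTrail, this pattern gives auditors a clear trail of who accessed which account and when.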

Networking delivered using security best practices for VPC, plus the extra ‘DNX layer’ of security, is another advantage of DNX.One. Multiple Availability Zones, security groups and network ACLs, IAM policies to control access, and tools to monitor VPC components and VPC connections are defaults in DNX.One and were automatically deployed to Verifier’s infrastructure. A dedicated, isolated subnet for the database and file system was included to enhance the security of the networking infrastructure; policies, permissions, and flow controls govern access to sensitive data.

Another DNX.One best practice implemented for the Verifier environment was account management and separation. This practice isolates production workloads from development, test, and shared-services workloads, and provides a strong logical boundary between workloads that process data of different sensitivity levels, as defined by CDR requirements. Granular access control defines who can access each workload and what they can do with that access. In addition, it allows Verifier to set guardrails as its workloads grow.

CDR Deployment + Cloud Conformity Remediation
Once the foundation was prepared, we began deploying the CDR environment and running the Trend Micro Cloud One Conformity tool to enable automated security and compliance checks of the infrastructure. This let the DNX team identify which items DNX.One did not yet cover, focusing on building or fixing them to meet the required technical security standards.
It is worth noting that every new requirement was implemented or remediated in our DNX.One Foundation. The foundation has been improved and developed through ‘tried and tested’ applications, and this evolution is enabling companies to accelerate their journey to building CDR-compliant infrastructure.
The following are core security aspects of the DNX CDR infrastructure environment (among others):
- Networking (private networking, stateless and stateful firewalls, networking logs)
- Encryption (at rest and transit with dedicated customer keys and rotation policy)
- IAM (least privilege, SSO)
- Compute protection; and
- Incident response: anomaly detection, continuous compliance mechanisms, and alerting.
Some of the AWS Services provisioned
Conclusion
DNX achieved great outcomes working with Verifier, building a Well-Architected, Cloud Conformity-checked AWS environment compliant with the CDR. This effectively accelerated the audit process for Verifier, since the DNX.One foundation already in place made it automatically compliant with many CDR requirements, while security, reliability, operational excellence, performance efficiency, and cost optimisation were implemented using Infrastructure as Code (IaC). Cost optimisation was further enhanced, with new benefits being prepared for the future. Verifier is now primed to participate in the CDR environment sooner, more dynamically, and in a more compliant manner.

“DNX supported Verifier’s DevOps team in tailoring our CDR data environment.”
Brighte Capital restructures its AWS organisations, improves security, and achieves a 50-60% cost reduction.
Posted by DNX

About Brighte
Brighte Capital is a rapidly growing Australian FinTech founded in 2015, making solar, battery, and home improvements affordable for Aussies all over the country.
Its mission is to make every home sustainable, offering Aussie families affordable access to sustainable energy solutions through an easy payment platform.
The company offers financing and zero-interest payment solutions for the installation of solar panels, batteries, air conditioning, and lighting equipment.
The process is simple and fast, all managed via Brighte’s website or smartphone app. Once your application is approved, you get access to highly vetted vendors offering interest-free products. Brighte recently received the Finder Green Awards 2021 in the category of Green Lender of the Year, an incredible achievement that recognises and solidifies its position in the Australian market.
As a company operating in both the Energy Industry and Financial Services Industry, Brighte must comply with numerous standards, rules, and regulations highlighting operations, security, and data protection as key topics. Australian Privacy Principles, Anti-Money Laundering and Counter-Terrorism Financing Act 2006, and National Consumer Credit Protection Act 2009 are just some examples.
But as a customer-centric company, Brighte goes beyond mere compliance requirements. Transparency and making life easier are two of its most important values, so Brighte is alert to other factors which can bring damage to their clients, well beyond compulsory minimum standards.
The Business Challenge: consolidate and improve the core digital platform architecture while prioritising security
Brighte’s business model is impressive and there has been considerable investment in a robust digital platform to support the different areas of the company. There is substantial technology in-place behind the scenes, with the business headed by a dedicated team of professionals with diverse backgrounds and skills, all contributing to a strong work culture.
As a relatively young company, Brighte has experienced exponential growth. Even with best practices in-place, it was difficult to continually manage or upgrade the various IT solutions the business was using.
Most of Brighte’s applications were developed in-house and based on a range of different programming languages and technologies. While its infrastructure was hosted on AWS, a different service was being used to support each application, causing issues around ease of management, knowledge retention, and sharing. On top of that, the increased attack surface and reliance on manual interactions needed to be addressed in order to retain and improve security.
Brighte needed to revamp its landscape and reevaluate the current architecture of its core digital platform. The business reached out to DNX, seeking a solution that would improve its cloud strategy, apply DevOps best practices, reduce infrastructure operational overheads, and achieve overall cost optimisation. However, given the compliance demands of financial services, these improvements needed to go hand-in-hand with security. DNX therefore understood that the challenge was to deliver those improvements while prioritising security.
The DNX Solution: infrastructure, pipelines, AWS Stack, deliverables, project, UI, frontend + backend
Prior to project kick-off, DNX began a discovery phase to maximise the information collected about the challenges faced by Brighte’s team. A Well-Architected Framework Review was conducted to identify risks and opportunities against the operational excellence, security, reliability, performance efficiency, and cost optimisation pillars. This enabled DNX to maintain focus on the most important priorities, such as security and operational excellence, while the team followed the DevOps Transformation guidelines to draft a plan for the required changes, working towards continuous innovation during the course of the project.

Comparing best practices enables the team to identify new opportunities and highlight concerns that may not be apparent at the beginning.
From an infrastructure perspective, DNX recognised that Brighte needed to improve control over its AWS resources using IaC (Infrastructure as Code) and restructure its AWS organisation and accounts strategy.
To achieve this, DNX suggested its DNX.One Well-Architected Foundation (aka DNX.One) to provide the following benefits:
- New structure of AWS organisation following the best practices in the market.
- Ability to manage all infrastructure resources across all of their AWS accounts based on Terraform and CI/CD pipelines.
- Designed for AWS with Well-Architected principles
It is important to mention that DNX.One is a ready-to-go solution that solves the most common business needs around cloud infrastructure: it fits different application architectures (including containers), offers flexibility and automation for distinct platforms, and enhances management to keep the business under control.
An extra layer of high-level security best practices, applied to the architecture by default, guarantees continuous security at any stage. It ensures that whatever challenges customers need to tackle, they will do so in a secure way.

From the applications point of view, DNX identified Brighte was using different types of AWS services to deploy their applications, including ElasticBeanstalk, ECS with Fargate, and EC2 instances.
Having these different types of application deployments is expensive, as the company needs multiple operational processes to manage the environment. It is also less secure, because no single consistent security model is applied, effectively introducing risk.
With its Application Modernisation strategy, DNX suggested containerising the client’s main applications and deploying them via ECS with spot instances. This change would substantially reduce Brighte’s costs, create a pattern for new applications that future business growth may require, and improve security: with a single, consistent deployment pathway, more of the security burden shifts to AWS under the Shared Responsibility Model, making security simpler.
The CI/CD pipeline strategy was also evaluated, and Brighte’s team demonstrated a willingness to adopt solutions that would reduce the complexity of managing new deployments and provide faster response times when deploying new applications into their landscape.
Key Project Phases:
Cloud Foundation (aka AWS Foundation)
With our automated solutions based on Terraform (IaC), DNX restructured Brighte’s AWS resources such as AWS organisation, accounts, network, domains, VPN, and all the security controls for account access via SSO using Azure AD as their Identity Provider.
Building a strong and secure foundation for Brighte’s applications was a critical first step prior to modernisation. With a multi-AZ strategy with ECS nodes running on spot instances deployed in their environments, Brighte was able to run a cluster of Docker containers across availability zones and EC2 instances, while optimising costs and simplifying the security operating model.

Security:
Although security is considered and addressed at many stages, and several cloud technologies have been put in place to protect data, systems, and assets following best-practice guidance, some AWS services deserve to be highlighted.
Amazon CloudWatch
The logs from all systems, applications, and AWS services have been centralised in the highly scalable AWS CloudWatch service. It allows easy visualisation and filtering based on specific fields, or archiving them securely for future analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time, and you can query and sort them based on other dimensions, group them by specific fields, create custom computations with a powerful query language, and visualise log data in dashboards.
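Conceptually, this "single flow of events ordered by time, regardless of source" behaviour can be sketched with a simple stream merge (the log data below is hypothetical, for illustration only):

```python
import heapq

# Hypothetical per-source log streams, each already sorted by timestamp:
# (timestamp, source, message)
app_logs = [(1, "app", "request received"), (5, "app", "request completed")]
ecs_logs = [(2, "ecs", "task started"), (4, "ecs", "health check passed")]
rds_logs = [(3, "rds", "slow query logged")]

# Merge the per-source streams into one flow ordered by time,
# the way CloudWatch Logs presents events regardless of their source.
merged = list(heapq.merge(app_logs, ecs_logs, rds_logs))

for ts, source, message in merged:
    print(f"t={ts} [{source}] {message}")
```

In practice CloudWatch Logs Insights performs this consolidation (and the grouping, computation, and dashboarding mentioned above) server-side; the sketch only illustrates the ordering guarantee.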
AWS CloudTrail
All AWS events are reported to a centralised CloudTrail and exported to an S3 bucket in an Audit account.
AWS Organisations
The setup of new accounts has been automated, with service control policies (SCPs) applying permission guardrails at the organisation level.
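As a sketch of how such a guardrail works, here is a hypothetical SCP (expressed as a Python dict for illustration) alongside a simplified evaluation check; the specific denied actions are examples, not Brighte's actual policy:

```python
# A hypothetical service control policy (SCP): it denies member accounts
# the ability to leave the organisation or stop CloudTrail logging,
# regardless of the IAM permissions granted inside the account.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "organizations:LeaveOrganization",
                "cloudtrail:StopLogging",
            ],
            "Resource": "*",
        }
    ],
}

def is_denied_by_scp(policy: dict, action: str) -> bool:
    """Simplified check: does any Deny statement cover this action?"""
    return any(
        stmt["Effect"] == "Deny" and action in stmt["Action"]
        for stmt in policy["Statement"]
    )
```

Real SCP evaluation also handles wildcards and conditions, but the principle is the same: a Deny at the organisation level overrides any Allow inside the account.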
Amazon GuardDuty:
DNX implemented centralised GuardDuty to detect unexpected behaviour in API calls. Amazon GuardDuty alerts when unexpected and potentially unauthorised or malicious activity occurs within the AWS accounts.
DNX has helped Brighte to strengthen its workload security along with a number of other relevant AWS resources, such as Amazon CloudFront, ECR image scanning, AWS IAM identity providers, VPC endpoints, AWS WAF, and AWS Systems Manager Parameter Store.
Cost savings:
There were three main cost optimisation drivers used for this project. The combined use of these three strategies brought savings in the order of 60%, compared with the same workloads on the previous environment, while allowing Brighte to use several new resources delivering more value with less cost to its clients.
- Using ECS clusters with EC2 Spot Instances: Spot instances are unused AWS capacity available for a fraction of the normal On-Demand price. Spot instances can be reclaimed by AWS when spare capacity runs out, so DNX uses an auto-scaling model spanning several instance types to ensure availability while saving around 70% compared with On-Demand. For instance, an On-Demand t3.xlarge instance costs $0.2112 per hour while the same Spot instance costs $0.0634.
- Reserved Instances for databases: As the databases are stable and their use can be predicted over a long duration, AWS allows us to reserve a DB instance for one or three years, with monthly or upfront payments, charging a discounted hourly rate that saves from 30% to 60%, according to the chosen plan.
- Automatic scheduler for turning resources on and off according to a usage calendar: For development and testing environments, which are not meant to be used on a 24/7 basis, Brighte can easily schedule when these environments are available to the teams and when they should be turned off (scaled to zero), saving around 50% compared to a full-time available environment. The scheduler mechanism allows the resources to be brought up at any desired time, bypassing the default calendar, in an easy-to-use way.
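The spot saving implied by the t3.xlarge prices quoted above can be checked with a couple of lines (the prices are examples only; actual Spot prices vary by region and over time):

```python
# Example On-Demand vs Spot hourly prices for a t3.xlarge instance,
# as quoted in the case study (prices vary by region and over time).
on_demand = 0.2112  # $/hour, On-Demand
spot = 0.0634       # $/hour, Spot

# Fraction saved by running the same instance type on Spot capacity.
savings = 1 - spot / on_demand
print(f"Spot saving vs On-Demand: {savings:.0%}")  # roughly 70%
```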
Application Modernisation:
Brighte had a good set of applications based on different technologies deployed across multiple AWS services. During this phase, the DNX team focused on the refactoring of the main applications to deploy the content via Docker containers and subsequently make use of ECS with spot instances.
They had previously adopted some of the 12-factor principles, but needed to improve their control over sensitive data and credentials. DNX proposed the use of AWS Systems Manager Parameter Store and adapted all the applications to follow this pattern.
A few serverless applications and static UI pages were deployed as part of this phase without requiring significant code refactoring. We adapted the remaining apps to the 12-factor app methodology and made use of our CI/CD pipeline strategy.
Each environment in AWS was made identical, varying only in the EC2 instance types used in each environment (dev, uat, production). The same immutable application image was deployed and tested across these environments. By adopting this approach, Brighte has improved its operational resilience, reducing production incidents to zero through its self-healing platform.
Logs:
Due to the high volume of logs, Brighte was using the ELK stack (Elasticsearch, Logstash, and Kibana) in legacy accounts to aggregate all of its application logs and avoid losing data during the process. The solution was working well, but since it is not a fully managed service, the operational overhead was significant.
DNX suggested replacing Logstash with Kinesis Data Firehose and CloudWatch Logs subscriptions to send the data directly to the Elasticsearch cluster. This way, Brighte was able to avoid dedicating resources to managing the solution and take advantage of the automatic transfer of logs between the applications, CloudWatch, and Elasticsearch.
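One detail of this pattern worth knowing: CloudWatch Logs delivers subscription data in base64-encoded, gzip-compressed form, so any processor sitting between Firehose and the index (for example a hypothetical Lambda transform) has to decode it first. A minimal round-trip sketch, with a sample payload shaped like a CloudWatch subscription record:

```python
import base64
import gzip
import json

def decode_cw_subscription_record(data: str) -> dict:
    """Decode a CloudWatch Logs subscription payload:
    base64 -> gunzip -> JSON."""
    return json.loads(gzip.decompress(base64.b64decode(data)))

# Sample payload in the CloudWatch Logs subscription shape (hypothetical
# log group and event, for illustration only).
sample = {
    "logGroup": "/ecs/app",
    "logEvents": [{"timestamp": 1650000000000, "message": "request ok"}],
}

# Simulate what CloudWatch Logs produces, then decode it back.
encoded = base64.b64encode(gzip.compress(json.dumps(sample).encode())).decode()
decoded = decode_cw_subscription_record(encoded)
```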

CI/CD pipeline:
Brighte was using Bitbucket as the provider for its application pipelines. DNX adjusted the pipeline strategy, reducing the complexity of deployments across different environments, and included tools to automate the replacement of data used in automated tests via AWS Systems Manager Parameter Store. In addition, the Bitbucket pipelines have been integrated with AWS using OpenID Connect (OIDC). As a result, there is no need to create AWS IAM users or manage AWS keys to access AWS resources. This strategy improved security and removed all sensitive data from Brighte’s codebase.


Databases:
The databases were already deployed in RDS prior to this project, but DNX increased security by encrypting all of the database workloads and improved redundancy by activating a Multi-AZ strategy during the database migration phase. The databases were also placed in dedicated, isolated subnets which accept incoming traffic only from private subnets: the network ACLs restrict inbound traffic to specific private subnet CIDR ranges, and the RDS security groups allow inbound traffic only from ECS instances.
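The network ACL side of this layering can be sketched with Python's `ipaddress` module (the CIDR ranges below are hypothetical; the real subnets will differ):

```python
import ipaddress

# Hypothetical CIDR ranges illustrating the layered isolation: the database
# subnet accepts traffic originating in the private (ECS) subnets only,
# never from the public DMZ.
private_subnets = ["10.0.16.0/20", "10.0.32.0/20"]  # ECS tasks live here
public_subnet = "10.0.0.0/20"                       # load balancers / DMZ

def allowed_to_reach_db(source_ip: str) -> bool:
    """Mimics the NACL rule: inbound allowed only from private subnet CIDRs."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in private_subnets)
```

On top of this subnet-level rule, the security groups add a second, instance-level filter (only traffic from the ECS instances' security group), so a single misconfiguration never exposes the databases.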

Conclusion
From conception to its conclusion, the project was completed in approximately five months, with the restructure of AWS accounts, infrastructure resources, and a total of 15 applications migrated to the new AWS environments.
The applications perform consistently, backed by cluster auto-scaling and without risk of downtime thanks to the redundancy and self-healing strategies delivered by DNX products. Infrastructure and application deployment operational overhead has reduced significantly, which is reflected directly in Brighte’s ability to release products more frequently.
With the new pattern adopted across all applications and the use of ECS clusters with spot instances, Brighte has achieved a cost reduction of 50-60% – an outstanding result for such a large set of applications and infrastructure resources used by its digital platform.
Finally, the highly secure foundation helped Brighte reduce operational costs through security and best practices: with complexity going down, Brighte spends less operating the platform and is now able to move faster and more safely.

“DNX are being an absolute enabler for our business with their “can do” attitude and attention to detail. Relentless battling with our neglected ecosystem and transferring knowledge every step of the way.”
At DNX Brasil, we work to bring a better cloud and application experience to digital-native companies. We focus on AWS, Well-Architected solutions, containers, ECS, Kubernetes, continuous integration/continuous delivery, and service mesh. We are always looking for professionals experienced in cloud computing for our team, with a focus on cloud-native concepts. Check out our open-source projects at https://github.com/DNXLabs and follow us on Twitter, LinkedIn or YouTube.
Agyle Time: Protecting customer data while reducing TCO and computing costs
-
Posted by
DNX

About Agyle Time
Agyle Time simplifies Workforce Management, ensuring cost optimisation of your resources and allowing you to better schedule to actual workload, manage costs, and improve customer satisfaction. Agyle Time uses a modern development approach with cloud technologies to engage teams and their customers with a secure and go-anywhere platform that takes just minutes to set up.
The Business Challenge
Agyle Time’s SaaS platform and its connectors are dynamic and fit different customers’ needs. However, tenant isolation along with their individual data was crucial and a mandatory requirement for large customers. In addition, due to the increase of demo requests and new tenants coming on board, building automation that delivers security was vital to keep innovating and delivering the best to Agyle Time’s users while protecting sensitive data.
Cloud security services are critical for customer success in the cloud space. Data protection has become more important than ever, and every company needs high-level encryption capabilities for sensitive data, as customers expect compliance and need governance, risk management, and reporting.
DNX was engaged to elaborate and implement their new cloud operations, taking into consideration the AWS Well-Architected pillars:
- Operational Excellence
- Security
- Reliability
- Performance Efficiency
- Cost Optimisation
The Solution
Multiple perspectives should be considered while architecting automation for a SaaS arrangement like Agyle Time’s. Aspects like cross-tenant access prevention, data protection, and tenant isolation are essential.
For a SaaS environment, these considerations extend beyond deployment configurations to data encryption and security controls. This allows Agyle Time to ensure tenant isolation by encrypting data in transit between services and at rest in the database and Amazon S3. Using Terraform also allowed Agyle Time to quickly automate its key management infrastructure, allowing employees to set up accounts for the system instantly with no third-party involvement or risk of misconfiguration.
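One way to picture tenant isolation at the key-management level is a tenant-scoped key lookup: each tenant's data is encrypted under its own key, and a request scoped to one tenant can never resolve another tenant's key. A minimal sketch (the tenant IDs and key ARNs are hypothetical):

```python
# Hypothetical per-tenant KMS key mapping: one key per tenant means data
# encrypted for tenant A is cryptographically unreadable in tenant B's context.
tenant_keys = {
    "tenant-a": "arn:aws:kms:ap-southeast-2:111111111111:key/aaaa",
    "tenant-b": "arn:aws:kms:ap-southeast-2:111111111111:key/bbbb",
}

def key_for(caller_tenant: str, requested_tenant: str) -> str:
    # Cross-tenant prevention: the caller's tenant context must match
    # the tenant whose data is being accessed.
    if caller_tenant != requested_tenant:
        raise PermissionError("cross-tenant access denied")
    return tenant_keys[requested_tenant]
```

In the real platform this boundary is enforced by IAM and KMS key policies rather than application code; the sketch only shows the invariant being enforced.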
Using Buildkite for self-hosted CI/CD pipelines, DNX implemented automation in the CI/CD tool, improving the security layer of the deployment process. For better pipeline control, we decided to use self-hosted runners with a custom hardware configuration, which gives us better control over the builds.
It is feasible to verify that secure code is deployed through CI/CD by imposing certain checks at build time and deployment time. We have been able to enforce these checks with little effort because we are using Buildkite. To implement this security check, DNX used a number of plugins together with Buildkite.
The first step to an automated security architecture is to understand the kind of threats you need to protect against. Threat modelling is a technique for identifying and classifying threats that could impact your operations. It’s important to remember that any threat you document in this process is only one possible scenario out of many, but documenting it helps you better prepare yourself for how to handle it. It’s also not essential that you identify every threat, as long as you understand the general types of threats that are possible in your environment.
Going one step further, DNX has implemented a security plugin that takes care of the authentication process in Buildkite. This plugin adds some new functionalities to ensure that only authorized and authenticated users can access the CI/CD pipeline data.
The results were an automated data pipeline that brought the benefits of IaC to Agyle Time’s managed service. Each tenant’s data is isolated from the rest of Agyle Time, making it possible to enforce their multi-tenant architecture and hosting strategy using Terraform. The pipeline also allows each tenant to manage their own key infrastructure, removing any single point of failure in the account creation process.






Images regarding Buildkite demo
DNX.One Foundation
We started by assessing the existing Agyle Time infrastructure against the five pillars of the AWS Well-Architected Framework. This enables DNX Solutions to understand a customer’s environment and identify gaps against best practices, then provide a remediation plan and roadmap to resolve issues based on Security, Operational Excellence, Performance Efficiency, Cost Optimisation, and Reliability.
With a thorough understanding of the infrastructure issues, DNX delivered the DNX.One Well-Architected Foundation (aka DNX.One) – an automated platform built with simplicity in mind, Infrastructure as Code (IaC), and open-source technologies, designed for AWS with Well-Architected principles. The platform is built on reference architectures and continuous assurance testing against regulatory audits and analytics, removing many of the regulatory and compliance hurdles an organisation faces throughout its lifecycle.
The following illustrates an example of the IAM topology implemented for Agyle Time. As AWS IAM policies are controlled and securely managed, accomplishing high standard compliance was possible. The access to AWS accounts is role-based, where users assume multiple roles across accounts and environments.

Delivering networking using security best practices for VPCs, plus the extra ‘DNX layer’ of protection, is another advantage of DNX.One. Multiple Availability Zones, security groups and network ACLs, IAM policies to control access, and tools to monitor VPC components and connections are the defaults for DNX.One and were automatically deployed to the infrastructure. In addition, dedicated, isolated subnets for the database and file system were used to enhance the security of the networking infrastructure, with policies, permissions, and flow controls governing access to sensitive data.

Another DNX.One best practice implemented for the customer was account management and separation. This practice isolates production workloads from development, test, and shared services workloads and also provides a robust logical boundary between workloads that process data of different sensitivity levels. The granular access control determines who can access each workload and what they can do with that access. In addition, it allows the customer to set guardrails as its workloads grow.

Some of the AWS Services provisioned:
Business Outcome
One of the most important topics around CI/CD pipelines is security. With public runners, provided by the pipeline tool, we cannot control, or even know, whether our builds are running in an isolated environment or sharing resources with several other customers. By bringing the runners in-house, we have a stable and secure environment that enables the customer to run all application builds and deployments in isolated workspaces, all wrapped in the DNX.One Foundation, bringing more control and confidence to the customer. Now, Agyle Time’s team can deploy releases for current and new customers automatically in a secure, elastic, and highly available way on AWS, and their customers can take advantage of the workforce management platform with no data concerns.
Perx Health: Automated global deployments on AWS with HIPAA Best Practices
-
Posted by
DNX

About Perx Health
Perx Health is pioneering a motivational health community made for everyone. It uses leading-edge behavioural science, an understanding of consumer tactics, and technology to assist and motivate people living with chronic conditions to stick to their treatment plans. Notably, Perx has already helped increase engagement with thousands of patients, improving their adherence and achieving better health outcomes. Their goal is a future where managing a chronic condition can be simple, exciting, and rewarding.
The Business Challenge
Already running healthcare solutions on AWS, Perx Health aimed to leverage an elaborated multi-region automated deployment strategy in a HIPAA compliant way, requiring a move from higher-level AWS services like Elastic Beanstalk to services with more operational control. Achieving this target without adding infrastructure operations overhead was crucial to maintain a collaborative, innovative and flexible environment for the development team. Security of all data was of primary concern to Perx Health and this became a major focus of the solution delivered. Another challenge was to identify opportunities for cost reduction while running the application in the new environment.
To meet these challenges, DNX Solutions was heavily involved in the new architecture solution. Together, we evolved the platform to container-based orchestration, pushing stateless applications through CI/CD pipelines along with Infrastructure as Code (IaC) using Terraform. Management and governance solutions allow Perx Health to meet security and compliance standards while taking advantage of the AWS Shared Responsibility Model, especially for security and operations.
The Solution
We started assessing the existing infrastructure using HIPAA Best Practices and our DevOps Transformation guidelines. The project started by deploying our DNX Well-Architected AWS foundation, also called DNX.One, which implements operational excellence, security, reliability, performance efficiency, and cost optimisation using Infrastructure as Code, so that applications can thrive, while the business can remain focused on customer solutions.
With minimum infrastructure operations in mind, Elastic Container Service on AWS was the service of choice for the application modernisation strategy. It is important to mention that DNX used spot instances for the ECS cluster, focusing on availability while reducing AWS costs.
As security and privacy were of paramount importance to Perx Health, we developed systems to ensure production data was well secured from development workloads and that access was only possible via a secure VPN to a secure subnet in their VPCs that is not accessible from the public internet. Additionally, high levels of security best practices were enabled during the Foundation stage, including a separate audit-only account, centralised CloudTrail, AWS Config, Amazon GuardDuty, and AWS KMS.

Taking the blue-green deployment approach in a multi-region environment, we automated existing database migrations and deployments that were previously manual processes, providing the team confidence to release new features that can be easily tested in a prod-like environment before every deployment.

Perx Health also required an analytics solution to manage its multi-region environment. Using Terraform to manage Infrastructure as Code (IaC) enabled simple provisioning of a Data Warehouse cluster, which was essential to bring automation, security, and information management and control.
Data Overview

CI/CD Pipelines
Previously, deployments were semi-manual: the team used a third-party deployment tool, and releases required short amounts of downtime. DNX used the current host’s CI/CD tool to provide the best pipeline architecture for deploying to multiple environments and regions with maximum flexibility and confidence, while ensuring zero-downtime deployments.
As security is a critical topic, DNX ensured that security controls were built into the pipeline as part of the DNX.One Foundation. An IAM role created specifically for CI/CD is used to deploy Perx’s applications. Discover more on our GitHub.

ECR – Docker image scanning
To avoid releasing a Docker image with major vulnerabilities, DNX implemented image scanning for Perx’s deployments.
In Bitbucket, a step was added prior to deployment. This step checks the ECR scan report created for that image tag, and if it contains critical-level vulnerabilities, the deployment of that image is prevented.
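The gate logic itself is simple; a minimal sketch (the report shape and CVE names below are hypothetical, standing in for the findings that ECR's scan report returns):

```python
# Gate step sketch: fail the deployment when the ECR scan report for the
# image tag contains any CRITICAL-severity finding.
def image_passes_scan(findings: list[dict]) -> bool:
    return not any(f["severity"] == "CRITICAL" for f in findings)

# Hypothetical scan report for an image tag.
report = [
    {"name": "CVE-2023-0001", "severity": "HIGH"},
    {"name": "CVE-2023-0002", "severity": "CRITICAL"},
]

if not image_passes_scan(report):
    print("Deployment blocked: critical vulnerabilities found")
```

In the actual pipeline the findings would come from the ECR image-scan report for the tag being deployed; the policy (block on CRITICAL, allow lower severities through) is what the sketch captures.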

To ensure compliance, each container is scanned for vulnerability using ECR in the pipeline.
Read this article to learn more: AWS ECR — Improving container security by using Docker image scanning
Some of the AWS Services provisioned:
Conclusion
Perx Health’s project was highly collaborative and ultimately delivered beyond expectation. With an engaged and helpful development team working together with DNX, we built a resilient, secure, and reliable AWS platform for Perx Health’s applications. Now the team is able to focus on what they do best: using leading-edge behavioural science, consumer tactics, and technology to help and motivate people living with chronic conditions to better adhere to their treatment plans, on a HIPAA compliant platform with automated deployments. Using spot instances for Elastic Container Service (ECS) has generated an average cost reduction of 50%.
With modern and efficient DevOps-oriented practices, Perx Health can test and release new features to the market, faster. Reducing operational constraints on AWS, the new platform is prepared for a global HIPAA compliant strategy.