Tag Archives: Cloud Tech Insights
The security benefits AWS can provide for an insurtech startup

In the insurtech sector, AWS (Amazon Web Services) can deliver a range of security benefits through its cloud solutions. Startups in this space handle confidential customer information and need to ensure that their data is secure and protected against external threats. AWS can be a very valuable ally in this process.
Below, we describe some of the main security benefits AWS can provide to an insurtech startup:
Protection against DDoS attacks
AWS offers protection against DDoS (Distributed Denial of Service) attacks, one of the most common threats faced by technology companies. These attacks can take down a company's service or website, making it inaccessible to customers. With AWS protection, these startups gain an extra layer of security for their network infrastructure.
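As an illustration only: AWS Shield Standard is applied automatically at no extra cost, while Shield Advanced protections can be attached to specific resources. The boto3 sketch below assumes a Shield Advanced subscription is already active on the account, and the load balancer ARN is a placeholder, not a real resource:
import boto3

shield = boto3.client("shield")

# Attach Shield Advanced protection to an internet-facing resource.
# Requires an active Shield Advanced subscription; the ARN below is hypothetical.
response = shield.create_protection(
    Name="public-alb-protection",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef",
)
print("ProtectionId:", response["ProtectionId"])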
Cloud data storage
AWS offers a secure and scalable cloud storage solution. This means insurtech startups can store their confidential data in the AWS cloud, which is protected by a range of security features such as end-to-end encryption and security certifications. The ability to quickly scale resources to meet demand is another major benefit.
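As a small, hedged example of what this looks like in practice, the boto3 sketch below creates a bucket (the name and region are illustrative), enforces default server-side encryption and blocks public access:
import boto3

s3 = boto3.client("s3", region_name="sa-east-1")
bucket = "insurtech-customer-data-example"  # hypothetical bucket name

# Create the bucket in the chosen region.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "sa-east-1"},
)

# Enforce default server-side encryption for every object written to the bucket.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Block all forms of public access.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)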
Comprehensive access control
Through AWS, insurtech startups get much broader and more detailed access control than with other cloud providers. It is possible to define access for users and user groups, along with access rules and permissions customised for each user. This helps minimise the risk of improper access to stored data.
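The sketch below is a hedged illustration of this idea using IAM with boto3: it creates a group and attaches a least-privilege policy that only allows reading objects from a single bucket (the group, policy and bucket names are hypothetical):
import json
import boto3

iam = boto3.client("iam")

# Hypothetical group for analysts who only need read access to one bucket.
iam.create_group(GroupName="claims-analysts")

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::insurtech-customer-data-example",
                "arn:aws:s3:::insurtech-customer-data-example/*",
            ],
        }
    ],
}

# Create the managed policy and attach it to the group.
policy = iam.create_policy(
    PolicyName="claims-data-read-only",
    PolicyDocument=json.dumps(read_only_policy),
)
iam.attach_group_policy(
    GroupName="claims-analysts",
    PolicyArn=policy["Policy"]["Arn"],
)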
Constant monitoring
AWS offers monitoring tools that raise alerts about potential problems or suspicious activity on the network. For example, it is possible to configure alerts to detect intrusion attempts, signs of malware or unusual traffic spikes. This way, information security professionals can investigate an issue before it becomes more serious.
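A minimal, hedged CloudWatch example of this kind of alert is sketched below with boto3; it assumes an SNS topic for notifications already exists, and both the topic ARN and the load balancer dimension are placeholders:
import boto3

cloudwatch = boto3.client("cloudwatch")

# Raise an alarm when a load balancer sees an unusual spike in requests.
cloudwatch.put_metric_alarm(
    AlarmName="alb-request-spike",
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/1234567890abcdef"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
)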
Automatic backup and disaster recovery
Automatic backup and disaster recovery are standard AWS capabilities. These measures are important to ensure that, in the event of system failures or incidents such as power outages, the company's data is not lost. They guarantee that data is copied and stored in safe locations, giving the startup peace of mind and security.
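One hedged way to set this up is with AWS Backup; the boto3 sketch below defines a simple daily plan with 35-day retention, assuming the target backup vault already exists (the plan, rule and vault names are illustrative):
import boto3

backup = boto3.client("backup")

# Daily backup rule stored in an existing backup vault.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-backups",
        "Rules": [
            {
                "RuleName": "daily-0300-utc",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 3 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)
print("BackupPlanId:", plan["BackupPlanId"])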
In summary, AWS can offer insurtech startups a range of security benefits, including protection against DDoS attacks, secure data storage, comprehensive access control, constant monitoring, automatic backup and disaster recovery. Together, these measures help ensure the company's data remains safe and sound while the team focuses on building and growing the business itself.
DNX Brasil, as an AWS Advanced Partner, offers specialised AWS cloud solutions. Our team is made up of experienced, qualified specialists who can implement and monitor your cloud environment securely and reliably. If a security incident occurs, we can respond immediately, removing the need to hire an in-house team. Get in touch with us.
DNX Brasil has the best solutions and the experience you need to drive your business forward. Contact us to get a plan for your cloud journey.
The basics of cloud migration

What is cloud migration?
The concept of cloud migration is already widespread, especially among people who use cloud storage in their day-to-day lives. So what is new about this subject?
Simply put, cloud migration is the process of moving information from an on-premise source to a cloud computing environment.
You may already be thinking about the possibility of moving all your important data and programs from your computer to a place where they are automatically copied and protected. If some kind of accident happened to your computer, or even if it were stolen, you would still be able to access your data from another computer and could update your security settings if a breach had occurred.
With greater employee mobility and company growth, storing data in the cloud makes security and business innovation easier, leads to good governance and efficiency, and prepares you for the digital future.
On a larger scale, cloud migration for businesses includes migrating data, applications, information and other business elements. It may also involve moving from an on-premises data centre to the cloud, or from one cloud platform to another.
The key benefit is that, through cloud migration, your business can host applications and data in the most effective IT environment possible, with flexible infrastructure and the capacity to scale. This improves cost savings, performance and security for your business over the long term.
Cloud migration is a transformation that will certainly lead the next steps as you plan your company's future.
What are the benefits of migrating to the cloud?
The cloud brings agility and flexibility to your business environment. As we move into a world of digital workspaces, cloud migration opens up greater opportunities for innovation, alongside faster time to delivery.
With this, businesses gain all kinds of benefits, including reduced operating costs, simplified IT, improved scalability and upgraded performance.
Complying with data privacy laws becomes much easier, and automation and AI begin to improve the speed and efficiency of your operations. One of the main outcomes of cloud migration is optimisation for nearly every part of your business.
What are the options for cloud migration?
There are six main methods used to migrate applications and databases to the cloud. Let's look at them below:
- Rehosting (“Lift-and-shift”). With this method, the application is moved to the cloud without any changes to optimise it for the new environment. This allows for a faster migration, and businesses can choose to optimise later.
- Replatforming (“Lift-tinker-and-shift”). This involves making a few optimisations rather than strictly migrating a legacy database.
- Re-purchasing. This involves buying a new product, either by transferring your software licence to an online server or by replacing it entirely with SaaS options.
- Re-architecting/Refactoring. This method involves developing applications using cloud-native features. Although initially more complex, this future-focussed method offers the greatest opportunity for optimisation.
- Retiring. In this case, applications that are no longer needed are retired, generating cost savings and operational efficiency.
- Retaining. This is an option to leave certain applications as they are, with the potential to revisit them in the future and decide whether they are worth migrating.
How much does it cost?
Migrating to the cloud requires a comprehensive strategy that takes into account the various challenges involved, such as management, technology and resources. As a result, the cost of migration can vary widely, mainly because goals and requirements differ between organisations.
Funding options may be available to your business when migrating to AWS. Carefully considering all of your options, including these opportunities, can influence your decision and the methodologies you choose to follow.
In recent years, cloud computing technologies and companies have emerged to make the migration process easier and more efficient. That is the case with DNX.
How can DNX help you with cloud migration?
DNX identifies your business needs and maps out the best path, making your migration journey simpler, faster and more cost-effective.
With a secure and fast cloud migration process, we set your business up for success from day one.
Using DNX's cloud migration expertise means migrating the right way, with all the benefits of AWS, through a unique, secure and automated foundation.
DNX makes it easy to migrate to a Well-Architected, compliant AWS environment. As part of the process, we modernise your applications so you can take advantage of cloud-native technologies. This means that, from the very beginning, your business will enjoy greater resilience, cost efficiency, scalability, security and availability.
DNX Brasil has the best solutions and the experience you need to drive your business forward. Contact us to get a plan for your cloud journey.
DNX Solutions wins two AWS Partner of the Year awards

The APN Partner Awards are granted by AWS every year to recognise partner excellence within the AWS Partner Network (APN). In November 2022, DNX Solutions was proud to be named Partner of the Year in two categories, receiving the awards at the AWS re:Invent conference in Las Vegas.
We received top honours from AWS, being named Global Social Impact Partner of the Year and APJ (Asia Pacific and Japan) Industry Partner of the Year, a combination that perfectly reflects what we strive for as a company.
The Social Impact award recognises AWS partners committed to giving back to society and changing the world for the better. At DNX, we understand the power technology has to improve the lives of people around us, and we are dedicated to providing innovative solutions to organisations that make a difference. One of these organisations asked DNX Solutions for assistance during the development of a smart MedTech application when it began having difficulties with the storage, encryption and transmission of sound processor firmware.
DNX provided all the necessary back-end code and infrastructure, allowing remote firmware updates for the device to run through the AWS cloud, built on an automated, compliant pipeline. This not only reduced the update time from several days to just 5 minutes, but also spared recipients from having to travel to a physical clinic, meaning they could stay safe during the COVID-19 lockdowns in Australia. In addition, the modernisation brought improved testing capabilities, reducing deployment time from 5 hours to 30 minutes. Overall, as a result of DNX's work, the MedTech company's go-to-market time was dramatically reduced from 3 months to 5 days, and thousands of people received the support and care they needed without having to wait.
The Industry Partner of the Year award goes to AWS partners that demonstrate deep industry knowledge and successfully solve industry-specific pain points. In recent years, the DNX Solutions team has focussed on regulated sectors such as MedTech and FinTech, among others. These industries generally require companies to comply with strict regulations in order to keep operating, and these regulations differ not only between industries but also between regions.
DNX Solutions was founded in 2019 by Helder Klemp (CEO) and Allan Denot (CTO) with the mission of democratising access to the cloud. In January 2021 it opened its branch in Brazil, led by Emanuel Estumano as CEO (Brazil).
As a cloud-specialised company and AWS Advanced Consulting Partner, we are proud to deliver advanced, enterprise-grade solutions to startups, scale-ups and SMBs in Australia and around the world. In less than four years, our team has achieved remarkable results, including 2 AWS competencies and more than 100 AWS certifications across our team, 4 partner programs, 2 AWS service validations and more than 100 customer launches. Being named APJ Partner of the Year and Global Social Impact Partner of the Year are two further achievements that demonstrate the expertise and passion that drive us every day.
DNX Solutions is made up of a skilled and experienced team of cloud and data engineering consultants with a range of high-quality solutions available. Our goal is to keep injecting value into our clients' organisations, helping them take advantage of everything the AWS cloud has to offer.
DNX Brasil has the best solutions and the experience you need to drive your business forward. Contact us to get a plan for your cloud journey.
Learn more about the benefits of Managed Services for your company

Technology is improving faster and faster and, with that, countless changes take place in our daily lives. There is also a wide variety of products and services keeping pace with a growing demand for Information Technology professionals.
In this context, when we look at the current landscape of cloud services, the situation is not much different. Because it is still a relatively new field, skilled professionals are rare in the market and, when found, command high rates.
With this in mind, and to address this difficulty, DNX created Managed Services, which makes it possible to find and hire highly qualified AWS cloud professionals.
Understand how the Managed Services offered by DNX Brasil works
First, it is worth knowing that DNX Brasil is a cloud-native company, that is, one focused on delivering highly specialised solutions through a DevOps culture. It also follows the principles and values of the Well-Architected Framework and is an AWS Advanced Partner.
DNX Brasil offers Managed Services through packages of hours. The most commonly contracted packages are 40, 80 or 120 hours per month.
The minimum contract term is 6 months, and the client contracts the package according to their expected needs. Adjustments can also be made later, and if a situation exceeds the previously contracted package of hours, we negotiate the difference based on the extra hours used.
A tailor-made service: pay for what you use!
In short, from an administrative and financial point of view, Managed Services is a service delivered by highly qualified professionals in which you pay only for what you consume.
In this way, the contracting company avoids the costs associated with labour legislation, such as the 13th salary, holidays, severance pay in the event of dismissal without cause, the FGTS fine and so on, as well as common IT team problems such as turnover and training.
The services delivered within Managed Services include many types of work, which can be started from a specific request or proactively, always aligned with the improvements identified by the DNX team itself.
Advantages of choosing Managed Services
DNX Brasil understands that interactions are becoming faster and faster, especially where the market is concerned. It is also common for companies and startups not to have the time or the expertise needed to deal with the infrastructure and the more than 165 services and products offered by AWS.
For this reason, our technicians act proactively to propose and deliver improvements to the environment. Some of the activities carried out by our team include:
- Operational checklists
- Log and alert analysis (environment health checks)
- Monthly reports on incidents, performance, security and cost
- Appropriate, continuous improvements to the environment (from a technical point of view)
- Billing control and analysis
Our team also identifies opportunities to improve the AWS environment, such as PoCs (Proofs of Concept), WAFR (Well-Architected Framework Review), Modernisation and others. Some of this work can generate credits* for the client, which can be used in several ways, including on the invoice.
On-demand work is also an option!
Here we are talking about another very common service provided through Managed Services: work carried out on demand, which can include specific projects, the execution of a particular task in the AWS environment requested by the client, and much more.
Below is a list with examples of other on-demand activities:
- Incident management (support)
- Differentiated response according to severity
- Reports requested by the client
- Backlog management (improvements and changes):
- Monitoring evolution with Grafana/Prometheus and custom dashboards
- Automation in general
- Projects and consulting in general
- DevOps backlog
Get to know the Managed Services communication channels
As communication channels for Managed Services, we use several tools to open, track and align requests, including:
- Wrike – With this tool it is possible to open and track tickets, as well as view the hours used.
- Slack – Widely used to align the activities requested by clients and carried out by DNX Brasil.
- Email – Used to send documentation and information, especially when it comes to administrative and bureaucratic details.
By now you have probably realised that, if your company is facing any kind of difficulty with cloud and AWS environments, DNX Brasil is the right choice to help you.
DNX Brasil also has highly qualified professionals, so if you would like support for your DevOps team, get in touch with us and ask about the package of hours offered in Managed Services.
If, in addition to AWS specialists, your company needs and is interested in a NOC with 24/7 monitoring of your environment, Grupo Vibe Tecnologia provides this service through Master, one of the group's companies.
*AWS also has several incentive programs that offer credits for the use of its products. These can apply to getting started with Migration, environment improvements such as the WAFR, and even a special program for startups called AWS Activate.
Written by: Caio Iketani
DNX Brasil has the best solutions and the experience you need to drive your business forward. Contact us to get a plan for your cloud journey.
DNX Brasil's One Foundation: improve your company's performance

In an increasingly connected and interlinked world, it is essential that companies demonstrate security, reliability and stability in their operations to customers and users.
With the advances and services offered by cloud technology, many companies (mid-sized and large) and startups are migrating from their local on-premise environment to a cloud environment, either fully or in a hybrid model.
In this context of major technological advances, Amazon Web Services (AWS) is one of the leading and most comprehensive cloud platforms in the world.
And DNX Brasil, as an AWS Advanced Partner, offers a range of solutions and benefits that combine quality and professional excellence in cloud technology.
So let's take a look at One Foundation and how it can help your company scale even further while applying the latest in cloud technology.
DNX Brasil is an AWS Advanced Partner
DNX Brasil is a cloud-native company and is certified as an AWS Advanced Partner. We work with a technical team of highly skilled engineers and architects who are ready to bring efficient cloud solutions that improve how your company or startup is organised.
The DNX One Foundation
To remediate environments that are not optimised for AWS resources and solutions, DNX created a product called One Foundation. With it, we can improve the client's environment by applying Well-Architected best practices.
This means we can help organisations create environments built on best practices based on the six pillars of the Well-Architected Framework.
With DNX One Foundation, we help you understand the pros and cons of the solutions, products and services offered by AWS. When building systems on cloud platforms, this makes the decisions you need to take for your company or startup much easier.
By using the Well-Architected Framework, architecture best practices are within your reach to design and operate reliable, secure, efficient and cost-effective systems in the cloud.
How does One Foundation work?
As an AWS partner in the APN (Amazon Partner Network) program, DNX Brasil has certified architects. They carefully analyse the client's environment to identify problems and possible solutions, in order to produce a remediation plan and a roadmap with short-, medium- and long-term solutions.
When you contract DNX Brasil's services, we first assess the current state of the environment in question. At this stage, we consider the risks the current environment may contain and work to turn it into an environment that follows Well-Architected best practices.
The assessment carried out by DNX Brasil is valuable for finalising the project and is essential for the client to receive credits.
Applying the six Well-Architected pillars
During the project, our technicians work with the six pillars defined by AWS in mind: security, reliability, performance efficiency, operational excellence, cost optimisation and sustainability, so that the project complies with what is considered a Well-Architected environment.
At the end of the project, we issue an environment report describing the goals achieved compared with the initial environment and pointing out what has been improved.
Based on the report sent to AWS, the environment then goes through a review to check whether at least 45% of the risks identified in the initial assessment phase have been resolved.
If so, AWS provides a voucher worth US$5,000 in credits, which can be used in different ways on the client's account.
Don't waste any more time: if you have an AWS account and are interested in an environment that complies with the six Well-Architected pillars, contact DNX Brasil to learn more about One Foundation and how we can help you!
Written by: Caio Iketani
DNX Brasil has the best solutions and the experience you need to drive your business forward. Contact us to get a plan for your cloud journey.
Learn what the AWS Well-Architected practices really are

Cloud computing has been revolutionising the world for some time now. With the solutions it has brought, many areas of everyday life are being transformed.
When it comes to cloud services, there are countless possible uses, which can vary according to the interests of each company and/or startup.
In this space, it is common to come across terms such as PaaS (Platform as a Service), SaaS (Software as a Service) and IaaS (Infrastructure as a Service), among others: concepts that are well understood, especially by those who work in the field.
So, have you heard of AWS (Amazon Web Services) and the products and services it offers, such as the Well-Architected Framework? If not, keep reading and let's fix that right now!
One of the largest and best cloud services on the planet
AWS is known worldwide as the largest cloud computing provider in the world, offering more than 165 products and services.
The services offered by AWS include storage, databases, compute, servers, machine learning and more. On the infrastructure (IaaS) side, Amazon S3, AWS EC2 and Lambda stand out.
On the platform (PaaS) side, there are also Elastic Beanstalk and DynamoDB, as well as a wide range of software (SaaS) available for purchase on AWS itself.
This range of products and services grows every year, giving clients many solution options. However, the environment on offer is sometimes not used in the most appropriate way, which can affect security, performance, cost, infrastructure, customer service and more.
With this in mind, AWS decided to help its clients use the platform as efficiently as possible, through best practices made available under the premises of the Well-Architected Framework.
So what exactly is the Well-Architected Framework?
To help its clients make the best possible use of all the services on offer, AWS established six areas defined as the pillars of the Well-Architected Framework: Operational Excellence, Security, Reliability, Cost Optimisation, Performance Efficiency and Sustainability.
Below, we present the six pillars that form the basis of the Well-Architected Framework:
Operational Excellence
This important pillar focuses on running and monitoring systems and on continuously improving processes and procedures.
Key topics include automating changes, responding to events and defining standards to manage daily operations.
Security
This pillar focuses on protecting information and systems. Key topics include data confidentiality and integrity, managing user permissions and establishing controls to detect security events.
Reliability
This pillar concerns workloads performing their intended functions and recovering quickly from failures to meet demand.
Key topics include distributed system design, recovery planning and adapting to changing requirements.
Performance Efficiency
This pillar focuses on the structured and streamlined allocation of IT and computing resources. Key topics include selecting resource types and sizes optimised for workload requirements, monitoring performance and maintaining efficiency as business needs evolve.
Cost Optimisation
This pillar focuses on avoiding unnecessary costs. Key topics include understanding spending over time and controlling fund allocation, selecting the right type and quantity of resources, and scaling to meet business needs without overspending.
Sustainability
This pillar focuses on minimising the environmental impact of running cloud workloads.
Key topics include a shared responsibility model for sustainability, understanding your impact, and maximising utilisation to minimise the resources required and reduce downstream impacts.
AWS believes in these pillars and takes this architecture so seriously that it offers a credit of US$5,000 to clients who update their environment with these six areas in mind. However, to access this credit, an assessment and reassessment must be carried out with an AWS partner accredited for this procedure, and DNX Brasil is accredited by AWS to provide this service.
And to make it easier for companies to achieve this optimised environment, as well as the credit provided by AWS, DNX created a product called DNX One Foundation, which you can learn more about in our upcoming posts!
Did you enjoy this content? Follow our publications to stay up to date with everything happening in the cloud environment. If you have any questions, contact DNX Brasil: we are here to help!
Written by: Caio Iketani
DNX Brasil has the best solutions and the experience you need to drive your business forward. Contact us to get a plan for your cloud journey.
AWS, Azure, or GCP: Which cloud provider is right for you?

The Big Three
In modern day cloud computing, three major providers hold the top spots: Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS).
While all three platforms look similar on the surface, with features such as self-service, autoscaling, and high-level security and compliance, the difference is in the details. Each of these providers varies in its computing capabilities, storage technologies, pricing structures and more.
When migrating to the cloud, the key to success is choosing a provider that matches your unique business goals. In this article, we outline the major differences and provide guidance on how to choose the right cloud provider for you.
Computing Power
GCP is less functionally rich than Azure and AWS, though it offers unique advantages including managing and deploying cloud applications, payable only when code is deployed.
Azure uses a network of virtual machines to offer a full variety of computing services, including app deployment, extensions and more.
AWS computing, delivered through Amazon EC2, is highly flexible, powerful and less costly than other services. EC2 provides auto scaling to match your usage, so you don’t pay more than necessary. AWS offers a sophisticated range of computing features including speed, optimal security, managing security groups, and much more.
Storage Technologies
Whilst GCP’s storage options are reliable, they remain fairly basic, with features including cloud storage and persistent disk storage.
Azure offers many storage cloud types to target various organisational needs, including Data Lake Storage, Queue Storage and Blob Storage. Additionally, File Storage is optimised for most business requirements.
AWS offers a wide range of storage solutions that allow for a high level of versatility. Simple Storage Service is industry standard, while Storage Gateway offers a more comprehensive storage approach.
Network & Location
GCP does not match the reach of Azure or AWS, currently serving 21 regions with aims to grow its number of data centres around the world.
Azure is available in 54 regions worldwide, keeping traffic within the Azure network for a secure networking solution.
AWS runs on a comprehensive global framework across 22 regions, including 14 data centres and 114 edge locations. This ensures continuous service, reliable performance, speedy cloud deployment and lightning-fast response times.
Pricing Structure
GCP offers multiple pricing options, from free tier to long-term reservations. These prices are affected by many factors including network, storage and serverless pricing.
Azure charges on a per-second basis, allowing users to start and stop the service, paying only for what they use.
AWS provides a convenient pay-as-you-go model, allowing users to pay only for what they consume, without any termination fees.
Conclusion
AWS is the superior cloud provider in the market, reducing time to value for customers and increasing business agility. With significantly more services than the other providers, AWS offers a greater range of features to its users. For these reasons, among others, DNX Solutions works exclusively with AWS, helping our clients take full advantage of all the benefits it provides. Each of our solutions are designed with AWS in mind, allowing us to focus on getting the most out of the cloud for our clients, today and in the future.
How can DNX help you?
Contact us now to learn more about making the most of the many AWS benefits.
As an AWS partner, DNX offers the guidance and expertise for cloud migrations done right. We offer seamless migration to AWS, following best practice architectural solutions while offering modernisation as a part of the process. With a professional team by your side ensuring security, compliance and best practices, your business will get the most out of this powerful cloud provider.
Contact a DNX expert to book a free 15-minute consultation and explore your possibilities for Cloud Migration
The basics of Cloud Migration

What is Cloud Migration all about?
The concept of cloud migration is familiar to those who use cloud storage in their personal lives. Simply put, cloud migration is the process of moving information from an on-premise source to a cloud computing environment. You can think of it as moving all your important data and programs from your personal computer to a place where they are automatically backed-up and protected. If your computer were to experience a power failure, have hot coffee spilled over it, or be stolen, you would be able to access all of your data from another computer, and have the ability to update your security functions if a breach had occurred. With greater movement of employees and company expansions, storing data in the cloud facilitates business innovation and security, leading to efficiency and ease of governance, preparing you for the digital future.
On a larger scale, cloud migration for businesses includes the migration of data, applications, information, and other business elements. It may involve moving from a local data centre to the cloud, or from one cloud platform to another.
The key benefit is that, through cloud migration, your business can host applications and data in the most effective IT environment possible with flexible infrastructure and the ability to scale. This enhances the cost savings, performance and security of your business over the long term.
Cloud migration is a transformation that will lead the way forward in years to come.
What are the benefits of migrating to the cloud?
The cloud brings agility and flexibility to your business environment. As we move into the world of digital workspaces, cloud migration allows for enhanced innovation opportunities, alongside faster time to delivery.
Businesses will realise all kinds of benefits, including reduced operating costs, simplified IT, improved scalability, and upgraded performance. Meeting compliance for data privacy laws becomes easier, and automation and AI begin to improve the speed and efficiency of your operations. Cloud migration results in optimisation for nearly every part of your business.
What are the options for Cloud Migration?
There are six main methods used to migrate apps and databases to the cloud.
- Rehosting (“Lift-and-shift”). Through this method, the application is moved to the cloud without any changes made to optimise the application for the new environment. This allows for a fast migration, and businesses may choose to optimise later.
- Replatforming (“Lift-tinker-and-shift”). This involves making a few optimisations rather than strictly migrating a legacy database.
- Re-purchasing. This involves purchasing a new product, either by transferring your software licence to an online server or replacing it entirely using SaaS options.
- Re-architecting/Refactoring. This method involves developing the application using cloud-native features. Although initially more complex, this future-focussed method provides the most opportunity for optimisation.
- Retiring. Applications that are no longer required are retired, achieving cost savings and operational efficiencies.
- Retaining. This is a choice to leave certain applications as they are with the potential to revisit them in the future and decide whether they are worth migrating.
How much does it cost?
Migrating to the cloud requires a comprehensive strategy, taking into account multiple management, technology and resource challenges. This means the cost of migration can vary widely, particularly as goals and requirements differ between organisations. Funding options may be available to your business when migrating to AWS, so considering all your options carefully may factor such opportunities into your decision and have an impact on which methodology you choose to follow.
In recent years, technologies and cloud computing companies have been developed to create ease and efficiency in the migration process, such as cloud migration powerhouse DNX Solutions.
How does DNX help you with Cloud Migration?
DNX identifies your unique business needs to uncover the best pathway for you, making your migration journey simpler, faster, and more cost-effective. With a secure, speedy cloud migration process, DNX sets your business up for success from day one.
Using DNX for Cloud Migration means you migrate the right way — and unlock full value from AWS — through a unique, secure, and automated foundation.
DNX makes it easy to migrate to a Well-Architected, compliant AWS environment. As part of the process, DNX modernises your applications so you can leverage the benefits of cloud-native technologies. This means your business will enjoy more resilience, cost efficiency, scalability, security, and availability from the very beginning.
DNX has the solutions and experience you need. Contact us today for a blueprint of your journey towards data engineering.
QuickSight vs Tableau for Data Analytics: A Comprehensive Comparison
With so many tools available to improve business experiences, it can be difficult to know which will work best for your specific needs. Comparisons between the top competitors can save you significant resources before investing in tool purchases and training your team. Two well-known data analytics tools are Tableau and QuickSight, both of which offer a range of visualisations allowing you and your team to understand your data better. In a world where data is becoming more and more powerful, understanding the story your data tells is absolutely essential for future success.
Whilst all businesses are at different stages of their data modernisation journeys, those who invest in getting ahead now find themselves with a huge advantage over the competition. Data analytics has come a long way since manually manipulating data in Excel, and today a number of simplified platforms are available, meaning you don’t need a team full of data scientists in order to understand what’s going on around you. Tableau, founded in 2003, is now competing with QuickSight, rolled out in 2016. In this article we will comprehensively compare these two analytics tools, so you don’t have to.
Getting Started:
Unlike Tableau, which needs a desktop client to create data sources, QuickSight has a range of options for data connectivity. Anyone can start viewing insights on QuickSight regardless of their level of training, so it allows the whole team to understand what the data is saying. Tableau is not the easiest tool to navigate, with many business users only benefitting from it after undertaking training. If you have a diverse team with varying technical knowledge, QuickSight is the right tool for you.
Management:
Tableau has two options for servers, Tableau Online and On-Premises Tableau servers. On-prem servers require dashboards to be developed by analysts and pushed to the server. In addition, they require provision of servers and infrastructure which can be costly to maintain, upgrade and scale. The Tableau Online option has support for a limited number of data sources and is plagued with a history of performance issues. QuickSight, on the other hand, is a cloud-native SaaS application with auto-scaling abilities. Content is browser based, meaning different version usage by clients and servers is inconsequential. In addition, QuickSight’s release cycles allow customers to use new functionality as they emerge with no need to upgrade the BI platform.
Speed and Innovation:
The use of local machines and self-managed servers inhibits Tableau’s ability to perform at great speed and often requires technology upgrades. QuickSight however, produces interactive visualisations in milliseconds thanks to its in-memory optimised engine SPICE. In regards to innovation, despite Tableau’s quarterly release cycle, most users only upgrade annually due to the complexity and costs involved. In contrast, QuickSight users can take advantage of the constant stream of new features as soon as they are released.
Cost and Scalability:
The cost difference between the two tools is so extreme that it is barely worth comparing. Tableau has three pricing options, all of which are required to be paid in full regardless of monthly usage. Tableau’s plans range from $15 to $70 per month. QuickSight is priced on a per-user basis and ranges from $5 to $28 per month. If a user goes a month without logging in, they pay nothing. In the most common scenario, QuickSight is 85% cheaper than Tableau.
The inflexible pricing plans offered by Tableau mean deciding to scale is a difficult call to make. In addition, as the amount of users and data increases so too do the overhead costs of maintaining the BI infrastructure. QuickSight, like all AWS products, is easily scalable and doesn’t require server management. Risk is reduced when experimenting with scaling thanks to QuickSight’s usage-based pricing model.
Security:
Customers utilising Tableau have some difficult decisions to make when it comes to security. Due to the deployment of agents/gateway to connect data on-premises or in Private VPCs, security levels are compromised. QuickSight allows customers to link privately to VPCs and on-premises data, protecting themselves from exposure through the public internet. With automatic back-ups in S3 for 11 9s durability and HA/multi-AZ replication, your data is safe with QuickSight.
Memory:
Tableau’s in-memory data engine, Hyper, may be able to handle very large datasets, but it is no match for SPICE. SPICE, QuickSight’s engine, has a constantly increasing row limit, and QuickSight Q offers superior performance when it comes to integrating with Redshift and Athena to analyse large amounts of data in real time.
Sourcing and Preparing Data:
Although the frequency of data being stored on-premises is slowing, some companies are yet to undertake full data modernisation solutions and require access to on-prem locations. Tableau can handle this issue with access to data from sources such as HANA, Oracle, Hadoop/Hive and others. QuickSight, whilst primarily focussed on cloud based sources, also has the ability to connect to on-premises data through AWS Direct Connect. The growing list of databases available to QuickSight includes Teradata, SQL Server, MySQL, PostgreSQL and Oracle (via whitelisting). Tableau allows users to combine multiple data sources in order to prepare data for analysis through complex transformations and cleansing. QuickSight can utilise other AWS tools such as Glue and EMR to guarantee quality treatment of data. Beyond the two mentioned, there are multiple other ETL partners that can be accessed for data cleansing.
Dashboard Functionality and Visualisations:
Tableau has built-in support for Python and R scripting languages and offers a range of visualisation types as well as highly formatted reports and dashboards. QuickSight tends to be more popular in its visualisations, with over a dozen types of charts, plots, maps and tables available. The ease at which data points can be added to any analysis ensures clarity and allows comparisons to be made with the click of a button. Furthermore, machine learning enhances user experience by making suggestions based on the data being considered at the time.
Conclusion:
Whilst Tableau was an extremely innovative tool back when it was founded in 2003, it is no match for QuickSight. With the ability to connect to a full suite of software and platforms available within Amazon Web Services, QuickSight is so much more than a stand-alone tool. Businesses looking for a fast, scalable and easily understood data analytics tool cannot go wrong with QuickSight.
With the importance of data growing exponentially, it is no longer realistic to rely on the extensive knowledge of data scientists and analysts for everyday visualisations. QuickSight allows employees throughout the business to gain quick understanding of data points without having to wait for help from analysts. QuickSight is continually releasing new features to make the tool even more user friendly as time goes on.
Data Modernisation solutions offered by DNX frequently utilise QuickSight in order to provide clients with the most cost-effective, scalable and easy to use systems, increasing the power they have over their data.
DNX has the solutions and experience you need. Contact us today for a blueprint of your journey towards data security.
Harnessing the Power of Data in the Financial Sector
Digitisation has enabled technology to transform the financial industry. Advanced analytics, machine learning (ML), artificial intelligence (AI), big data, and the cloud have been embraced by financial companies globally, and the use of this technology brings an abundance of data.
When it comes to FinTech, pace is paramount. The more accurate trends and predictions are, the more positive the outcomes will be. Data-driven decision making is key.
How Data Can Benefit the Financial Industry
Today, FinTech businesses must be data-driven to thrive, which means treating data as an organisational asset. The collection and interpretation of data enable businesses to gain quick and accurate insights, resulting in innovation and informed decision-making.
It is recommended to set up business data in a way that provides easy access to those who need it.
Finance and Big Data
The compilation of globally collected data, known as Big Data, has had fascinating effects on the finance industry. As billions of dollars move each day, Big Data in finance has led to technological innovations, transforming both individual businesses and the financial sector as a whole.
Analysts monitor this data each day as they establish predictions and uncover patterns. In addition, Big Data is continuously transforming the finance industry as we know it by powering advanced technology such as ML, AI, and advanced analytics.
The Influence of ML on the Market
Powered by Big Data, ML is changing many aspects of the financial industry, such as trading and investment, as it accounts for political and social trends that may affect the stock market, monitored in real time.
ML powers fraud detection and prevention technologies, reducing security risks and threats. Additionally, it provides advances in risk analysis, as investments and loans now rely on this technology.
Despite all the gains made so far, the technologies powered by advanced machine learning continue to evolve.
Security and Data Governance
The cost of data breaches is increasing. In 2021, the financial sector had the second-highest costs due to breaches, behind only healthcare. The technology sector was the fourth most affected, meaning the risk of breaches for FinTech organisations is high.
Data governance is necessary to mitigate risks associated with the industry, which means many companies are required to undergo data modernisation. Businesses must ensure all data is secure and protected and suspicious activity is detected and flagged, in line with strict government standards.
Taking the first steps
The journey to data modernisation offers benefits that far exceed the initial cost of investment, though the process to accreditation can be daunting. The journey begins with building strategies from clear objectives, then mapping the plan, migrating data, implementing cloud tools, and beyond.
To simplify the initial steps towards compliant data modernisation, DNX Solutions has prepared a guide to help FinTech businesses modernise their data. Click here to view the 8 steps you need to take to prepare for your Data Modernisation journey.
DNX has the solutions and experience you need. Contact us today for a blueprint of your journey towards data security.
Automating .NET Framework deployments with AWS CodePipeline to Elastic Beanstalk
When it comes to Windows CI/CD pipelines, people immediately start thinking about tools like Jenkins, Octopus, or Azure DevOps, and don’t get me wrong, because those are still great tools for dealing with CI/CD complexities. However, today I will be explaining how to implement a simpler .NET Framework (Windows) CI/CD pipeline that will deploy two applications (API and Worker) to two different environments using GitHub, CodePipeline, CodeBuild (Cross-region), and Elastic Beanstalk.

Requirements
- AWS Account
- GitHub repository with a .NET Framework blueprint application
- Existing AWS Elastic Beanstalk Application and Environment
CodePipeline setup
Let’s create and configure a new CodePipeline, associating an existing GitHub repository via CodeStar connections, and linking it with an Elastic Beanstalk environment.

First, let’s jump into AWS Console and go to CodePipeline.

Once on the CodePipeline screen, let’s click on the Create pipeline button.

This will start the multi-step screen to set up our CodePipeline.
Step 1: Choose pipeline settings
Please enter all required information as needed and click Next.

Step 2: Add source stage
Now let’s associate our GitHub repository using CodeStar connections.
For Source Provider we will use the new GitHub version 2 (app-based) action.
If you already have GitHub connected to your AWS account via a CodeStar connection, you only need to select your GitHub repository name and branch. Otherwise, let’s click on the Connect to GitHub button.

Once at the Create a connection screen, let’s give it a name and click on Connect to GitHub button.

AWS will ask you to give permission, so it can connect with your GitHub repository.

Once you finish connecting AWS with GitHub, select the repository you want to set up a CI/CD by searching for its name.
The branch we’ll use to trigger our pipeline will be main, as is common practice, but you can choose a different one if you prefer.
For the Change detection options, we’ll select Start the pipeline on source code change, so whenever we merge code or push directly to the main branch, it will trigger the pipeline.
Click Next.
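As an aside, if you prefer to pre-create the GitHub connection outside the console, it can also be created programmatically. The boto3 sketch below is only an illustration (the connection name is hypothetical), and a connection created this way starts in a pending state: it still has to be authorised once against your GitHub account, for example via the console, before the pipeline can use it.
import boto3

# Create a CodeStar connection to GitHub (the name is hypothetical).
codestar = boto3.client("codestar-connections", region_name="us-east-1")
connection = codestar.create_connection(
    ProviderType="GitHub",
    ConnectionName="github-dotnet-framework-app",
)

# The returned ARN is what the pipeline's source action references.
print("ConnectionArn:", connection["ConnectionArn"])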

Step 3: Add build stage
This step is where we will generate the source bundle artifacts used to deploy both our API and Worker (a Windows Service application) to Elastic Beanstalk.
We will also need to use a Cross-region action here due to CodeBuild limitations regarding Windows builds, as stated by AWS at this link.
Windows builds are available in US East (N. Virginia), US West (Oregon), EU (Ireland) and US East (Ohio). For a full list of AWS Regions where AWS CodeBuild is available, please visit our region table.
⚠️ Note: Windows builds usually take around 10 to 15 minutes to complete due to the size of the Microsoft docker image (~8GB).

At this point, if you try to change the Region using the select option, the Create project button will disappear, so for now let’s just click on the Create project button and change the region on the following screen. And please make sure to select one of the regions where Windows builds are available.

Once you’ve selected a region where Windows builds are available, you can start entering all the required information for your build.

For the Environment section, we need to select the Custom image option, choose Windows 2019 as our Environment type, then select Other registry and add the Microsoft Docker image registry URL (mcr.microsoft.com/dotnet/framework/sdk:4.8) to the External registry URL.

Buildspec config can be left as default.

If you don't know what a buildspec file is, I highly recommend having a look at the Build specification reference for CodeBuild in the AWS docs. Here is a brief description extracted from the AWS documentation:
A buildspec is a collection of build commands and related settings, in YAML format, that CodeBuild uses to run a build. You can include a buildspec as part of the source code or you can define a buildspec when you create a build project. For information about how a build spec works, see How CodeBuild works.
Let’s have a look at our Buildspec file.
version: 0.2
env:
  variables:
    SOLUTION: DotNetFrameworkApp.sln
    DOTNET_FRAMEWORK: 4.8
    PACKAGE_DIRECTORY: .\packages
phases:
  install:
    commands:
      - echo "Use this phase to install any dependency that your application may need before building it."
  pre_build:
    commands:
      - nuget restore $env:SOLUTION -PackagesDirectory $env:PACKAGE_DIRECTORY
  build:
    commands:
      - msbuild .\DotNetFrameworkApp.API\DotNetFrameworkApp.API.csproj /t:package /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release
      - msbuild .\DotNetFrameworkApp.Worker.WebApp\DotNetFrameworkApp.Worker.WebApp.csproj /t:package /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release
      - msbuild .\DotNetFrameworkApp.Worker\DotNetFrameworkApp.Worker.csproj /t:build /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release
  post_build:
    commands:
      - echo "Preparing API Source bundle artifacts"
      - $publishApiFolder = ".\publish\workspace\api"; mkdir $publishApiFolder
      - cp .\DotNetFrameworkApp.API\obj\Release\Package\DotNetFrameworkApp.API.zip $publishApiFolder\DotNetFrameworkApp.API.zip
      - cp .\SetupScripts\InstallDependencies.ps1 $publishApiFolder\InstallDependencies.ps1
      - cp .\DotNetFrameworkApp.API\aws-windows-deployment-manifest.json $publishApiFolder\aws-windows-deployment-manifest.json
      - cp -r .\DotNetFrameworkApp.API\.ebextensions $publishApiFolder
      - echo "Preparing Worker Source bundle artifacts"
      - $publishWorkerFolder = ".\publish\workspace\worker"; mkdir $publishWorkerFolder
      - cp .\DotNetFrameworkApp.Worker.WebApp\obj\Release\Package\DotNetFrameworkApp.Worker.WebApp.zip $publishWorkerFolder\DotNetFrameworkApp.Worker.WebApp.zip
      - cp -r .\DotNetFrameworkApp.Worker\bin\Release\ $publishWorkerFolder\DotNetFrameworkApp.Worker
      - cp .\SetupScripts\InstallWorker.ps1 $publishWorkerFolder\InstallWorker.ps1
      - cp .\DotNetFrameworkApp.Worker.WebApp\aws-windows-deployment-manifest.json $publishWorkerFolder\aws-windows-deployment-manifest.json
      - cp -r .\DotNetFrameworkApp.Worker.WebApp\.ebextensions $publishWorkerFolder
artifacts:
  files:
    - '**/*'
  secondary-artifacts:
    api:
      name: api
      base-directory: $publishApiFolder
      files:
        - '**/*'
    worker:
      name: worker
      base-directory: $publishWorkerFolder
      files:
        - '**/*'
As you can see, we have a few different phases in our build spec file.
- install: Can be used, as its name suggests, to install any build dependencies that are required by your application and not listed as a NuGet package.
- pre_build: That’s a good place to restore all NuGet packages.
- build: Here's where we build our applications. In this example, we are building and packaging all 3 of our applications, for example:
msbuild .\DotNetFrameworkApp.API\DotNetFrameworkApp.API.csproj /t:package /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release
- msbuild: The Microsoft Build Engine is a platform for building applications.
- DotNetFrameworkApp.API.csproj: The web application we are targeting in our build.
- /t:package: The MSBuild target named Package, defined as part of the Web Packaging infrastructure.
- /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK: A target framework is the particular version of the .NET Framework that your project is built to run on.
- /p:Configuration=Release: The configuration that you are building, generally Debug or Release, but configurable at the solution and project levels.
- For .NET Core/5+ we would use the .NET command-line interface (CLI) instead, which is a cross-platform toolchain for developing, building, running, and publishing .NET applications.
- Last but not least, we have our Worker (Windows Service application) build. One difference here is the MSBuild target: we use /t:build instead of /t:package, when compared with our DotNetFrameworkApp.API.csproj web API project. Another difference is the folder where the binaries are published.
- post_build: After all applications have been built, we need to prepare the source bundle artifacts for Elastic Beanstalk. At the end of this phase, CodeBuild will have prepared two source bundles, which are referenced by the artifacts section.
- In the first part of this phase, we create two workspace folders, one for our API and another for our Worker, and then copy a few files to the API workspace:
- DotNetFrameworkApp.API.zip: The Web API package generated by MSBuild.
- InstallDependencies.ps1: An optional PowerShell script that can be used to install, uninstall or prepare anything you need on your host instance before your application starts running.
- aws-windows-deployment-manifest.json: A deployment manifest file is simply a set of instructions that tells AWS Elastic Beanstalk how a deployment bundle should be installed. The deployment manifest file must be named aws-windows-deployment-manifest.json.
- Our Worker's source bundle is prepared in the second part of this phase and contains 2 applications:
- One is an almost empty .NET web application, required by Beanstalk, that we use as a health check.
- The second one is our actual Worker, in the form of a Windows Service application.
- InstallWorker.ps1: A sample PowerShell script used to execute our Worker installer.
- aws-windows-deployment-manifest.json: Very similar to the previous one; the only difference is that this one contains a specific script with instructions to install our service on the host machine.

In the artifacts section, CodeBuild will output two source bundles (API and Worker), which will be used as an input for the deploy stage.
Once you finish configuring your CodeBuild project, click on the Continue to CodePipeline button.

Now, back in CodePipeline, select the region where you created the CodeBuild project, then select it from the Project name dropdown. Feel free to add environment variables if you need them.
Click Next.

Step 4: Add deploy stage
We are now moving to the last CodePipeline step, the deploy stage. This is where we decide where our code is going to be deployed, in other words, which AWS service will run it.
⚠️ Note: You will notice that we don’t have a way to configure two different deployments, so at this time you can either skip the deploy stage or set up only one application, then fix it later on. I will choose the latter option for now.
Select AWS Elastic Beanstalk for our Deploy provider.
Choose the Region where your Elastic Beanstalk environment is deployed.
Then, search and select an Application name under that region or create an application in the AWS Elastic Beanstalk console and then return to this task.
⚠️ Note: If you don’t see your application name, double-check that you are in the correct region in the top right of your AWS Console. If you aren’t you will need to select that region and perhaps start this process again from the beginning.
Search and select the Environment name from your application.
Click Next.

Review
Now it’s time to review the entire settings of our pipeline to confirm before creating.

Once you are done with the review step, click on Create pipeline.
Pipeline Initiated
After the pipeline is created, the process will automatically pull your code from the GitHub repository and then deploy it directly to Elastic Beanstalk.

Let’s customize our Pipeline
First, we need to change our Build step to output two artifacts as stated in the build spec file.
In the new Pipeline, let’s click on the Edit button.

Click on the Edit stage button located in the “Edit: Build” section.

Let’s edit our build.

Let’s specify our Output artifacts according to our build spec file. Then, click on Done.

Now, let’s click on the Edit stage button located in the “Edit: Deploy” section.

Here we will edit our current Elastic Beanstalk, then we will add a second one.
Let’s edit our current Elastic Beanstalk deployment first.

Change the action name to something more unique for your application, then select “api” in the Input artifacts dropdown and click on Done.

Let’s add a new action.

Add an Action name, like DeployWorker for instance.
Select AWS Elastic Beanstalk in the Action provider dropdown.
Choose the Region where your Elastic Beanstalk environment is located.
Select “worker” in the Input artifacts dropdown.
Then, select your Application and Environment name, and click on Done.

Save your changes.

Now we have both of our applications covered by our pipeline.

Confirm Deployment
If we go to AWS Console and access the new Elastic Beanstalk app, we should see the service starting to deploy and then transition to deployed successfully.
⚠️ Note: If, as in this demo application repository, you are creating an AWS WAF, your deployment will fail if the CodePipeline role doesn't have the right permissions to create it.

Let’s fix it!
In the AWS Console, navigate to IAM > Roles, then find the role used by your CodePipeline and edit it, granting the set of permissions CodePipeline needs to be able to create a WAF.

Go back to your CodePipeline and click on Retry.

That will trigger the deploy step again, and if you go to your Elastic Beanstalk app, you will see the service starting to deploy.
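Alternatively, if you prefer to retry the failed stage from a script instead of the console, a minimal boto3 sketch could look like the following (the pipeline and stage names are placeholders):
```python
import boto3

codepipeline = boto3.client("codepipeline")

# Placeholder names: replace with your pipeline and the stage that failed (e.g. "Deploy").
pipeline_name = "dotnet-framework-app-pipeline"
stage_name = "Deploy"

# Look up the latest execution id of the failed stage, then retry only its failed actions.
state = codepipeline.get_pipeline_state(name=pipeline_name)
stage = next(s for s in state["stageStates"] if s["stageName"] == stage_name)
execution_id = stage["latestExecution"]["pipelineExecutionId"]

codepipeline.retry_stage_execution(
    pipelineName=pipeline_name,
    stageName=stage_name,
    pipelineExecutionId=execution_id,
    retryMode="FAILED_ACTIONS",
)
```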

After a few seconds/minutes, the service will transition to deployed successfully.

If we access the app URL, we should see our health check working.

See deployment in action
In this next part, we'll make a change to our GitHub repository and watch the change be deployed automatically.

Demo application
You can use your repository, but for this part, we’ll be utilizing this one.
Here’s the current project structure.
(root directory name)
├── buildspec.yml
├── DotNetFrameworkApp.sln
├── DotNetFrameworkApp.API
│   ├── .ebextensions
│   │   └── waf.config
│   ├── App_Start
│   │   ├── SwaggerConfig.cs
│   │   └── WebApiConfig.cs
│   ├── Controllers
│   │   ├── HealthController.cs
│   │   └── ValuesController.cs
│   ├── aws-windows-deployment-manifest.json
│   ├── DotNetFrameworkApp.API.csproj
│   ├── Global.asax
│   └── Web.config
├── DotNetFrameworkApp.Worker
│   ├── App.config
│   ├── DotNetFrameworkApp.Worker.csproj
│   ├── Program.cs
│   ├── ProjectInstaller.cs
│   └── Worker.cs
├── DotNetFrameworkApp.Worker.WebApp
│   ├── .ebextensions
│   │   └── waf.config
│   ├── App_Start
│   │   └── WebApiConfig.cs
│   ├── Controllers
│   │   ├── HealthController.cs
│   │   └── StatusController.cs
│   ├── aws-windows-deployment-manifest.json
│   ├── DotNetFrameworkApp.Worker.WebApp.csproj
│   ├── Global.asax
│   └── Web.config
DotNetFrameworkApp repository contains 3 applications (API, Worker, and a WebApp for the Worker) created with .NET Framework 4.8.
We are also adding an extra security layer using a Web Application Firewall (WAF) to protect our Application Load Balancer, created by Elastic Beanstalk, against attacks from known unwanted hosts.
Code change
Make any change you need in your repository and either commit and push directly to main or create a new pull request and then merge that request to the main branch.
Once pushed or merged, you can watch CodePipeline automatically pull and deploy the new code.

What’s next?
The next step would be to introduce Terraform so that everything we have built here is defined as code, to add an automatic way to pass additional environment variables, and to introduce logging.
Final Thoughts
AWS CodePipeline, when combined with other services, can be a very powerful tool for modernising and automating your Windows workloads. This is just a first step; you should definitely start planning for automated tests, environment variables, and a better way to gain observability over your application.
DNX has the solutions and experience you need. Contact us today for a blueprint of your journey towards data engineering.
Data Archiving utilising Managed Workflows for Apache Airflow
We assisted a Fintech client in minimizing its storage costs by archiving data from RDS (MySQL) to S3 using an automated batch process, in which all data from a specific time range is exported to S3. Once the data is stored on S3, the historical data can be analyzed using AWS Athena and Databricks. The solution should also include a delete strategy to remove all data older than two months.
The database size has grown exponentially with the number of logs stored in it, so the archiving procedure should have minimal impact on the production workload and be easy to orchestrate. For this specific data archiving case, we are handling tables with more than 6 TB of data, which should be archived in the most efficient manner; part of this data no longer needs to be stored in the database.
In this scenario, Managed Workflows for Apache Airflow (MWAA), a managed orchestration service for Apache Airflow, helps us to manage all those tasks. Amazon MWAA fully supports integration with AWS services and popular third-party tools such as Apache Hadoop, Presto, Hive, and Spark to perform data processing tasks.
In this example, we will demonstrate how to build a simple batch processing that will be executed daily, getting the data from RDS and exporting it to S3 as shown below.
Export/Delete Strategy:
- The batch routine should be executed daily
- All data from the previous day should be exported as CSV
- All data older than 2 months should be deleted
Solution
- RDS – Production database
- MWAA – (to orchestrate the batches)
- S3 bucket – (to store the partitioned CSV files)

As shown in the architecture above, MWAA is responsible for calling the SQL scripts directly on RDS; in Airflow, we use the MySQL operator to execute SQL scripts against RDS.
To encapsulate those tasks we use an Airflow DAG.
Airflow works with DAGs (Directed Acyclic Graphs). A DAG is a collection of all the tasks you want to run, defined in a Python script that represents the DAG's structure (tasks and their dependencies) as code.
In our scenario, the DAG will cover the following tasks:
- Task 1 – Build procedure to export data
- Task 2 – Execute procedure for export
- Task 3 – Build procedure to delete data
- Task 4 – Execute delete procedure
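Below is a minimal sketch of how these four tasks could be wired together, assuming Airflow 2.x with the MySQL provider; the export SQL file and the SpExportDataS3 procedure mirror the snippets later in this post, while the delete script and procedure names are hypothetical:
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.mysql.operators.mysql import MySqlOperator

with DAG(
    dag_id="rds_archive_to_s3",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",  # the batch routine runs daily
    catchup=False,
) as dag:
    # Task 1 - deploy (create/replace) the export stored procedure on RDS
    build_proc_export = MySqlOperator(
        task_id="build_proc_export_to_s3",
        mysql_conn_id="mysql_default",
        sql="/sql_dir/usp_ExportDataS3.sql",
    )

    # Task 2 - call the export procedure (previous day's data -> CSV on S3)
    run_export = MySqlOperator(
        task_id="export_table_1_to_s3",
        mysql_conn_id="mysql_default",
        sql='call MyDB.SpExportDataS3("table_1")',
        autocommit=True,
    )

    # Task 3 - deploy the delete stored procedure (hypothetical script name)
    build_proc_delete = MySqlOperator(
        task_id="build_proc_delete_old_data",
        mysql_conn_id="mysql_default",
        sql="/sql_dir/usp_DeleteOldData.sql",
    )

    # Task 4 - call the delete procedure to remove data older than two months (hypothetical name)
    run_delete = MySqlOperator(
        task_id="run_delete_old_data",
        mysql_conn_id="mysql_default",
        sql="call MyDB.SpDeleteOldData()",
        autocommit=True,
    )

    build_proc_export >> run_export >> build_proc_delete >> run_delete
```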
Airflow DAG graph

Creating a function to call a stored procedure on RDS
# Note: the MySqlOperator import depends on your Airflow version, e.g.
#   Airflow 2.x: from airflow.providers.mysql.operators.mysql import MySqlOperator
#   Airflow 1.10: from airflow.operators.mysql_operator import MySqlOperator
EXPORT_S3_TABLES = {
    "id_1": {"name": "table_1"},
    "id_2": {"name": "table_2"},
    "id_3": {"name": "table_3"},
    "id_4": {"name": "table_4"},
}

def export_data_to_s3(dag, conn, mysql_hook, tables):
    """Create one MySqlOperator per table, each calling the export stored procedure on RDS."""
    tasks = []
    engine = mysql_hook.get_sqlalchemy_engine()
    with engine.connect() as connection:
        for schema, features in tables.items():
            run_queries = []
            t = features.get("name")  # extract table name
            statement = f'call MyDB.SpExportDataS3("{t}")'
            sql_export = statement.strip()
            run_queries.append(sql_export)
            task = MySqlOperator(
                sql=run_queries,
                mysql_conn_id='mysql_default',
                task_id=f"export_{t}_to_s3",
                autocommit=True,
                provide_context=True,
                dag=dag,
            )
            tasks.append(task)
    return tasks
To deploy the stored procedure, we can use the MySQL operator, which is responsible for executing the “.sql” files, as shown below:
build_proc_export_s3 = MySqlOperator(
    dag=dag,
    mysql_conn_id='mysql_default',
    task_id='build_proc_export_to_s3',
    sql='/sql_dir/usp_ExportDataS3.sql',
    on_failure_callback=slack_failed_task,
)
Once the procedure has been deployed, we can execute it using MySqlHook, which runs the stored procedure through the export_data_to_s3 function.
t_export = export_data_to_s3(
    dag=dag,
    conn="mysql_default",
    mysql_hook=prod_mysql_hook,
    tables=EXPORT_S3_TABLES,
)
MWAA will orchestrate each SQL script called on RDS: two stored procedures are responsible for exporting and deleting the data, respectively. With this approach, all the intensive work (reading and processing data) is handled by the database, and Airflow acts as an orchestrator for each event.
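For completeness, the delete side could mirror the export helper; here is a sketch, in which the SpDeleteOldData procedure name and its retention parameter are hypothetical:
```python
# Sketch only: mirrors export_data_to_s3 above; procedure name and parameters are assumptions.
def delete_old_data(dag, tables, retention_months=2):
    tasks = []
    for schema, features in tables.items():
        t = features.get("name")  # extract table name
        task = MySqlOperator(
            sql=f'call MyDB.SpDeleteOldData("{t}", {retention_months})',
            mysql_conn_id="mysql_default",
            task_id=f"delete_old_{t}",
            autocommit=True,
            dag=dag,
        )
        tasks.append(task)
    return tasks

t_delete = delete_old_data(dag=dag, tables=EXPORT_S3_TABLES)
```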
In addition, Aurora MySQL has a built-in feature (INTO OUTFILE S3) that can export data directly to S3. That way, we do not need another service to integrate RDS with S3; the data can be persisted directly to the bucket once the procedure is called.
E.g: INTO OUTFILE S3
SELECT id , col1, col2, col3
FROM table_name
INTO OUTFILE S3 's3-region-name://my-bucket-name/mydatabase/year/month/day/output_file.csv'
FORMAT CSV HEADER
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
OVERWRITE ON;
With this function, we don't need to handle the data with Python scripts in Airflow; the data is processed entirely by the database, and no extra data transformation is needed to output it as CSV.
Conclusion
Airflow is a powerful tool that allows us to deploy smart workflows using simple python code. With this example, we demonstrated how to build a batch process to move the data from a relational database to S3 in simple steps.
There is an almost unlimited number of features and integrations that can be explored with MWAA; if you need flexibility and easy integration with different services (even non-AWS services), this tool can likely meet your needs.
DNX has the solutions and experience you need. Contact us today for a blueprint of your journey towards data engineering.
How to Attract and Retain IT Personnel
Attract and Retain IT Personnel
Finding and retaining IT personnel can be challenging. Tech companies are the new black, and everyone is always on the lookout for the next big thing. The tech industry is constantly changing, meaning you not only need an employee who is competent and has the right skills for the job, but you also need someone adaptable. On top of a very specific skill set, you’re searching for the right fit for your team. Often, after a long but successful search, your IT personnel up and leave as they get a better offer. Now you are back at square one. If you’re not in Silicon Valley you may feel as though the best talents are passing you by, so how can you make your company more attractive to IT personnel, and furthermore, how can you keep them interested? Read on to learn what attracts and retains talent in tech.
First and foremost, technology professionals care about technology
The majority of people who choose technology as a profession, do so because they love it. IT professionals are passionate about their work and they are looking for ways to advance technology usage and types. Passion results in high levels of knowledge and curious minds that never stop researching. For this reason, IT personnel want to know what they will be working with, and how the company will react to new technologies and software as they are developed. By having a detailed technology roadmap in place you can entice IT personnel to take an interest in your business. A roadmap that is up-to-date, data-driven and forward-facing is what will catch the eye of professionals. If your software is behind the times you would benefit from planning to modernise your data. Outdated technology is difficult to upgrade and unable to meet modern day standards. If you are running an old version of .NET or Java, for example, you are unlikely to attract the IT professionals of the future. There is nothing more unattractive than a tech company plagued by inertia. By modernising your data and having a solid roadmap in place you can show the tech community that you are heading in the right direction. It isn’t too late, but if you don’t make the move soon, it may be. Aside from general enquiries, IT professionals may come to interviews with specific questions, and the more specific you can be when answering the more they will know you care about technology too.
Who is interviewing who?
IT personnel face no shortage of job opportunities. When interviewing someone for a tech position in your company, you may see the tables turn and find yourself on the receiving end. Preparing answers to the questions interviewees are likely to ask will give them faith in you and your business. Here are a few questions that an experienced IT professional may throw your way:
- What’s your current tech stack?
- What are your policies on updating and using current and modern technologies?
- How do you keep your technology updated?
- How do you release new versions?
- How do you adopt new versions?
- How do you test new possibilities?
Be specific. Ensure you have someone knowledgeable on the panel who can answer these questions with confidence. Having the CTO available to outline the roadmap and dive deep into the software used may win over the candidate. In addition, by letting it be known which software and programs you use, you may attract more tech talent who like working with that particular technology.
Catching it and keeping it are two different things.
So having an up-to-date roadmap and modernised data is a way of attracting tech talent into your business, but how do you hold on to them with the ever-present threat of tech giants peeking over your shoulders?
IT professionals are some of the most innovative minds of our times. They like to stay stimulated and they like to move forward. If you want to retain IT personnel, you have to make sure they are being rewarded with more than just a good salary. Empower your employees by embracing a learning environment: invest in education and hands-on training opportunities. Give employees the option of focussing on what interests them and play to their strengths. If an employee is keen to study machine learning, find out if there is room for machine learning in your business and implement it. This way not only are you supporting the growth of your employee but you will likely benefit from what they learn. In addition, consider including your IT personnel in the development or revision of your technology roadmap. Put them on the team and incorporate their insights, allowing them to see that their inputs are valued. Professionals are more likely to stay on a project where they feel they have some ownership. Professionals who are new to your team are also likely to have an idea of what competitors are doing, which is important to know. Using tools such as Tech Radar provides insight into which technology the community is currently excited about and what is on its way out.
We can forecast, but we’re not fortune tellers!
It is true that technology can be unpredictable. There are plenty of examples in recent history where hindsight has taught us a thing or two. Remember when Blockbuster laughed in Netflix's face at the suggestion of buying them out? Um, does anyone even remember Blockbuster at all? We rest our case: technology can be tricky. There is always a gamble in the future of tech, and not every business is going to get it right. Entire organisations can crash simply because a new technology disrupted the industry and made certain products or services obsolete. The important thing is to always be as prepared as you can be, and to stay agile and flexible. Value the input of your IT professionals and be willing to consider all options. Don't walk among the dinosaurs, soar among the stars.
Need a technology professional, but don’t work in a technology company? We have news for you.
Technology is no longer restricted to technology companies. What? Let us explain. Just because your company is not categorised as being in the technology industry does not mean you are exempt from needing a technological roadmap and structured tech activities. In this day and age, technology is integral to everything we do. The agriculture industry utilises IoT devices and drones undertaking recognition via GPS; the energy industry provides homes with smart meters showing real-time measurements; even the CEO of General Motors referred to GM as a software company for cars back in 2013. If you need to hire an IT professional, you need to consider yourself a technology company.
Know your target.
In conclusion, to attract and retain IT personnel, you need to know what they want. You must understand their desire for advanced technology, a culture of agility, and a learning environment, and then you must implement it. Make your company a place where people can grow so they don’t feel the urge to find growth elsewhere.
DNX has the solutions and experience you need. Contact us today for a blueprint of your journey towards data security.
What is the Real Cost of a Data Breach in 2022?
Did Data Breaches increase in 2021?
One of the biggest changes that occurred as a result of the COVID-19 pandemic is the way in which we work. Whilst remote work began as a temporary fix to deal with lockdowns, it is a shift that has been embraced by numerous businesses over the past two years. Such a sudden change, however, was not free of risk. The unpredictability of recent years has seen a focus on survival, with security falling by the wayside. And while we are all distracted by global happenings, hackers have been taking advantage.
Data breaches and the costs associated with them have been on the rise over the past several years, but the average cost per breach jumped from US$3.86 million in 2020 to US$4.24 million in 2021, the highest average total cost seen in the history of IBM's annual Data Breach report. Remote working is not solely to blame for increased data breaches; however, companies that did not implement any digital transformation changes in the wake of the pandemic had a 16.6% increase in data breach costs compared to the global average. For Australian companies, it is estimated that 30% will fall victim to some sort of data breach, and the consequences can be felt for years. The Australian Cyber Security Centre (ACSC) estimates the cost of cybercrimes for Australian businesses and individuals was AU$33 billion in 2021. To protect your business from becoming a part of these statistics, it is crucial to understand how data breaches can affect you and how to take the necessary precautions.
What exactly is a data breach?
Data breaches are diverse; they can be targeted, self-spreading or come from an insider; affect individuals or businesses; steal data or demand ransoms. Although certain Australian businesses are mandated by law to notify customers when a breach has occurred, many attacks are kept quiet, meaning their frequency is higher than commonly believed.
What are the different types of data breaches?
- Scams/phishing: Fraudulent emails or websites disguised as a known sender or company.
- Hacking: Unauthorised access gained by an attacker, usually through password discovery.
- Data spill: Unauthorised release of data by accident or as a result of a breach.
- Ransomware: Malicious software (malware) accesses your device and locks files. The criminals responsible then demand payment in order for access to be regained.
- Web shell malware: Attacker gains access to a device or network, a strategy that is becoming more frequent.
The most common category of sensitive data stolen during data breaches is the Personally Identifiable Information (PII) of customers. This data not only contains financial information such as credit card details, but can also be used in future phishing attacks on individuals. The average cost per record is estimated at between US$160 and US$180, meaning costs can add up very quickly for a business that loses thousands of customers' PII in a single attack. All industries can be affected by data breaches, but those with the highest costs are healthcare, financials, pharmaceuticals and technology. According to the 2021 IBM report, each of these industries had a slight decrease in costs associated with data breaches from 2020 to 2021, except for healthcare, which increased by a shocking 29.5%.
What are the costs?
IBM identified the ‘Four Cost Centres’, the categories contributing most to global data breach costs. In 2021 the breakdown was: Lost business cost (38%), Detection and escalation (29%), Post breach response (27%), Notification (6%).
Lost business, the highest cost category for seven consecutive years, includes business disruption and loss of revenue through system downtime (such as delayed surgeries due to ransomware in hospitals), lost customers, acquiring new customers, diminished goodwill and reputation losses.
Detection and escalation costs refer to investigative activities, auditing services, crisis management and communications.
Post breach response costs are associated with helping clients recover after a breach, such as opening new accounts and communicating with those affected. These also include legal expenditures, and, with compliance standards such as HIPAA and CDR becoming more commonplace, regulatory fines are adding significantly to costs in this category. Businesses with a high level of compliance failures are spending on average 51.1% more on data breaches than those with low compliance failures.
Notification costs include communications to those affected and regulators, determination of regulatory requirements and recruiting the assistance of experts. In Australia, businesses and not-for-profits with an annual turnover of more than $3 million, government agencies, credit reporting bodies and health service providers are required by law to inform customers of data breaches and how they can protect themselves from such breaches. It is crucial for businesses to be aware of these responsibilities or they may be subjected to paying further fines.
With lost business being the highest cost associated with breaches, it is no surprise that consequences can be felt years after the initial breach. Reports have found 53% of costs to be incurred two to three years after the breach for highly regulated industries such as healthcare and financial services.
Although significantly less than the global average, the average cost of a data breach in Australia still sits at around AU$3.35 million. Approximately 164 cybercrimes are reported each day in Australia and the attacks are growing more organised and sophisticated. One predictive factor of overall costs is the response time: the longer the lifecycle of a data breach, the more it will cost. Whilst a hacker can access an entire database in just a few hours, detecting a breach takes the average Australian organisation over six months! Many organisations never even identify that a breach has occurred, or find out through victory posts on the dark web. IBM reported that breaches contained in over 200 days cost a business US$1.26 million more than those contained in under 200 days. In addition, they found the average data breach lifecycle was a week longer in 2021 compared to the previous year.
How to avoid data breaches?
The way to protect your business against malicious use of advanced and sophisticated technology is by utilising advanced and sophisticated technology in your security systems. IBM found significantly lower overall costs for businesses with mature security postures, utilising zero trust, cloud security, AI and automation. It is estimated that with AI and machine learning, breaches are detected 27% faster. Mature zero trust systems also resulted in savings of US$1.76 million compared to organisations not utilising zero trust. Organisations with mature cloud modernisation contained breaches 77 days faster than other organisations, and those with high levels of compliance significantly reduced costs.
With data breaches on the rise, and modern businesses relying on technology more heavily than ever before, it is reasonable to predict the cost of data breaches in Australia will only increase in 2022. You can avoid becoming a victim and having to pay the price for years to come by modernising your data and meeting industry compliance regulations.
DNX has the solutions and experience you need. Contact us today for a blueprint of your journey towards data security.
Data Dependency
The Importance of Data Dependency
Why not investing in data platforms is setting your company up for disaster.
Companies with legacy systems or workloads face one of three problems more often than not. Maybe your company has already experienced issues with time to market, bugs in production, or limited test coverage due to a lack of confidence in releasing new features. These are the issues usually picked up by the CTO or a technical leader, who recognises the need to invest in architecture to increase the quality and speed of progress. There is, however, another important underlying problem that no one seems to be talking about.
How and where are you storing your data?
Looking at a typical legacy system these days, it is likely to be a Java or .NET application using a relational database storing a huge amount of data; we have seen companies with tables containing up to 13 years of data!
When we ask customers why they keep all their data on the same database, they rarely have an answer. Often, old or irrelevant data has been retained not for any reason, but simply because it has been forgotten about and has ended up lost among the masses.
With consistently increasing amounts of data comes consistently increasing response times for querying information from the database. Whilst it may not be noticeable day to day, it could lead to serious consequences, such as losing valuable time and revenue whilst waiting for a backup to restore after a database outage.
It is puzzling to think that whilst we have our best minds considering everything down to the most minute details, we largely ignore the way in which we store data. It seems we have a collective ‘out of sight, out of mind’ attitude.
It is extremely common to come across companies that are generating reports from a single database. But here’s the interesting part: each software is unique in terms of security and operation, meaning storage is different for each and every one. Let’s consider an ecommerce store. In this case you would want to organise your tables and data in a way that allows users to easily add items to their shopping cart, place an order, and pay. To make this possible, you would have a shopping cart table, orders table, and products table, which is what we call normalising the database – a relational database. So far, so good. Now let’s look at what happens when you want to run a report. To fully understand your ecommerce business you will want to see your data in various ways, for example, number of sales in NSW in the last seven days; average shopping cart price; average checkout amount; average shipping time frame. Each of these scenarios require data from multiple sources, but by keeping all your data on the same database you are risking the whole operation.
Just as you may lose deals if customers have to wait five seconds to add an item to their shopping cart, you also lose valuable resources while waiting an hour to generate a report – something that is not uncommon to see on legacy applications. Not to mention the direct and indirect consequences of having to wait hours to restore a backup after a database outage (that is, if you even have a backup!).
By choosing not to modernise your data, your business is perched squarely on a ticking time bomb. With a typical ratio of 15 – 20 Developers to 1 Database Administrator (DBA), the DBA is without a doubt the underdog. If the DBA's suggestions are ignored, developers may begin to modernise their source code and adopt microservices whilst ignoring the company's data in its entirety.
So what happens next?
Each microservice has to manage state, and will most likely use its own database to do so, though some could use something else entirely.
Instead of having separate tables for your shopping cart, orders, and products, you now have a product microservice with its own product table, far from the customer microservice and its customer table, which is located in a different database. In addition, the shopping cart may now be in a NoSQL database.
Now comes the time to run your reports, but you can no longer do a SELECT in a database and join all the tables because the tables are unreachable. Now you find yourself with a whole host of different problems and a new level of complexity.
Consider the data dimension to fully modernise your application
Now that you understand the importance of data modernisation, you need to know a few key points. To take full advantage of the cloud when modernising your architecture and workloads, you have to find out which tools the cloud has to offer. First, you need to understand that Microservices have to manage states and will likely use a database to do so, due to transactional responsibility. For example, when you create a new product for your ecommerce store, you want it to exist until you actively decide to discontinue it, so you don’t want the database to forget about it – we refer to this as durability.
Consistency is equally important; for example, when you market the product as unavailable, you do not want it to be included in new orders. This is a transactional orientation.
Now we need to understand the analytical view. In order to see how many products you are selling to students in year 8 to year 12 you need to run a correlation between the products, customers and orders. This requires you to have a way of viewing things differently. Most companies choose to build a data warehouse where they can store data in a way that enables them to slice and change the dimension they are looking at. Whilst this is not optimal for transactional operations, it is optimal for analytical operations.
That segregation is crucial. If you build that, you can keep your Microservices with multiple different databases in one state or multiple states in an architecture that is completely decoupled from an analytical data warehouse facility that enables and empowers the business to understand what is happening in the business.
This is hugely important! Operating without these analytical capabilities is like piloting a plane with no radio or navigating systems: you can keep flying but you have no idea where you are going, nor what is coming your way! This analytical capability is crucial to the business but you have to segregate that responsibility. Keeping your new modernised architecture independent from your data warehouse and analytical capability is key.
So, where do we go from here? Utilising Data Lakes
DNX has assisted companies in achieving high levels of success through the adoption of data lakes. A data lake can contain structured and unstructured data as well as all the information you need from microservices, transactional databases and other sources. If you want to include external data from the market today, such as fluctuations in oil prices – go ahead! You can input them into the data lake too! You should take care to extract and clean your data, if you can, before putting it into the data lake, as this will make its future journey smoother.
Once all your data is in the data lake, you can then mine relevant information and input it in your data warehouse where it can be easily consumed.
Data modernisation can save your company from impending disaster, but it is no small feat!
Most people assume it is as simple as breaking down a monolith into microservices, but the reality is far more complex.
When planning your data modernisation you must consider reporting, architectural, technical and cultural changes, as well as transactional versus analytical responsibilities of storing stages, and their segregation. All of this becomes a part of your technological road map and shows you the way to a more secure future for your business.
If you would like to know how we have achieved this for multiple clients, and how we can do the same for you, contact us today.
At DNX Brasil, we work to bring a better cloud and application experience to digital-native companies. We focus on AWS, Well-Architected solutions, containers, ECS, Kubernetes, Continuous Integration/Continuous Delivery and Service Mesh. We are always looking for experienced cloud computing professionals for our team, with a focus on cloud-native concepts. Check out our open-source projects at https://github.com/DNXLabs and follow us on Twitter, LinkedIn or YouTube.
Using DbT and Redshift to provide efficient Quicksight reports

TL;DR:
Using Redshift as a Data Warehouse to integrate data from AWS Pinpoint, AWS DynamoDB, Microsoft Dynamics 365 and other external sources.
Once the data is ingested to Redshift, DbT is used to transform the data into a format that is easier to be consumed by AWS Quicksight.
Each Quicksight report/chart has a fact table. This strategy allows Quicksight to efficiently query the data needed.
The Customer
The client is a health tech startup. They created a mobile app that feeds data to the cloud using a serverless architecture. They have several data sources and would like to integrate this data into a consolidated database (Data Warehouse). This data would then be presented in a reporting tool to help the business drive decisions. The client’s data sources:
- AWS DynamoDB – User preferences
- AWS Pinpoint – Mobile application clickstream
- Microsoft Dynamics 365 – Customer relationship management
- Stripe – Customer payments
- Braze – A customer engagement platform
The client also needs to send data from the Data Warehouse to Braze, used by the marketing team to develop campaigns. This was done by the client, using Hightouch Reverse ETL.
The Solution

The overall architecture of the solution is presented in Figure 1. AWS Redshift is the Data Warehouse, which receives data from Pinpoint, DynamoDB, Stripe and Dynamics 365. Quicksight then queries data from Redshift to produce business reports. In the following sections, we describe each data source integration. As a cloud-native company, we work towards allowing our clients to easily manage their cloud infrastructure. For that reason, the infrastructure was provisioned using Terraform, which allowed the client to apply the same network and data infrastructure in their 3 different environments with ease.
DynamoDB
The users' preferences are stored on AWS DynamoDB. A simple AWS Glue job, created using Glue Studio, is used to send DynamoDB data to Redshift. It was not possible to use the COPY command from Redshift as the client's DynamoDB contains complex attributes (SET). The job contains a 5-line custom function to flatten the JSON records from DynamoDB, presented in Table 1. For Glue to access DynamoDB tables, we needed to create a VPC Endpoint.
# Glue Studio custom transform; in a standalone Glue script these classes come from
# awsglue.transforms (Relationalize, DropNullFields) and awsglue.dynamicframe (DynamicFrameCollection).
def MyTransform(glueContext, dfc) -> DynamicFrameCollection:
    # Take the single incoming DynamicFrame from the collection
    df = dfc.select(list(dfc.keys())[0])
    # Relationalize flattens nested/complex attributes (such as DynamoDB SET types)
    dfc_ret = Relationalize.apply(frame=df, staging_path="s3://bucket-name/temp", name="root", transformation_ctx="dfc_ret")
    df_ret = dfc_ret.select(list(dfc_ret.keys())[0])
    # Drop null fields before loading into Redshift
    dyf_dropNullfields = DropNullFields.apply(frame=df_ret)
    return DynamicFrameCollection({"CustomTransform0": dyf_dropNullfields}, glueContext)
Pinpoint
The mobile app clickstream is captured using AWS Pinpoint and stored on S3 using an AWS Kinesis delivery stream. There are many ways to load data from S3 to Redshift: using the COPY command, a Glue job or Redshift Spectrum. We decided to use Redshift Spectrum as we would need to load the data every day. Using Spectrum, we can rely on the S3 partitions to filter the files to be loaded. The Pinpoint bucket contains partitions for Year, Month, Day and Hour. At each run of our ELT process, we filter the S3 load based on the latest date already loaded. The partitions are automatically created using a Glue Crawler, which also automatically parses JSON into struct column types. Table 2 shows a SQL query that illustrates the use of Spectrum partitions.
select
event_type,
event_timestamp,
arrival_timestamp,
attributes.page,
attributes.title,
session.session_id as session_id,
client.cognito_id as cognito_id,
partition_0::int as year,
partition_1::int as month,
partition_2::int as day,
partition_3::int as hour,
sysdate as _dbt_created_at
from pinpoint-analytics.bucket_name
-- this filter will only be applied on an incremental run
where
partition_0::int >= (select date_part('year', max(event_datetime)) from stg_analytics_events)
and partition_1::int >= (select date_part('month', max(event_datetime)) from stg_analytics_events)
and partition_2::int >= (select date_part('day', max(event_datetime)) from stg_analytics_events)
Microsoft Dynamics 365 and Stripe
Two important external data sources required in this project are CRM data from Dynamics and payment information from Stripe. Fivetran, an efficient and user-friendly service for data integration, has more connectors than other tools, including connectors for Microsoft Dynamics and Stripe, and its easy-to-use interface was essential for this client.
DbT – ELT FLow
The client wanted a data transformation tool that was scalable, collaborative and allowed version control. DbT was our answer. As we have seen with many other clients, DbT has become the first choice when it comes to running ELT (Extract, Load, Transform) workflows. After we built the first DAGs (Directed Acyclic Graphs) with DbT, using Jinja templates for raw tables (sources) and staging tables (references), and showed them to the client, they were amazed by the simplicity and the software-engineering approach that DbT brings. Having an ELT workflow that is source controlled is a distinctive feature of DbT.
In DbT, the workflow is separated into different SQL files. Each file contains a partial staging transformation of the data until the data is consolidated into a FACT or DIMENSION table. These final tables are formed by one or more staging tables. Using the Jinja templates to reference tables between each other allows DbT to create a visual representation of the relationships. Figure 2 presents an example of a DbT visualization. DbT allowed us to create tables that could be efficiently queried by Quicksight.

Quicksight
Once the data is organised and loaded into Redshift, it is time to visualise it. AWS Quicksight easily integrates with Redshift and several other data sources. It provides a number of chart options and allows clients to embed their reports in their internal systems. For this client, we used bar charts, pie charts, line charts and a Sankey diagram for customer segment flow. The client was very happy with the look and feel of the visualisations and with the loading speed. Some minor limitations of Quicksight include a) not being able to give a title to multiple Y-axes and b) not being able to make the Sankey diagram follow the dashboard theme. Those aside, it allowed us to achieve a great improvement in the client's ability to make data-driven decisions.
A great next step for Quicksight would be to implement QuickSight object migration and version control from staging to production environments.
Conclusion
In this article, we described a simple and efficient architecture that enabled our client to obtain useful insights from their data. Redshift was used as the central repository of data, the Data Warehouse, receiving ingestion from several data sources such as Pinpoint, DynamoDB, Dynamics and Stripe. DbT was used for the ELT workflow and Quicksight for the dashboard visualisations. We expect to be using this same architecture for clients to come as it provides agile data flows and insightful dashboards.
At DNX Brasil, we work to bring a better cloud and application experience to digital-native companies. We focus on AWS, Well-Architected solutions, containers, ECS, Kubernetes, Continuous Integration/Continuous Delivery and Service Mesh. We are always looking for experienced cloud computing professionals for our team, with a focus on cloud-native concepts. Check out our open-source projects at https://github.com/DNXLabs and follow us on Twitter, LinkedIn or YouTube.
Launching Amazon FSx for Windows File Server and Joining a Self-managed Domain using Terraform

TL;DR:
The GitHub repo with all the scripts is here.
Because of specific requirements, reasons, or preferences, some customers need to self-manage a Microsoft AD directory on-premises or in the cloud.
AWS offers options to have their fully managed Microsoft Windows file servers (Amazon FSx for Windows File Server) join a self-managed Microsoft Active Directory.
In this post, I will provide an example of launching an FSx for Windows File Server and joining a self-managed domain using Terraform.
This article won't go into detail on the following items, as they are presumed to already exist.
Requirements:
- self-managed Microsoft AD directory
- the fully qualified, distinguished name (FQDN) of the organisational unit (OU) within your self-managed AD directory that the Windows File Server instance will join; and
- valid DNS servers and networking configuration (VPC/Subnets) that allows traffic from the file system to the domain controller.
In addition, I recommend going through the steps in “Validating your Active Directory configuration” in the AWS documentation, at the following link, to validate your self-managed AD configuration before starting to create the FSx file system:
In the file _variables.tf, we provide the details for the self-managed AD, including IPs, DNS name, Organisational Unit, and the domain username and password:
_variables.tf
variable "ad_directory_name" {
type = string
default = "example.com"
}
variable "ad_directory_ip1" {
type = string
default = "XXX.XXX.XXX.XXX"
}
variable "ad_directory_ip2" {
type = string
default = "XXX.XXX.XXX.XXX"
}
variable "fsx_name" {
type = string
default = "fsxblogpost"
}
variable "domain_ou_path" {
type = string
default = "OU=Domain Controllers,DC=example,DC=com"
}
variable "domain_fsx_username" {
type = string
default = "fsx"
}
variable "domain_fsx_password" {
type = string
default = "placeholder"
}
variable "fsx_deployment_type" {
type = string
default = "SINGLE_AZ_1"
}
variable "fsx_subnet_ids" {
type = list(string)
default = ["subnet-XXXXXXXXXXXX"]
}
variable "vpc_id" {
type = string
default = "vpc-XXXXXXXXXXXX"
}
variable "fsx_deployment_type" {
type = string
default = "SINGLE_AZ_1"
}
variable "fsx_subnet_ids" {
type = list(string)
default = ["subnet-XXXXXXXXXXXX"]
}
variable "vpc_id" {
type = string
default = "vpc-XXXXXXXXXXXX"
}
The file fsx.tf is where we effectively create the FSx file system, along with the KMS encryption key and the KMS key policy. The KMS key is optional; however, I strongly recommend having the file system encrypted.
fsx.tf
data "aws_iam_policy_document" "fsx_kms" {
statement {
sid = "Allow FSx to encrypt storage"
actions = ["kms:GenerateDataKey"]
resources = ["*"]
principals {
type = "Service"
identifiers = ["fsx.amazonaws.com"]
}
}
statement {
sid = "Allow account to manage key"
actions = ["kms:*"]
resources = ["arn:aws:kms:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:key/*"]
principals {
type = "AWS"
identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"]
}
}
}
resource "aws_kms_key" "fsx" {
description = "FSx Key"
deletion_window_in_days = 7
policy = data.aws_iam_policy_document.fsx_kms.json
}
resource "aws_fsx_windows_file_system" "fsx" {
kms_key_id = aws_kms_key.fsx.arn
storage_capacity = 100
subnet_ids = var.fsx_subnet_ids
throughput_capacity = 32
security_group_ids = [aws_security_group.fsx_sg.id]
deployment_type = var.fsx_deployment_type
self_managed_active_directory {
dns_ips = [var.ad_directory_ip1, var.ad_directory_ip2]
domain_name = var.ad_directory_name
username = var.domain_fsx_username
password = var.domain_fsx_password
organizational_unit_distinguished_name = var.domain_ou_path
}
}
resource "aws_security_group" "fsx_sg" {
name = "${var.fsx_name}-fsx-sg"
description = "SG for FSx"
vpc_id = data.aws_vpc.selected.id
tags = {
Name = "${var.fsx_name}-fsx-sg"
}
}
resource "aws_security_group_rule" "fsx_default_egress" {
description = "Traffic to internet"
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
security_group_id = aws_security_group.fsx_sg.id
cidr_blocks = ["0.0.0.0/0"]
}
resource "aws_security_group_rule" "fsx_access_from_vpc" {
type = "ingress"data "aws_iam_policy_document" "fsx_kms" {
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  security_group_id = aws_security_group.fsx_sg.id
  cidr_blocks       = [data.aws_vpc.selected.cidr_block]
}
Once you apply the scripts on Terraform, it should take around 15 minutes for the resources to be created:
aws_fsx_windows_file_system.fsx: Creation complete after 15m54s [id=fs-05701e8e6ad3fbe24]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
You should see the FSx file system created and in the Available state in the AWS Console, which means FSx was able to join the self-managed domain:
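If you'd rather confirm this from a script rather than the console, a small boto3 check (the file system id below is the one printed by Terraform) might look like this:
```python
import boto3

fsx = boto3.client("fsx")

# Replace with the id from the Terraform output (e.g. fs-05701e8e6ad3fbe24).
response = fsx.describe_file_systems(FileSystemIds=["fs-05701e8e6ad3fbe24"])

for fs in response["FileSystems"]:
    # Lifecycle should be AVAILABLE once the file system has joined the self-managed domain.
    print(fs["FileSystemId"], fs["Lifecycle"], fs["DNSName"])
```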

Conclusion
I hope the instructions and terraform scripts provided can make your life easier when launching FSx for Windows File Server and joining a self-managed domain using Terraform.
When recently working on a project, I noticed there weren’t many examples online, so I decided to write this blog post to help others.
I would encourage you to open an issue or feature request on the github repo in case you need any additional help when using the scripts.
At DNX Brasil, we work to bring a better cloud and application experience to digital-native companies. We focus on AWS, Well-Architected solutions, containers, ECS, Kubernetes, Continuous Integration/Continuous Delivery and Service Mesh. We are always looking for experienced cloud computing professionals for our team, with a focus on cloud-native concepts. Check out our open-source projects at https://github.com/DNXLabs and follow us on Twitter, LinkedIn or YouTube.
AWS OpenTelemetry for centralised observability in a multi-account architecture

As a cloud consulting company, we at DNX work directly with many Fintechs and SaaS companies using AWS.
Many of these companies need to meet high levels of compliance by providing their software in a single-tenant architecture, where each of their customers has their own AWS account to guarantee full isolation between customers.
This raises a challenge for managing and monitoring these individual accounts, which can reach the hundreds.
The Business Challenge
The challenge is to build a centralised observability solution that aggregates metrics, logs, and traces (the three pillars of observability), plus alarms, with the data flowing privately within AWS.
In this example, we will consider a client where each tenant runs a stack of ECS Fargate with App Mesh, and who wishes to centralise the observability of these stacks in a cost-effective way.
Objectives
The SaaS Observability proposal includes the following main objectives:
- Reduced operation hours (a centralised panel across all customers)
- Reduced cost (an observability backend stored in a centralised account)
- Quick response to alarms (automation to trigger alarms based on metrics across customers)
- Pluggable (ability to add the observability strategy to the current stack)
The Solution
We started by breaking the problem into three parts.
Tracing
We decided to try Jaeger, an open-source, end-to-end distributed tracing system. All we needed was to deploy a sidecar container running Jaeger alongside the App Mesh Envoy and set three parameters on the Envoy container so it sends tracing data to Jaeger. This works out of the box.
ENABLE_ENVOY_JAEGER_TRACING = 1
JAEGER_TRACER_PORT = 9411
JAEGER_TRACER_ADDRESS = 127.0.0.1
But another problem arose: Jaeger supports different storage backends, including in-memory, Cassandra, Elasticsearch, Kafka, and more. The recommended backend is Elasticsearch, and we always aim to use managed AWS services where possible. This means an Amazon Elasticsearch Service cluster should be deployed in the main account to store the tracing data from every tenant. But because we are sending requests between two different accounts, we ended up with a question: how do we make this link happen, given that there is no VPC endpoint support for Elasticsearch across two accounts?
The answer was to create a proxy using a private API Gateway and a Lambda at the boundary of the main account. The Lambda takes the request coming from Jaeger and simply proxies it to Elasticsearch, adding AWS (SigV4) credential headers to the request. On the tenant account we configured a VPC endpoint to this API Gateway and limited access to specific VPCs (only the tenant VPC can make requests to the API Gateway).
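As an illustration only (this is not the code from the project), a minimal signing proxy could look like the sketch below. It assumes an API Gateway Lambda proxy integration, the aws4 signing library, and an ES_ENDPOINT environment variable pointing at the Elasticsearch domain:
```typescript
// Minimal sketch of a SigV4 signing proxy in front of Amazon Elasticsearch Service.
// Assumptions: aws4 (and @types/aws4) installed, ES_ENDPOINT set, and the Lambda
// execution role allowed to call the Elasticsearch domain.
import * as https from 'https';
import * as aws4 from 'aws4';

export const handler = async (event: any): Promise<any> => {
  const opts: any = {
    host: process.env.ES_ENDPOINT,   // e.g. vpc-xxxx.<region>.es.amazonaws.com
    path: event.path,                // forward the path Jaeger requested
    method: event.httpMethod,
    service: 'es',
    region: process.env.AWS_REGION,
    headers: { 'Content-Type': 'application/json' },
    body: event.body || undefined,
  };

  // aws4 signs the request with the Lambda role credentials from the environment.
  aws4.sign(opts);

  return new Promise((resolve, reject) => {
    const req = https.request(opts, (res) => {
      let data = '';
      res.on('data', (chunk) => (data += chunk));
      res.on('end', () => resolve({ statusCode: res.statusCode, body: data }));
    });
    req.on('error', reject);
    if (opts.body) req.write(opts.body);
    req.end();
  });
};
```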
Metrics
Moving on to metrics, we decided to go with the AWS Distro for OpenTelemetry Collector (AWS OTel Collector), an AWS-supported distribution of the upstream OpenTelemetry Collector. It supports selected components from the OpenTelemetry community and is fully compatible with AWS compute platforms, including EC2, ECS, and EKS. It enables users to send telemetry data to Amazon Managed Service for Prometheus as well as the other supported backends.
Like we did with Jaeger, we deployed aws-otel-collector as a sidecar container alongside the ECS services. The configuration was stored inside an SSM parameter, following the documentation at https://aws-otel.github.io/docs/setup/ecs/config-through-ssm.
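As a rough sketch (not from the original project), if your task definitions happen to be managed with CDK, injecting that SSM parameter into the collector sidecar could look something like this; the parameter name and log prefix below are assumptions:
```typescript
import * as cdk from '@aws-cdk/core';
import * as ecs from '@aws-cdk/aws-ecs';
import * as ssm from '@aws-cdk/aws-ssm';

// Sketch: attach the aws-otel-collector sidecar to an existing task definition,
// reading its YAML config from an SSM parameter (the name below is an assumption).
export function addOtelSidecar(scope: cdk.Construct, taskDefinition: ecs.TaskDefinition) {
  const configParam = ssm.StringParameter.fromStringParameterName(
    scope, 'OtelConfig', '/otel/collector-config');

  taskDefinition.addContainer('aws-otel-collector', {
    image: ecs.ContainerImage.fromRegistry('amazon/aws-otel-collector'),
    secrets: {
      // The collector reads its configuration from AOT_CONFIG_CONTENT when set.
      AOT_CONFIG_CONTENT: ecs.Secret.fromSsmParameter(configParam),
    },
    logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'otel' }),
  });
}
```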
aws-otel-collector configuration
Usually a basic configuration is required to set up receivers, processors, service, and exporters.
For receivers, we used awsecscontainermetrics (memory, network, storage and CPU usage metrics), otlp, and statsd. Remember to open the ports in the container definition, otherwise services will not be able to send metrics to the collector.
```yaml
receivers:
  awsecscontainermetrics:
    collection_interval: 20s
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:55681"
  statsd:
    endpoint: "0.0.0.0:8125"
    aggregation_interval: 60s
```
The processors section lets us apply filters and define how the metrics should be treated.
```yaml
processors:
  batch/metrics:
    timeout: 60s
  filter:
    metrics:
      include:
        match_type: strict
        metric_names:
          - ecs.task.memory.utilized
          - ecs.task.memory.reserved
          - ecs.task.cpu.utilized
          - ecs.task.cpu.reserved
          - ecs.task.network.rate.rx
          - ecs.task.network.rate.tx
          - ecs.task.storage.read_bytes
          - ecs.task.storage.write_bytes
```
The exporters section is the most important part of the configuration, because it is where we use awsprometheusremotewrite to send the metrics to the main account.
```yaml
exporters:
  awsprometheusremotewrite:
    endpoint: "https://aps-workspaces.{{region}}.amazonaws.com/workspaces/{{workspace_id}}/api/v1/remote_write"
    namespace: "tenant_name"
    aws_auth:
      service: aps
      region: "{{region}}"
      role_arn: "arn:aws:iam::{{main_account_id}}:role/{{amp_role}}"
  logging:
    loglevel: debug
```
Notice that the role_arn points to a role in the main account; when aws-otel-collector starts, it assumes that role, which grants permission to remote-write to Amazon Managed Prometheus. To make this work, we need:
- A trust relationship on the tenant Task Role so that ECS tasks can assume it:
```json { "Version": "2008-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ecs-tasks.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } ```
- A Task Role policy that allows assuming the role aws-otel-collector will use inside the main account:
```json { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::{{main_account_id}}:role/{{amp_role}}" } } ```
- The {{amp_role}} in the main account should have permissions to write to the AMP workspace:
```json { "Statement": [ { "Action": [ "aps:RemoteWrite", "aps:GetSeries", "aps:GetLabels", "aps:GetMetricMetadata" ], "Resource": "*", "Effect": "Allow", "Sid": "Prometheus" } ] } ```
The last piece of the OTel configuration is the service section, which routes the receivers, processors, and exporters into pipelines.
```yaml
service:
  extensions: [health_check]
  pipelines:
    metrics/otlp:
      receivers: [otlp]
      processors: [batch/metrics]
      exporters: [logging, awsprometheusremotewrite]
    metrics/statsd:
      receivers: [statsd]
      processors: [batch/metrics]
      exporters: [logging, awsprometheusremotewrite]
    metrics/ecs:
      receivers: [awsecscontainermetrics]
      processors: [filter]
      exporters: [logging, awsprometheusremotewrite]
```
The diagram for the metrics solution:

Logs
For the logs, we used a simple trick inside AWS: enabling cross-account, cross-Region CloudWatch Logs access.
Check out https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Cross-Account-Cross-Region.html for more information.
Grafana
Once the OTel Collector and Jaeger were running, the last step was to deploy and set up Amazon Managed Service for Grafana inside the main account.
The deployment is simple but requires AWS SSO to be configured in the root account.
To set up the data source, go to:
- AWS Data Sources
- Amazon Managed Service for Prometheus
- Select the region and click Add data source
Then the visualisation we have in the dashboard is:
Conclusion
With the new observability stack built by DNX, the client now has centralised storage and dashboards for metrics, traces, and logs in an elastic and highly available way on AWS. The decoupled solution also enables hybrid configurations, and alarms help accelerate development by surfacing improvements and bugs early. We calculate that this architecture brings a considerable cost reduction, cutting CloudWatch and X-Ray usage by up to 70%!
How to deploy an ALB + ASG + EC2 using AWS CDK and TypeScript

Have you heard about the Cloud Development Kit, or CDK?
What is AWS CDK?
The AWS Cloud Development Kit (CDK) lets you define your cloud Infrastructure as Code (IaC) in one of several supported programming languages. It is intended for moderately to highly experienced AWS users.
In this blog post, you will see how to create your CDK Construct and why this should be done.
Infrastructure as Code
To use the CDK, we should first know what Infrastructure as Code (IaC) is. If you have never heard of it before, you can read about the concepts behind it here: https://containersonaws.com/introduction/infrastructure-as-code/. To summarise, IaC manages infrastructure (machines, load balancers, networks, services) using configuration files. So basically, instead of going to the console and creating all the resources that your application requires, we write a few lines of code and it provisions everything for us.
You’re probably thinking, ‘But this is nothing new. There are tools like Terraform, CloudFormation, Ansible, or even bash scripts to do this simply and clearly.’ And yes, you are right, and they play their role very well. The difference is that the CDK allows you to use your expertise in programming languages to define infrastructure in code, provisioning resources through AWS CloudFormation. The AWS CDK supports TypeScript, JavaScript, Python, Java, C#/.Net, and Go. Additionally, developers can use one of the supported programming languages to define reusable cloud components known as Constructs, and today we are going to build a super-powered EC2 Construct.
Let’s code!
How to Create CDK Constructs
First of all, we need to set up our environment. In this case, I will use a Docker image following the same principles as 3 Musketeers (if you don’t know what this is, I recommend you have a look, it is pretty nice 😉).
Dockerfile
FROM node:12-alpine

ARG AWS_CDK_VERSION=1.111.0

RUN apk -v --no-cache --update add \
    python3 \
    ca-certificates \
    groff \
    less \
    bash \
    make \
    curl \
    wget \
    zip \
    git \
    && \
    update-ca-certificates && \
    pip3 install awscli && \
    npm install -g aws-cdk@${AWS_CDK_VERSION} && \
    rm -rf /var/cache/apk/*

WORKDIR /work

CMD ["cdk"]
Let’s build the image:
$ docker build -t my-cdk-image:1.11.0 .
Now, let’s get into the Docker container. As the container is stateless, we are going to share our folder using volumes:
$ docker run --rm -it -v $(pwd):/work my-cdk-image:1.11.0 bash
Create the CDK project
1. First, let’s create a project folder called cdk-ec2-construct:
$ mkdir cdk-ec2-construct
$ cd cdk-ec2-construct
2. Now create your CDK application:
$ cdk init app --language=typescript
Applying project template app for typescript
# Welcome to your CDK TypeScript project!
This is a blank project for TypeScript development with CDK.
The `cdk.json` file tells the CDK Toolkit how to execute your app.
## Useful commands
* `npm run build`   compile typescript to js
* `npm run watch`   watch for changes and compile
* `npm run test`    perform the jest unit tests
* `cdk deploy`      deploy this stack to your default AWS account/region
* `cdk diff`        compare deployed stack with current state
* `cdk synth`       emits the synthesized CloudFormation template
Executing npm install...
✅ All done!
Exploring the files, we can see that the CLI has done a big chunk of the work for us, creating the whole folder structure and the initial base files.
We will find our stack file in:
/lib/cdk-ec2-construct-stack.ts
And the main entrypoint of the application is in:
/bin/cdk-ec2-construct.ts
Let’s start creating our Construct (aka module), which we will use within our Stack to create as many EC2 setups as we want.
First, let’s create our Construct file /lib/cdk-ec2-construct.ts:
export class CdkEc2Construct extends cdk.Construct {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id);
  }
}
Now, as we are using TypeScript, we are going to write an interface for our props.
interface ICdkEc2Props {
  VpcId: string;
  ImageName: string;
  CertificateArn: string;
  InstanceType: string;
  InstanceIAMRoleArn: string;
  InstancePort: number;
  HealthCheckPath: string;
  HealthCheckPort: string;
  HealthCheckHttpCodes: string;
}
Getting some real data from our Account
const vpc = ec2.Vpc.fromLookup(this, 'VPC', {
  vpcId: props.VpcId
})

const ami = ec2.MachineImage.lookup({
  name: props.ImageName
})
Creating the Load Balancer
this.loadBalancer = new elbv2.ApplicationLoadBalancer(this, `ApplicationLoadBalancerPublic`, {
  vpc,
  internetFacing: true
})

const httpsListener = this.loadBalancer.addListener('ALBListenerHttps', {
  certificates: [elbv2.ListenerCertificate.fromArn(props.CertificateArn)],
  protocol: elbv2.ApplicationProtocol.HTTPS,
  port: 443
})
Creating the Auto Scaling Group
const autoScalingGroup = new autoscaling.AutoScalingGroup(this, 'AutoScalingGroup', {
  vpc,
  instanceType: new ec2.InstanceType(props.InstanceType),
  machineImage: ami,
  allowAllOutbound: true,
  role: iam.Role.fromRoleArn(this, 'IamRoleEc2Instance', props.InstanceIAMRoleArn),
  healthCheck: autoscaling.HealthCheck.ec2()
})
Including scripts in the user data:
autoScalingGroup.addUserData('sudo yum install -y https://s3.region.amazonaws.com/amazon-ssm-region/latest/linux_amd64/amazon-ssm-agent.rpm')
autoScalingGroup.addUserData('sudo systemctl enable amazon-ssm-agent')
autoScalingGroup.addUserData('sudo systemctl start amazon-ssm-agent')
autoScalingGroup.addUserData('echo "Hello World" > /var/www/html/index.html')
Now that we have almost everything in place, we need to create the connection between our Load Balancer and our Auto Scaling group, and we can do that by adding a Target Group to our Load Balancer.
httpsListener.addTargets('TargetGroup', {
  port: props.InstancePort,
  protocol: elbv2.ApplicationProtocol.HTTP,
  targets: [autoScalingGroup], // Reference to our Auto Scaling group.
  healthCheck: {
    path: props.HealthCheckPath,
    port: props.HealthCheckPort,
    healthyHttpCodes: props.HealthCheckHttpCodes
  }
})
Also, we will expose our Load Balancer as a read-only property, so we can access it from our Stack.
export class CdkEc2Construct extends cdk.Construct {
  readonly loadBalancer: elbv2.ApplicationLoadBalancer

  constructor(scope: cdk.Construct, id: string, props: ICdkEc2Props) {
    .
    .
    .
  }
}
Now, our construct should look like this:
import * as cdk from '@aws-cdk/core'
import * as ec2 from '@aws-cdk/aws-ec2'
import * as elbv2 from '@aws-cdk/aws-elasticloadbalancingv2'
import * as targets from '@aws-cdk/aws-elasticloadbalancingv2-targets'
import * as autoscaling from '@aws-cdk/aws-autoscaling'
import * as acm from '@aws-cdk/aws-certificatemanager'
import * as iam from '@aws-cdk/aws-iam'

interface ICdkEc2Props {
  VpcId: string;
  ImageName: string;
  CertificateArn: string;
  InstanceType: string;
  InstanceIAMRoleArn: string;
  InstancePort: number;
  HealthCheckPath: string;
  HealthCheckPort: string;
  HealthCheckHttpCodes: string;
}

export class CdkEc2Construct extends cdk.Construct {
  readonly loadBalancer: elbv2.ApplicationLoadBalancer

  constructor(scope: cdk.Construct, id: string, props: ICdkEc2Props) {
    super(scope, id)

    const vpc = ec2.Vpc.fromLookup(this, 'VPC', {
      vpcId: props.VpcId
    })

    const ami = ec2.MachineImage.lookup({
      name: props.ImageName
    })

    this.loadBalancer = new elbv2.ApplicationLoadBalancer(this, `ApplicationLoadBalancerPublic`, {
      vpc,
      internetFacing: true
    })

    const httpsListener = this.loadBalancer.addListener('ALBListenerHttps', {
      certificates: [elbv2.ListenerCertificate.fromArn(props.CertificateArn)],
      protocol: elbv2.ApplicationProtocol.HTTPS,
      port: 443,
      sslPolicy: elbv2.SslPolicy.TLS12
    })

    const autoScalingGroup = new autoscaling.AutoScalingGroup(this, 'AutoScalingGroup', {
      vpc,
      instanceType: new ec2.InstanceType(props.InstanceType),
      machineImage: ami,
      allowAllOutbound: true,
      role: iam.Role.fromRoleArn(this, 'IamRoleEc2Instance', props.InstanceIAMRoleArn),
      healthCheck: autoscaling.HealthCheck.ec2(),
    })

    autoScalingGroup.addUserData('sudo yum install -y https://s3.region.amazonaws.com/amazon-ssm-region/latest/linux_amd64/amazon-ssm-agent.rpm')
    autoScalingGroup.addUserData('sudo systemctl enable amazon-ssm-agent')
    autoScalingGroup.addUserData('sudo systemctl start amazon-ssm-agent')
    autoScalingGroup.addUserData('echo "Hello World" > /var/www/html/index.html')

    httpsListener.addTargets('TargetGroup', {
      port: props.InstancePort,
      protocol: elbv2.ApplicationProtocol.HTTP,
      targets: [autoScalingGroup],
      healthCheck: {
        path: props.HealthCheckPath,
        port: props.HealthCheckPort,
        healthyHttpCodes: props.HealthCheckHttpCodes
      }
    })
  }
}
Now that we have built our construct, let’s create our Stack. For that, we will edit the file /lib/cdk-ec2-construct-stack.ts:
import * as cdk from '@aws-cdk/core'
import * as route53 from '@aws-cdk/aws-route53';
import * as route53Targets from '@aws-cdk/aws-route53-targets';
import { CdkEc2Construct } from './cdk-ec2-construct';

export class SampleAppStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props)

    const app = new CdkEc2Construct(this, 'EC2Test', {
      VpcId: "vpc-123456890123",
      ImageName: "Amazon 2 Linux",
      CertificateArn: "arn:aws:acm:us-east-1:123456789:certificate/be12312-ecad-3123-1231s-123ias9123",
      InstanceType: "t3.micro",
      InstanceIAMRoleArn: "arn:aws:iam::123456789:role/ec2-role",
      InstancePort: 80,
      HealthCheckPath: "/",
      HealthCheckPort: "80",
      HealthCheckHttpCodes: "200"
    })

    const route53_hosted_zone = route53.HostedZone.fromLookup(this, 'MyZone', {
      domainName: 'labs2.dnx.host'
    })

    new route53.ARecord(this, 'AliasRecord', {
      zone: route53_hosted_zone,
      target: route53.RecordTarget.fromAlias(new route53Targets.LoadBalancerTarget(app.loadBalancer)),
      recordName: 'cdk.labs2.dnx.host'
    })
  }
}
We should now be able to deploy our Stack. To do that, we just need to run a single command. The framework will then take care of everything for us: build the code, synthesize a CloudFormation template, deploy the CloudFormation stack, and monitor the deployment.
$ cdk deploy
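One caveat worth flagging, as an assumption about your setup (the post does not show the bin file): because the stack uses Vpc.fromLookup and HostedZone.fromLookup, CDK needs a concrete account and region at synth time, and the target environment must have been bootstrapped with cdk bootstrap. A minimal /bin/cdk-ec2-construct.ts could look roughly like this:
```typescript
#!/usr/bin/env node
import * as cdk from '@aws-cdk/core';
import { SampleAppStack } from '../lib/cdk-ec2-construct-stack';

const app = new cdk.App();

// fromLookup() calls need a concrete account/region so the CDK can query your account.
new SampleAppStack(app, 'SampleAppStack', {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});
```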
Wrapping up
If you already know Terraform or CloudFormation, you may be wondering, ‘But that’s it? Isn’t it missing resources? Where are the Security Groups? Where are all the extra settings needed to deploy a stack like this?’.
Well, this is the magic that the CDK provides. Because there is a library behind all the methods and functions, it sees all the dependencies and automatically creates the missing resources for us, connecting them so that everything is wired with as little access as possible, leaving only what is necessary for the resources to work together. For example, the instance’s Security Group: since we declared that the EC2 instances listen on port 80, only port 80 will be added to the Security Group as an ingress rule.
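To make that concrete, the security group wiring that addTargets() performs for you is roughly equivalent to opening the instance port yourself with the connections API. A hand-written version (illustrative only; you do not need it in the construct above) would look something like:
```typescript
import * as ec2 from '@aws-cdk/aws-ec2';
import * as elbv2 from '@aws-cdk/aws-elasticloadbalancingv2';
import * as autoscaling from '@aws-cdk/aws-autoscaling';

// Assume these are the ASG and ALB created in the construct above.
declare const autoScalingGroup: autoscaling.AutoScalingGroup;
declare const loadBalancer: elbv2.ApplicationLoadBalancer;

// Roughly what CDK wires up when the ASG is registered as a listener target:
// the ALB's security group may reach the instances on the target port only.
autoScalingGroup.connections.allowFrom(
  loadBalancer,
  ec2.Port.tcp(80),
  'Allow traffic from the ALB on the instance port'
);
```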