Blog

Harold Smith III is Building Faster Ways to Fix Things
Mortensen Designs

As Monkton’s CEO and co-founder, Harold Smith III interacts with everyone from executives to end users, developing ways to solve issues before they occur.

Harold has led Monkton from its first project, developing a mobile maintenance app for the U.S. Air Force, which went from kickoff to user acceptance testing in less than four months. Since then, he’s made it a priority to work with the best partner companies in the industry, valuing organizations committed to change and not afraid to take risks.

Read More
Build Faster Ways to Fix Things
Alana Wilbur

Government agencies need to evolve and enhance their ability to solve challenges with forward-thinking processes. Monkton explores four approaches to speed up this evolution—organizational agility, collaborative partnerships, flexible regulatory frameworks, and informed decision making.

By embracing these strategies, agencies can more quickly adopt new technologies and approaches that better serve their citizens.

Read More
How Serverless Computing Can Transform Government
Alana Wilbur

Serverless architecture removes the burden of managing servers, allowing developers to focus solely on code and deliver capabilities rapidly.

Monkton explores how Serverless Computing can optimize cost, scalability, and efficiency in software development for government agencies and businesses looking to streamline their operations and reduce infrastructure management complexity.

Read More
Combining the Power of Edge Computing with Monkton's Cloud Native Solutions
Alana Wilbur

Edge Computing decentralizes data processing, reducing latency and improving real-time capabilities at the network edge, with real significance for real-world applications.

Monkton highlights the potential for Edge Computing in various industries, emphasizing its role in enhancing performance, security, data integrity, and efficiency in an increasingly interconnected world.

Read More
Rapid DoD Modernization: The MATTER IDIQ
Alana Wilbur

Explore Monkton's MATTER IDIQ (Indefinite Delivery, Indefinite Quantity) contract, awarded in 2020, which enables the delivery of cutting-edge, mobile-first solutions to federal civilian agencies and the DoD.

With a funding ceiling of $500 million, Monkton aims to enhance mission operations, prioritizing security and user-friendly experiences for government agencies.

Read More
Fast, Flexible Federal Procurement: What Is an IDIQ?
Alana Wilbur

Monkton highlights the significance of IDIQ contracts in enhancing flexibility and efficiency in government acquisitions.

Gain insight into how an IDIQ (Indefinite Delivery, Indefinite Quantity) contract streamlines procurement processes for government agencies by allowing them to order various quantities of goods or services over a specified period.

Read More
What is NSA NIAP?
Alana Wilbur

NIAP, the National Information Assurance Partnership, is the U.S. government's neutral, third-party testing initiative: a partnership between NIST and the NSA to assess the security of IT products against strict Common Criteria (CC) standards.

NIAP ensures all products and solutions meet globally recognized security standards, which is essential in the face of increasing cyber threats and remote work scenarios. Monkton's CEO, Harold Smith III, highlights the importance of NIAP in building confidence and safeguarding the digital landscape.

Read More
Automated Multi-Region deployments in AWS: Lambda
Harold Smith III

Amazon's Lambda service is perhaps one of our favorites, because it lets you just hand code over to AWS and let AWS deal with the complexities of executing it. Best of all, it is cost effective: instead of servers running 24/7 and wasting resources and money, you are only charged when your code needs to execute. I'd even argue it's more environmentally friendly, since you only burn what you use!

This does come at some "cost." Lambda executions, when you aggregate them, can be more expensive than other compute. But the benefit on the flip side is that you don't have to worry about managing servers (even with ECS on EC2 you need to monitor servers), patching, and so on. So, is that a cost you're willing to pay?

Debugging can be a bit more difficult, and getting your application running and tested the first time can be a bit painful if something isn't working as expected. Once you've cracked it the first time, though, the deployments that follow are a breeze.

But, again, this all falls back to making life easier. Our goal is to never have to manage servers. Lambda (and, to be fair, Fargate) enables this.
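
To make the "just hand over code" model concrete, here is a minimal, illustrative Python handler; the function name and payload field are assumptions for the sketch, not from a Monkton project:

import json

# AWS provisions, scales, and patches the compute that runs this function;
# you are billed only for the time it actually executes.
def handler(event, context):
    name = event.get("name", "world")  # "name" is an illustrative input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }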

Read More
AWS CodePipeline Random Musings
Harold Smith III

We have laid out the following accounts for our core DSOP architecture:

DSOP CodePipelines

DSOP CodeCommit

DSOP Tooling

Each of these provides a different outcome and limits what developers and ops can access. Traditionally, many would combine the CodeCommit and CodePipeline accounts into one. That is an okay strategy, but it can cause issues with separation of duties. Our goal is to break that pattern and have CodeCommit and CodePipeline reside in different accounts.

There are a lot of "gotchas" in the process of developing pipelines. For one, creating a pipeline that works across multiple branches is basically impossible with CodeCommit feeding directly into CodePipeline; you need a 1:1 pipeline per branch. So, if you are using a branch-based development strategy, you will create a lot of pipelines, and updating the pipelines that still exist becomes a tangled mess.

Breaking our code into separate accounts for CodeCommit and CodePipeline helped enable this. Our strategy follows below.
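
As a hedged sketch of the 1:1 branch problem itself (repository name, pipeline names, and stage layout are illustrative assumptions): one pipeline definition has to be stamped out per branch, for example by cloning a "template" pipeline with boto3.

import copy
import boto3

codecommit = boto3.client("codecommit")
codepipeline = boto3.client("codepipeline")

# Clone an existing "template" pipeline once per CodeCommit branch.
branches = codecommit.list_branches(repositoryName="app")["branches"]
template = codepipeline.get_pipeline(name="app-main-pipeline")["pipeline"]

for branch in branches:
    pipeline = copy.deepcopy(template)
    pipeline.pop("version", None)  # the version counter is managed by CodePipeline
    pipeline["name"] = f"app-{branch}-pipeline"
    # Assumes the first action of the first stage is the CodeCommit source.
    pipeline["stages"][0]["actions"][0]["configuration"]["BranchName"] = branch
    codepipeline.create_pipeline(pipeline=pipeline)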

Read More
Automated Multi-Region deployments in AWS: DynamoDB
Harold Smith III

Virtually all of the data storage we do is in DynamoDB. We have chosen DynamoDB over the alternatives because there is literally nothing to manage; it is Platform as a Service ("PaaS") up and down.

You define a table, indices, and you are off and running.

For those that don't know, DynamoDB is a "NoSQL" database. Think of it as a large hash table that provides single-digit-millisecond response times. It scales so well that Amazon itself uses it to drive the Amazon store.

In 2017, Amazon launched "Global Tables" for DynamoDB. Global Tables let you define a table in one or more regions; DynamoDB automatically syncs it to the other regions without additional work on your part.

Thus, you can easily have multi-region capabilities with virtually no overhead. This article digs into DynamoDB, focusing only on Global Tables.
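
As a minimal sketch of how little work is involved (the table name and regions are illustrative; the 2019.11.21 version of Global Tables adds replicas via update_table):

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-gov-west-1")

# Adding a replica turns an existing table into a Global Table; DynamoDB
# handles the cross-region replication from here.
dynamodb.update_table(
    TableName="identity-records",
    ReplicaUpdates=[{"Create": {"RegionName": "us-gov-east-1"}}],
)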

Read More
Automated Multi-Region deployments in AWS: Gotchas
Harold Smith III

"Gotcha" maybe a bit over the top, but perhaps "caveats" is a better term. Leveraging StackSets alone can cause some order of operation issues, as well as adding multi-region on top of it.

We will discuss these caveats in more depth in other articles, but wanted to touch on StackSets up front, since they underpin everything we will do.

When you apply StackSets to OUs, automated deployment works like a charm most of the time. As we laid out in the intro, we deploy all of our IaC as StackSets into OU targets. We do this to automate deployments and ensure a consistent deployment across all of our accounts for an application.

This also enables us to create private tenants for customers that only they can access, with minimal overhead.

Our entire cloud journey is about removing overhead, reducing maintenance, and building more awesome things.
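
As a hedged sketch of the OU-targeted pattern described above (the stack set name, OU ID, and regions are illustrative, and service-managed permissions with Organizations are assumed to already be in place):

import boto3

cfn = boto3.client("cloudformation", region_name="us-gov-west-1")

# Every account under the OU, in each listed region, gets a stack instance.
cfn.create_stack_instances(
    StackSetName="app-baseline",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11112222"]},
    Regions=["us-gov-west-1", "us-gov-east-1"],
    OperationPreferences={"MaxConcurrentCount": 1, "FailureToleranceCount": 0},
)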

Read More
Automated Multi-Region deployments in AWS
Harold Smith III

The tide has changed on the resiliency of applications that reside in the cloud. For years we were told that "be multi-Availability Zone" was the way to build resilient cloud apps. But the outages that have recently hit every major Cloud Service Provider ("CSP") show that it isn't a sufficient strategy on its own if your aim is extremely high availability.

So, we need to think bigger. But this comes with increased cost and complexity. The fact is, there just aren't a whole lot of organizations doing multi-region deployments, let alone talking about it. This series hopes to help fill that gap.

We decided to author a series of blog posts on how to build resilient cloud applications that span multiple regions in AWS, specifically AWS GovCloud. Our goal is uptime, for both pushing data to and retrieving data from a cloud application. This series will touch on the areas we are focusing on: building web apps and web APIs that process data.

Most of our applications use several core AWS technologies (listed below). We have made a concerted effort to migrate to pure Platform as a Service ("PaaS") where we can, and to avoid IaaS entirely, as it requires additional management of resources. We can't tell you how all of this will work with lift and shift, as our engineering is centered on cloud native services.

The goal for us, and the reason for the cloud, is to let someone else do the hard work. For our cloud-based solutions we do not use Kubernetes ("k8s") at all; we find the overhead too cumbersome when we can let AWS do the management for us. When we cut over to edge computing, k8s becomes a viable solution.

At a high level, we use the following services to build and deliver applications:

AWS Lambda and/or AWS ECS Fargate for compute

AWS DynamoDB for data storage (Global Tables)

AWS S3 for object storage

AWS Kinesis + AWS S3 for long term logging of applications to comply with DoD SRG and FedRAMP logging

Now, there are a lot of applications that may need more services. Things like Athena or QuickSight may be necessary, but we consider those (at least for the solutions we are building) to be ancillary to the core applications. For instance, if you can't get to QuickSight to visualize some data for an hour, it's not that big of a deal (at least for this solution). But if you can't log data from the field in real time, that is a big deal.

Read More
Custom CloudFormation Resource for looking up config data
Harold Smith III

This project, CloudFormation Lookup, enables you to build CloudFormation templates that pull configuration data from DynamoDB as a dictionary. Consider the scenario where you are using CodeBuild to deploy apps into a series of AWS accounts you control. Each of those accounts may have differing configuration data, depending on the intent of the account. For instance, perhaps you deploy an application across segregated tenants for customers; each of those tenants may have different configurations, like DNS host names.

This project can be found on GitHub.

As of right now, CloudFormation has no means to pull that data on a per-account basis. To solve this problem, we have developed a custom CloudFormation resource that you can declare in your CloudFormation template.
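
As a hedged sketch of how the backing Lambda for such a lookup resource might work (the table name, key schema, and property names are illustrative assumptions, not the project's actual API):

import json
import urllib.request
import boto3

dynamodb = boto3.resource("dynamodb")

def handler(event, context):
    data = {}
    if event["RequestType"] in ("Create", "Update"):
        # Look up per-account configuration from an illustrative table.
        table = dynamodb.Table("account-config")
        item = table.get_item(Key={"key": event["ResourceProperties"]["Key"]})
        data = item.get("Item", {}).get("value", {})

    # Report success back to CloudFormation via the pre-signed ResponseURL.
    body = json.dumps({
        "Status": "SUCCESS",
        "PhysicalResourceId": "config-lookup",
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,
    }).encode()
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    urllib.request.urlopen(req)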

Read More
Automating iOS App Development with CI/CD Pipelines with macOS Build Servers
Harold Smith III

As part of our series on building iOS apps, we will walk through configuring a build server. This build server can be used for building macOS apps as well.

This write-up is not intended to solve all your CI/CD issues for building iOS apps; it is more of a "bare bones" build server that will help you scale your DevSecOps pipelines for mobile.

To be up front about this, automating builds on macOS has a few pain points. In the pursuit of a more secure OS, macOS tends to be on the difficult side for build automation.

For instance, configuring a "headless" build server with FileVault enabled is impossible at this point, so you cannot VNC into a server sitting in a rack without first unlocking it locally. Setting up "auto login" in macOS with FileVault enabled will not work either, because FileVault does not allow it. One must take these issues into account.

Without logging in, you cannot (in this build server configuration) run the GitLab Runners.

So, options can be limited depending on what you are attempting to do. To work around this, you may want to leave your macOS boot volume unencrypted and store all your data in an encrypted volume. This will enable the macOS build server to boot and auto-login so jobs can run.

For GitLab, you need no ingress point to the build server, only egress to reach your GitLab repo. So, one could drop this box in a private subnet with some outbound egress and be reasonably comfortable with the security around it.

Automating many of these steps hasn't been easy; there are a lot of password and confirmation prompts that require a user to do something.

Read More
Cross Account DynamoDB Access
Harold Smith III

We at Monkton use DynamoDB a lot for storage. It is extremely fast and scalable. A lot of the work we do is in AWS GovCloud, so this post is geared towards that, but it ports easily to other regions. We spent some time digging around, and being frustrated, trying to get this to work, and want to share lessons learned so you can avoid those headaches.

Defining the need

We are helping build a new set of services; part of our multi-account architecture is a centralized "Identity SaaS" service. While we have micro-services available in that account to read/write the "Identity SaaS" DynamoDB, we opted to let other trusted services and accounts read/write it directly. This was simply a performance choice on our end to speed things up: we wanted to avoid creating an HTTPS request, waiting for it to do its thing in DynamoDB, and returning, when we could do it directly using the same logic.

Many considerations

Part of configuring this is understanding where and which services we will be using. For this project, we are using Lambda and ECS Fargate to deploy backend services. For the purposes of this demo we are looking at Fargate, but the lessons apply to Lambda as well. Part of that is following "best practices" and deploying these services into VPCs with private subnets.
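
As a hedged sketch of the direct read path (the role ARN, table, and key are illustrative): the calling service assumes a role exposed by the "Identity SaaS" account, then talks to its DynamoDB table directly.

import boto3

# Assume a role the "Identity SaaS" account exposes to trusted services.
sts = boto3.client("sts", region_name="us-gov-west-1")
creds = sts.assume_role(
    RoleArn="arn:aws-us-gov:iam::111122223333:role/identity-dynamodb-access",
    RoleSessionName="cross-account-dynamodb",
)["Credentials"]

# Build a DynamoDB client from the temporary cross-account credentials.
dynamodb = boto3.client(
    "dynamodb",
    region_name="us-gov-west-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

item = dynamodb.get_item(
    TableName="identity-records",
    Key={"id": {"S": "user-1234"}},
)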

Read More
Automated Testing of iOS Apps in CI/CD Pipelines (Part One)
Harold Smith III

This is a multipart series we are putting together to walk through automating DevSecOps for mobile solutions. We are going to focus on iOS, but much of this is applicable to Android as well. Our goal is to leverage GitLab as the CI/CD engine and services like AWS Device Farm, SonarQube, and NowSecure for testing. Finally, the app should pre-stage by publishing itself to Apple's App Store for TestFlight distribution.

For as many mobile solutions as exist out there, the write-ups and documentation for automating testing, specifically UI testing, are substandard to say the least. This post lays out some of the techniques we leverage to perform fully automated UI testing of mobile apps (iOS specifically).

iOS Testing in AWS Device Farm

We leverage AWS Device Farm to implement testing. The capabilities of Device Farm are fantastic; the difficult bit is putting the documentation into practice.

Again, everyone talks about automated testing, but who is actually doing it?

We'll dive into AWS Device Farm later.
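
As a taste of what that looks like, a hedged sketch of scheduling a run from a pipeline (all ARNs are illustrative, and Device Farm lives in commercial us-west-2):

import boto3

devicefarm = boto3.client("devicefarm", region_name="us-west-2")

# Schedule an XCUITest run against a previously uploaded .ipa and test bundle.
run = devicefarm.schedule_run(
    projectArn="arn:aws:devicefarm:us-west-2:111122223333:project:example",
    appArn="arn:aws:devicefarm:us-west-2:111122223333:upload:app-ipa",
    devicePoolArn="arn:aws:devicefarm:us-west-2:111122223333:devicepool:top",
    name="ios-ui-tests",
    test={
        "type": "XCTEST_UI",
        "testPackageArn": "arn:aws:devicefarm:us-west-2:111122223333:upload:ui-tests",
    },
)
print(run["run"]["arn"])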

Read More
ICYMI: How to Sell Your Digital Transformation Vision
Alana Wilbur

Why do we need change? What is the problem we are solving, and who is it being solved for? What does change look and feel like to you? Clear and succinct communication is necessary not only for executive leaders, but also for others to become champions of your vision. Read this blog from Monkton to learn more about how to sell your digital transformation vision.

Read More
Automatically configuring AWS GovCloud Accounts
Harold Smith III

This technical article walks through a CloudFormation template that creates a Step Function, which in turn creates AWS GovCloud accounts with AWS Organizations and automatically links them. Our end goal is to simply submit a JSON package like this:

{
  "email": "some-email@example.com",
  "name": "The Account Name"
}

And have the accounts generated and linked automatically. This is a rather manual process if you do it by hand.

This CloudFormation template provides two main components:

A configured S3 bucket and KMS key that enable child AWS accounts to pull from the bucket

A Step Function that automatically creates and links AWS accounts

This script is intended for creating AWS GovCloud accounts, but can be modified for creating standard AWS accounts. Note: this will also create the requisite commercial AWS accounts that GovCloud accounts are tied to.

One note of caution: this CloudFormation template is deployed into the root AWS GovCloud account you own.
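
At its core, the Step Function wraps the Organizations API call below; a hedged sketch, run from the commercial management account with illustrative values:

import boto3

organizations = boto3.client("organizations")

# Creates a linked pair: a commercial account and its GovCloud twin.
response = organizations.create_gov_cloud_account(
    Email="some-email@example.com",
    AccountName="The Account Name",
)

# The call is asynchronous; poll the status until the accounts exist.
status = organizations.describe_create_account_status(
    CreateAccountRequestId=response["CreateAccountStatus"]["Id"],
)
print(status["CreateAccountStatus"]["State"])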

Read More
NGINX Auto Configure from S3
Harold Smith III

This technical article will break down how to automatically configure a custom build of NGINX (using Alpine Linux) that runs in Fargate.

Why? Well, we want encrypted data in transit throughout the stack of the AWS Fargate solution we are deploying. Our entry point is an AWS Application Load Balancer accepting traffic on port 443 for TLS. We have an ACM certificate stored in our account that we reference to configure it.

From there, we have a Task running in a Service/Cluster within Fargate. This task is a RESTful web service. Our desire is not to configure that task to terminate TLS itself, to avoid unnecessary changes to the containers.

So, we will leverage NGINX as a reverse proxy and use S3 to automatically configure NGINX on the fly as the container launches! We accomplish this by extending the NGINX Alpine Linux container and adding a script that downloads the configuration from S3 on launch, and voilà, done.
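
A hedged sketch of that launch-time step, written here in Python rather than the shell script the post describes (the bucket and key come from illustrative task environment variables):

import os
import boto3

# Pull the per-environment NGINX config from S3 at container launch.
s3 = boto3.client("s3")
s3.download_file(
    os.environ["CONFIG_BUCKET"],
    os.environ.get("CONFIG_KEY", "nginx/nginx.conf"),
    "/etc/nginx/nginx.conf",
)

# Replace this process with nginx in the foreground, as containers expect.
os.execvp("nginx", ["nginx", "-g", "daemon off;"])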

Read More
Mobilizing for USDA Inspectors
Alana Wilbur

Upon entering a grocery store, the general public is typically not pondering whether the safety, efficacy, and security of the food supply presented to them is being protected, but that is in fact what the FDA is responsible for. When it comes to meat, labels now inform us of how the animals were fed, the conditions they were raised in, and a myriad of other facts that manufacturers capitalize on to gain consumer loyalty. However, for field-based meat inspectors specifically, how that data is generated is never a thought – not even an afterthought.

With so many nuanced compliance regulations within agriculture, mobility means always knowing what is necessary to complete inspections. USDA inspectors are the prime example of hurry up and wait—whether it be improper paper documentation, waiting for a form sign-off, or lag time getting meat over the border because of regulatory requirements—inspectors need mobile apps simply to know how to do their jobs. Mobile solutions provide clarity on when, where, and how meat inspections can occur safely and securely, while still getting the product to its end destination in a timely manner.

Read More