Blog
Monkton Starter Guide: Functions as a Service
FaaS (Function as a Service) is a Cloud Computing service that gives developers a platform to build, run, and manage applications—without the burden of maintaining the infrastructure they run on. Find out more about FaaS, including how it relates to Serverless, with a simple breakdown from Monkton.
Modernizing Federal Operations with Monkton EdgeMX and Hypersonic
Discover how Monkton Hypersonic helped build and deliver Monkton EdgeMX, a modern cloud solution, which can help erase technical debt in Federal and enterprise landscapes.
Liza Smith Leads Monkton’s Marketing and Creative Efforts
Liza’s mission is to build conversations and integrated promotions around advanced technologies and custom-built products, while amplifying Monkton’s unique brand story—not just providing education on what’s possible, but inspiring and driving action through authentic human perspectives and experiences.
Monkton’s 3-step Approach to Data-informed Decision Making
Government decisions impact millions. How can agencies ensure their decisions are informed? Monkton’s data-driven strategies, Cloud-Based Edge Capable (CBEC) architectures, and 'code-is-code' approach deliver speed, adaptability, and proactive problem-solving. These tips will help you make more informed decisions on every mission—and take action when it’s needed most.
The 2024 Advanced Guide to Edge Computing for Government Efficiency
To fix things faster for the federal government, Monkton takes an agile, Edge-based approach.
We know not every solution needs Edge capabilities, but we also know missions live on the Edge and mission needs can rapidly change. By delivering CBEC solutions to our partners and warfighters at the onset of every solution, customers have the Edge advantage baked in from day one and can easily build for future capabilities. Leading with CBEC also breaks the obsolete tech trend of building one-off custom solutions.
Edge Computing for Tactical Edge Environments
Edge Computing solutions enhance data security and real-time access, benefiting warfighters and sectors like agriculture and healthcare.
More tasks can be completed in real-time in the field even when connectivity is lost, saving time and money. Learn how Monkton’s secure apps can streamline government processes in tactical settings, ensuring data access in low-connectivity environments, authenticating every transaction, boosting efficiency, and addressing challenges.
Zero Trust is the New Security Standard
Monkton’s patented technology, Anchorage, uses hardware-based cryptography to secure sensitive data transmitted between devices in an IoT environment. It verifies that devices are exactly what they claim to be and ensures the authenticity of the data they send, making it essential for securing sensitive IoT data transmission.
Harold Smith III is Building Faster Ways to Fix Things
As Monkton’s CEO and co-founder, Harold Smith III interacts with everyone from executives to end users, developing ways to solve issues before they occur.
Harold has led Monkton from its first project, developing a mobile maintenance app for the U.S. Air Force, which went from kickoff to user acceptance testing in less than four months. Since then, he’s made it a priority to work with the best partner companies in the industry, valuing organizations committed to change and not afraid to take risks.
Build Faster Ways to Fix Things
Government agencies need to evolve and enhance their ability to solve challenges with forward-thinking processes. Monkton explores four approaches to speed up this evolution—organizational agility, collaborative partnerships, flexible regulatory frameworks, and informed decision making.
By embracing these strategies, agencies can more quickly adopt new technologies and approaches that better serve their citizens.
How Serverless Computing Can Transform Government
Serverless architecture removes the burden of managing servers, allowing developers to focus solely on code and deliver new capabilities rapidly, today.
Monkton explores how Serverless Computing can optimize cost, scalability, and efficiency in software development for government and businesses looking to streamline their operations and reduce infrastructure management complexities.
Combining the Power of Edge Computing with Monkton's Cloud Native Solutions
Edge Computing decentralizes data processing, reducing latency and improving real-time capabilities at the network Edge, with significant implications for real-world applications.
Monkton highlights the potential for Edge Computing in various industries, emphasizing its role in enhancing performance, security, data integrity, and efficiency in an increasingly interconnected world.
Rapid DoD Modernization: The MATTER IDIQ
Explore Monkton's MATTER IDIQ (Indefinite Delivery, Indefinite Quantity) contract, awarded in 2020, enabling the delivery of cutting-edge, mobile-first solutions to federal government agencies, civilian agencies, and the DoD.
With a generous funding ceiling of $500 million, Monkton aims to enhance mission operations, prioritizing security and user-friendly experiences for government agencies.
Fast, Flexible Federal Procurement: What Is an IDIQ?
Monkton highlights the significance of IDIQ contracts in enhancing flexibility and efficiency in government acquisitions.
Gain insight into how an IDIQ (Indefinite Delivery, Indefinite Quantity) contract streamlines procurement processes for government agencies by allowing them to order various quantities of goods or services over a specified period.
What is NSA NIAP?
NIAP, or the National Information Assurance Partnership, is the U.S. government’s neutral, third-party testing initiative. NIAP is a partnership between NIST and the NSA to assess the security of IT products through strict Common Criteria (CC) standards.
NIAP ensures all products and solutions meet globally recognized security standards, which is essential in the face of increasing cyber threats and remote work scenarios. Monkton's CEO, Harold Smith III, highlights the importance of NIAP in building confidence and safeguarding the digital landscape.
Automated Multi-Region deployments in AWS: Lambda
Amazon's Lambda service is perhaps our favorite, because it lets you simply hand over code to AWS and let AWS deal with the complexities of executing it. Best of all, it is cost effective: instead of servers running 24/7 and wasting resources and money, you are only charged when your code needs to be executed. I'd even argue it's more environmentally friendly, since you only burn what you use!
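To make that concrete, here is a minimal sketch of what "handing over code" looks like: a hypothetical Python handler that AWS invokes on demand. The event shape and function name are illustrative, not from an actual Monkton project.

```python
import json


def handler(event, context):
    """Entry point that AWS Lambda invokes; there is no server to provision or patch."""
    # 'event' carries the request payload (e.g., from API Gateway or SQS).
    name = event.get("name", "world")

    # Do the actual work here; you are billed only for this execution time.
    message = f"Hello, {name}!"

    # Return a simple API Gateway-style response.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": message}),
    }
```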
This does come at some "cost." Lambda executions can be more expensive in aggregate than other compute options. But the benefit on the flip side is that you don't have to worry about managing servers (even with ECS on EC2 you need to monitor servers), patching, and so on. So, is that a cost you'd be willing to pay?
Debugging can be a bit more difficult. Getting your application running and tested on the first go can be a bit painful if something isn't working as expected. Once you get it cracked the first time, though, the iterations that follow are a breeze.
But, again—this all falls back to making life easier. Our goal is to never have to manage servers. Lambda (and, to be fair, Fargate) enables this.
AWS CodePipeline Random Musings
We have laid out the following accounts for our core DSOP architecture:
DSOP CodePipelines
DSOP CodeCommit
DSOP Tooling
Each of these provides us with different outcomes and helps limit access for developers and ops. Traditionally, many would combine the CodeCommit and CodePipelines accounts into one. This is an okay strategy, but it could potentially cause issues with separation of duties. Our goal is to break that pattern and let CodeCommit and CodePipeline reside in different accounts.
There are a lot of "gotchas" in the process of developing pipelines. For one, creating a pipeline that works on multiple branches is basically impossible with CodeCommit feeding directly into CodePipeline; you need a 1:1 pipeline per branch. So, if you are using a branch-based development strategy, you will create a lot of pipelines. This becomes a tangled mess when you need to update those pipelines, assuming they still exist.
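To illustrate the 1:1 pipeline-per-branch problem, the sketch below uses boto3 to enumerate the branches in a CodeCommit repository and report which ones lack a matching pipeline. The repository name and the pipeline naming convention are hypothetical, not our actual layout.

```python
import boto3

codecommit = boto3.client("codecommit")
codepipeline = boto3.client("codepipeline")

REPO_NAME = "my-service"  # hypothetical repository name

# Every branch in the repository (pagination omitted for brevity).
branches = codecommit.list_branches(repositoryName=REPO_NAME)["branches"]

# Every pipeline that already exists in this account.
existing = {p["name"] for p in codepipeline.list_pipelines()["pipelines"]}

# With CodeCommit feeding CodePipeline directly, each branch needs its own
# pipeline, e.g. "my-service-feature-login". Report the ones that are missing.
for branch in branches:
    pipeline_name = f"{REPO_NAME}-{branch.replace('/', '-')}"
    if pipeline_name not in existing:
        print(f"Branch '{branch}' has no pipeline ({pipeline_name})")
```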
Breaking our code into separate accounts for CodeCommit and CodePipeline helped enable that separation of duties. Our strategy follows below.
Automated Multi-Region deployments in AWS: DynamoDB
Virtually all of our data storage is in DynamoDB. We have chosen DynamoDB over the alternatives because there is literally nothing to manage—it is Platform as a Service ("PaaS") up and down.
You define a table, indices, and you are off and running.
For those that don't know, DynamoDB is a "NoSQL" database. Think of it as a large hash table that provides single-digit millisecond response times. It scales so well that it is what Amazon (proper) uses to drive the Amazon Store.
In 2017, Amazon launched "Global Tables" for DynamoDB. Global Tables let you define a table that spans one or more regions; DynamoDB automatically replicates it to the other regions without any additional work on your part.
Thus, you can easily have multi-region capabilities with virtually no overhead. We'll dig into DynamoDB in this article, focusing only on Global Tables.
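As a rough sketch of how little there is to manage, the snippet below creates a table with boto3 and then adds a replica region using the current (2019.11.21) Global Tables API. The table name and the GovCloud regions are hypothetical placeholders, not our production configuration.

```python
import boto3

REGION_PRIMARY = "us-gov-west-1"   # hypothetical primary region
REGION_REPLICA = "us-gov-east-1"   # hypothetical replica region
TABLE_NAME = "mission-data"        # hypothetical table name

dynamodb = boto3.client("dynamodb", region_name=REGION_PRIMARY)

# Define the table: a partition key, on-demand billing, and streams enabled
# (streams are required for Global Tables replication).
dynamodb.create_table(
    TableName=TABLE_NAME,
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)

# Wait until the table is ACTIVE before adding a replica.
dynamodb.get_waiter("table_exists").wait(TableName=TABLE_NAME)

# Add a replica in the second region; DynamoDB handles replication from here on.
dynamodb.update_table(
    TableName=TABLE_NAME,
    ReplicaUpdates=[{"Create": {"RegionName": REGION_REPLICA}}],
)
```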
Automated Multi-Region deployments in AWS: Gotchas
"Gotcha" maybe a bit over the top, but perhaps "caveats" is a better term. Leveraging StackSets alone can cause some order of operation issues, as well as adding multi-region on top of it.
We will discuss these caveats more in depth in other articles, but wanted to touch on StackSets up front, since they underpin everything we will do.
With StackSets applied to OUs, automated deployment works like a charm, most of the time. As we laid out in the Intro, we deploy all of our IaC as StackSets into OU targets. We do this to automate deployments and ensure a consistent deployment across all of our accounts for an application.
This also enables us to create private tenants for customers, which only they can access, with minimal overhead.
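As a rough illustration of that pattern, the snippet below creates a StackSet with service-managed permissions and deploys stack instances into an OU across two regions. The StackSet name, OU ID, template file, and regions are hypothetical placeholders.

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-gov-west-1")

# Hypothetical values; substitute your own StackSet name, OU, and regions.
STACK_SET_NAME = "app-core-infrastructure"
OU_ID = "ou-example-11111111"
REGIONS = ["us-gov-west-1", "us-gov-east-1"]

with open("template.yaml") as f:
    template_body = f.read()

# Create the StackSet with service-managed permissions so accounts joining
# the OU automatically receive the stack.
cloudformation.create_stack_set(
    StackSetName=STACK_SET_NAME,
    TemplateBody=template_body,
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Deploy stack instances into every account under the OU, in both regions.
cloudformation.create_stack_instances(
    StackSetName=STACK_SET_NAME,
    DeploymentTargets={"OrganizationalUnitIds": [OU_ID]},
    Regions=REGIONS,
)
```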
Our entire cloud journey is about removing overhead, reducing maintenance needs, and building more awesome things.
Automated Multi-Region deployments in AWS
The tides have changed on the resiliency of applications built in the cloud. For years we were told that being Multi-Availability Zone was the way to build resilient cloud apps. But the outages that have hit every major Cloud Service Provider ("CSP") recently show that it isn't always enough if your aim is extremely high availability.
So, we need to think bigger. But, this comes at increased cost and increased complexity. The fact is, there just aren't a whole lot of organizations doing multi-region deployments—let alone ones talking about it. This series hopes to assist in filling that gap.
We decided to author a series of blog posts on how to build resilient cloud applications that span multiple regions in AWS, specifically AWS GovCloud. Our goal is uptime for both pushing data to and retrieving data from a cloud application. This series will touch on several things we are focusing on: building Web Apps and Web APIs that process data.
Most of our applications use several AWS core technologies (listed below). We have made a concerted effort to migrate to pure Platform as a Service ("PaaS") where we can. We want to avoid IaaS totally, as it requires additional management of resources. We can't tell you how all of this will work with Lift and Shift, as our engineering is centered around using cloud native services.
The goal for us, and the reason for the cloud, is to let someone else do the hard work. For our cloud-based solutions we do not use Kubernetes ("k8s") at all; we find the overhead too cumbersome when we can let AWS do all the management for us. When we cut over to Edge Computing, k8s becomes a viable solution.
At a high level, we use the following services to build and deliver applications:
AWS Lambda and/or AWS ECS Fargate for compute
AWS DynamoDB for data storage (Global Tables)
AWS S3 for object storage
AWS Kinesis + AWS S3 for long term logging of applications to comply with DoD SRG and FedRAMP logging
Now, there are a lot of applications that may need more services. Things like Athena or QuickSight may be necessary, but we consider those (at least for the solutions we are building) to be ancillary services to the core applications. For instance, in these applications, if you can't get to QuickSight to visualize some data for an hour, it's not that big of a deal (at least for this solution). But if you can't log data from the field in real time, that is a big deal.
Custom CloudFormation Resource for looking up config data
This project, CloudFormation Lookup, enables you to build CloudFormation templates that pull configuration data from DynamoDB as a dictionary. Consider the scenario where you are using CodeBuild to deploy apps into a series of AWS accounts you control. Each of those accounts may have differing configuration data, depending on the intent of the account. For instance, perhaps you deploy an application across segregated tenants for customers? Each of those tenants may have different configurations, like DNS host names.
This project can be found here on GitHub
As of right now, CloudFormation has no means to pull that data on a per-account basis. To solve this problem, we have developed a custom CloudFormation resource that enables you to define a resource, as shown below, in your CloudFormation template.
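The actual implementation lives in the GitHub project linked above, and the template snippet itself is not reproduced here. As a rough sketch of how a Lambda-backed custom resource like this can work, the handler below reads a configuration item from DynamoDB and signals the result back to CloudFormation; the table name, key schema, and item layout are hypothetical.

```python
import json
import urllib.request

import boto3

dynamodb = boto3.resource("dynamodb")
CONFIG_TABLE = "account-config"  # hypothetical per-account configuration table


def handler(event, context):
    # Hypothetical custom-resource handler: look up config in DynamoDB and
    # return it to CloudFormation as resource attributes (via Fn::GetAtt).
    status, data = "SUCCESS", {}
    try:
        if event["RequestType"] in ("Create", "Update"):
            key = event["ResourceProperties"]["ConfigKey"]
            item = dynamodb.Table(CONFIG_TABLE).get_item(Key={"pk": key})
            # Assumes the item stores a 'config' map of string values.
            data = item.get("Item", {}).get("config", {})
        # A Delete has nothing to clean up for a read-only lookup.
    except Exception:
        status = "FAILED"

    # CloudFormation waits for this pre-signed URL to receive the result.
    body = json.dumps({
        "Status": status,
        "Reason": "See CloudWatch Logs",
        "PhysicalResourceId": event.get("PhysicalResourceId", context.log_stream_name),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,
    }).encode("utf-8")
    request = urllib.request.Request(
        event["ResponseURL"], data=body, method="PUT", headers={"Content-Type": ""}
    )
    urllib.request.urlopen(request)
```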