Friday, February 21, 2020

IAM Least Privilege Permissions

There are multiple parts to writing IAM least privilege policies. This document attempts to touch on the best ways to do each of these tasks. The assumption is that you’ve never written an IAM policy before, which is probably false, but starting there is useful for a newbie developer.

Know what you need

Only you as a dev will know what data your code needs access to. For example: At a minimum you’ll need to know that your new feature plans to read patient records, identify the non-encrypted ones, encrypt them, delete the old records and store the new encrypted records, while maintaining an audit trail. No tool can tell you this, right?

Also, you’ll need to know which AWS services can do all those things. This too is not something any tool can automatically tell you. You need to go Google around and read the AWS documentation to understand what services AWS offers to do the stuff above. So then you get to a point where you say:
  • My records are in DynamoDB. I need read access to DynamoDB.
  • I need to look at the metadata of each record to see if it's encrypted. So I need read access to all metadata.
  • Then I need to encrypt them. I need a key to do this. I need to be able to use an existing KMS key OR be able to create a new one to encrypt my data.
  • I need to delete the old records. I need write access to DynamoDB.
  • I need to store the new records. I don’t need any more access, as I already got write access in the previous step.
  • Oh wait. Not all tables - just Table A :)
  • I need to log all my actions somewhere. I need write access to CloudTrail. (A sample policy pulling all of this together is sketched right after this list.)
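
Put together, those needs map to a permissions policy roughly like the sketch below. This is a hand-written illustration, not something generated or verified: the table name, key ID and account ID are placeholders, and the exact action list you need may differ (check the service documentation). The audit-trail piece is left out since the actions for that depend on how you choose to log.

import json

# Rough least-privilege sketch for the scenario above (all ARNs are placeholders).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadWriteOnlyTableA",
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan",
                "dynamodb:PutItem", "dynamodb:DeleteItem"
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/TableA"
        },
        {
            "Sid": "UseExistingKmsKey",
            "Effect": "Allow",
            "Action": ["kms:Encrypt", "kms:GenerateDataKey"],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/your-key-id"
        }
    ]
}
print(json.dumps(policy, indent=2))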

How do I implement the above?

  • Now you know you’re probably going to have a Step Function or a Lambda function where that code will live, which means there’s an execution role that comes into play. It’s a good idea to use a new role for each function coz:
    • It’s cleaner and you know exactly what that function and only that function needs
    • It’s easier to change and not worry what else will break in some other function that shares that role
    • Security will whine less as you’ll probably follow least privilege when you assign permissions
  • Okay good. You create a new execution role and now need to assign permissions. You need to map all those read/write statements that made sense intuitively into an IAM policy, which is what actually controls what the role can access. (A boto3 sketch of this setup follows this list.)
  • You add a lot of permissions, see that it works, dial down a bit, see it still works, dial down some more and see it break and repeat this increasingly painful and irritating exercise till everything works. Of course then security complains that it's not least privilege, gives you a link to what’s expected and rejects the request. The link’s somewhat helpful but not a lot, and you end up repeating all that crap again till they’re happy.
  • You get better at this over time but at some point feel you’re wasting a lot of time tweaking policies when you should be writing code instead, and then wonder if there is a better way to write IAM policies.
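
As a concrete illustration of the "new role per function" idea, here is a minimal boto3 sketch (the names and the policy are made up for the example) that creates a dedicated execution role with a Lambda trust policy and attaches an inline least-privilege policy to it:

import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the Lambda service can assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

# Permissions policy: just what this one function needs (placeholder ARN).
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/TableA"
    }]
}

iam.create_role(
    RoleName="encrypt-records-function-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)
iam.put_role_policy(
    RoleName="encrypt-records-function-role",
    PolicyName="encrypt-records-least-privilege",
    PolicyDocument=json.dumps(permissions_policy)
)

Because the role is dedicated to this one function, changing or deleting its policy later can't break anything else.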

Some ideas to get better

  • Definitely use a policy generator. Don’t write a policy by hand.
    • AWS Console: You need to know exactly what you want. If you do, it works great.
    • AWS PolicyGen: An older take on the console generator with a simpler interface. Not sure it’s maintained.
    • PolicySentry: This looks like a fantastic tool that does a lot of the policy generation for you. It is worth spending the time learning all its detailed options.
      • If you know which resource you want to control access to, use a CRUD template
      • If you know which actions you need, use an Actions template
  • To clean up policies generated by PolicySentry, remember what you really need and refer to the AWS docs. You’ll need to do this as you almost certainly do not need everything PolicySentry gives you. Pay heed to the Dependent Actions column as well and don’t forget to grant access to those actions. Here is a sample table for CloudTrail.
  • Create templates for the services you use often and make them easy to reuse and reference across your engineering team.
  • Use a linter such as Parliament to check your policy once it’s generated for obvious typos as well as some security misconfigurations. Integrate it into your pipeline.
  • Use the AWS policy simulator to verify that your execution role has access to exactly the resources you expect - and nothing more. Think of this as a confirmation of your existing policies. The good thing is that it doesn’t make any changes to your actual stack. (A small simulator example is included at the end of the examples below.)
Examples for tools above:
Policy Sentry templates

CRUD:
policy_sentry create-template --name TestRole --output-file crud.yml --template-type crud


Edit the YML template that gets generated, tweaking it to remove whatever you don't want.

Policy Sentry query commands


policy_sentry query action-table --service cloudwatch --wildcard-only
policy_sentry query arn-table --service dynamodb
policy_sentry query condition-table --service lambda

Parliament sample bad policies

Blank resource field where a "*" or an ARN is needed:
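
A rough sketch, assuming parliament's Python API (analyze_policy_string); the statement below has an empty Resource, which the linter should flag:

from parliament import analyze_policy_string

# A statement with a blank Resource instead of "*" or an ARN.
bad_policy = """{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ""
    }]
}"""

for finding in analyze_policy_string(bad_policy).findings:
    print(finding)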

Mismatched condition where the condition is not valid for the chosen action:
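
A sketch of this case too (same assumption about the Python API): the s3:prefix condition key applies to list-style actions, not to s3:GetObject, so parliament should report the mismatch:

from parliament import analyze_policy_string

# s3:prefix is a condition key for ListBucket-style actions, not GetObject.
bad_policy = """{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my-bucket/*",
        "Condition": {"StringEquals": {"s3:prefix": "reports/"}}
    }]
}"""

for finding in analyze_policy_string(bad_policy).findings:
    print(finding)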


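Policy simulator

The simulator can also be driven from code. A rough boto3 sketch (the role ARN, actions and table ARN are placeholders) that checks what the execution role can and cannot do:

import boto3

iam = boto3.client("iam")

# Ask IAM to evaluate these actions against the role's current policies.
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::111122223333:role/encrypt-records-function-role",
    ActionNames=["dynamodb:DeleteItem", "dynamodb:CreateTable"],
    ResourceArns=["arn:aws:dynamodb:us-east-1:111122223333:table/TableA"]
)

for result in response["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])

Nothing in the account changes; you just get an allowed/denied verdict per action.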

Friday, May 24, 2019

AWS - Networking Services

VPC: This is the DMZ/VLAN/segmentation equivalent for the cloud. You can create a VPC, create subnets inside the VPC and then assign EC2 or RDS instances (or anything that needs an IP address) addresses inside individual subnets. You can then set ACLs on the VPC or individual subnets (in addition to security groups on the instances themselves) to control inbound and outbound communication. You can have private and public (internet-facing) subnets in a VPC. You can also create private VPC endpoints for public services such as KMS (cryptography), which ensure that traffic to KMS is sent exclusively over the AWS network instead of over the Internet. This is one of those services that you will almost certainly use if you are on the cloud, so do be familiar with it. :)
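
As an illustration of that last point, creating an interface endpoint for KMS is a single call with boto3 (the VPC, subnet and security group IDs below are placeholders, and the service name varies by region):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint so that traffic to KMS stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kms",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)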

CloudFront: It is common practice to use a CDN to cache static content in locations closest to the user (the edge of the network) so round trips to the web server and DB server can be avoided. Now, though, even dynamic content is served through edge locations, which fetch it from the origin servers and pass it on to the end user. AWS CloudFront claims to look at the requests coming in and make decisions on what dynamic content to serve to whom.

CloudFront is also integrated with web application firewall and DDoS protection services to protect applications against malicious attacks. It additionally integrates with Lambda (run functions based on specific events), handles cookies (possibly for authenticated requests) and works with ACM so that a specific certificate is shown to the end user. Here is a good article about how CDNs work, along with a nice diagram at the bottom.

Route53: This is AWS's DNS service. It allows users to register their domains, configure DNS records so that users can reach their applications, and check the health of web servers that are registered with Route 53.

API Gateway: This allows users to create HTTP, REST and WebSocket APIs for any functionality they want to implement. You can integrate the API with an HTTP backend (query string parameters), call a Lambda function when an endpoint is hit, integrate it with other AWS services, and then return a response to the end user.
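
As a rough sketch of the Lambda case - assuming you already have a function and its ARN - the "quick create" path for an HTTP API proxies every request to that function in one call (the ARN is a placeholder, and you'd still need to grant API Gateway permission to invoke the function before requests succeed):

import boto3

apigw = boto3.client("apigatewayv2")

# Quick-create an HTTP API that sends every request to one Lambda function.
api = apigw.create_api(
    Name="patients-api",
    ProtocolType="HTTP",
    Target="arn:aws:lambda:us-east-1:111122223333:function:patients-handler",
)
print(api["ApiEndpoint"])   # the URL your users would call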

Direct Connect: This establishes a physical link between the end-user network and an Amazon location that supports Direct Connect. For this purpose, fiber-optic cables that support either 1 Gbps or 10 Gbps must be used and the customer network devices must meet certain requirements. The main purpose of this service is to speed up data transfer between an on-premise network and an AWS service by bypassing a lot of the public Internet. The AWS side can be a public service like S3 or something privately hosted inside a customer VPC. The other key factor is that this is apparently much cheaper than accessing S3 or VPCs over the Internet. Here's one such implementation.

App Mesh: Microservice architectures are quite common these days. The greater the number of microservices, though, the greater the management overhead from a monitoring perspective. Once there are applications already running somewhere (EC2 for example), App Mesh, which is built on Envoy, can be configured so that traffic to every single microservice of the application first passes through App Mesh. Rules configured on App Mesh then determine the next steps to be taken. This is better than installing software on the OS of every microservice host and having them communicate to diagnose problems.

Cloud Map: This allows you to create user-friendly names for all your application resources and store that map. This can all be automated, so as soon as a new container is created or a new instance is spawned due to more traffic, its IP address is registered in Cloud Map. When one microservice needs to talk to another, it looks it up in Cloud Map. This means you no longer need to maintain a configuration file with the locations of your assets - you can just look them up in Cloud Map.

Global Accelerator: Once configured, Global Accelerator provides the user with static IP addresses mapped to several backend servers. Traffic that hits those addresses is routed over the AWS network to hosts that are close to the user's location and under less load, so that the overall availability and performance of the applications improves. The aim is that traffic doesn't hit nodes that are not performing well.

Thursday, May 23, 2019

AWS - Migration Services

Application Discovery Service: This one's for finding out what on-premise servers you have and building an inventory of them to display in the console. For VMware vCenter hosts there's an AWS VM you have to install that'll do the discovery. Alternatively you can install an agent on every on-premise host you want tracked. The last way is to fill out a template with a lot of data and import it into the console.

Database Migration Service: This is pretty self-explanatory in that it allows you to migrate from one AWS data store to another (support for Aurora, MySQL and plenty of others) or to/from an on-premise instance. You can't do on-premise to on-premise :). The source database can apparently remain live throughout the migration, which AWS claims is a great advantage (and it probably is - idk).

Server Migration Service: Just like the previous service helps migrate on-premise databases, this one helps migrate on-premise servers in VMware, Hyper-V and, interestingly, Azure to AWS. A VM is downloaded and deployed in VMware vSphere. This then (when you say so) starts collecting the servers that you've deployed in vSphere and uploads them as Amazon Machine Images (AMIs) to the cloud. These images can then be tested by creating new EC2 instances from them to see if they're functional before deploying them to production.

AWS Transfer for SFTP: This is quite simply a managed SFTP server service from AWS. The aim is to tempt people away from managing their own SFTP servers and toward migrating data to the cloud. It supports password and public key auth, and stores data in S3 buckets. All normal SSH/SFTP clients should work out of the box. Authentication can be managed either via IAM or via your own custom authentication mechanism.

AWS Snowball: This is an appliance that you can ship to your data center, copy all your data to (up to 80 TB for Snowball, 100 TB for Snowball Edge) over your local network and then ship back to AWS. AWS takes the box and imports all the data into S3. The key win here is that you don't need to buy lots of hardware to do the transfer but can use AWS's own appliance instead. It also saves a ton of bandwidth because you're doing local transfers instead of going over the internet.

DataSync: Unlike Snowball, DataSync transfers data between customer NFS servers and S3 or EFS over the network at high speeds using a custom AWS DataSync protocol (the claim is up to 10 Gbps). Alternatively, you can go from NFS in the cloud to S3, also in the cloud. A DataSync agent is installed as a vSphere OVA in the case of an on-premise server, after which you add the various locations and configure them as sources or destinations. Finally, a task starts and data is transferred between the two locations. Here's a nice blog demonstrating this.

AWS Migration Hub: This is sort of a one-stop shop for kicking off discovery or data migration using the various other services that AWS has. Some of these were already mentioned above (the Server and Database migration services). In addition there are some integrated migration tools (ATADATA ATAmotion, CloudEndure Live Migration etc - none of which I've heard of :)) that one can use when performing this migration. There is no additional cost to use this service - you pay for using the individual tools themselves.

Tuesday, May 21, 2019

AWS - Storage Services

S3: This is arguably (along with EC2) the most popular service that AWS offers. In short it allows users to store their files - behaving like an online file store. It has other uses too, such as hosting a website with static content. Services very commonly store audit logs here, and in short S3 is integrated with a large number of AWS services. S3 is a global service and bucket names are unique - two users cannot create buckets with the same name. Files are stored inside a bucket as objects, each identified by a key. For such a popular service it has relatively few options (which are sufficient) via the AWS CLI. If you're starting to learn about AWS, this is the place to start.
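
If this is where you're starting, the boto3 basics look like the sketch below (the bucket name is a placeholder - bucket names are global, so pick your own; outside us-east-1 you also need to pass a CreateBucketConfiguration):

import boto3

s3 = boto3.client("s3")
bucket = "my-unique-bucket-name-12345"   # placeholder - must be globally unique

s3.create_bucket(Bucket=bucket)
s3.put_object(Bucket=bucket, Key="logs/audit.txt", Body=b"hello s3")

for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"])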

EFS: This is an NFS file system that grows with the amount of data you store on it. You can use an NFS client on an EC2 Linux system to remotely mount it and then read/write from/to the file system. It also has this interesting concept called lifecycle management, which moves infrequently used files to a different class of EFS storage that costs less.

The GCP equivalent for this is FileStore.

FSx: This too, in short, is a file system that can be accessed remotely, but it has been built with Windows systems in mind. Users who have Windows applications that need access to a lot of data over the network via SMB mapped network drives are the targets. Linux systems can also access these mapped drives using a package called cifs-utils. It additionally supports Lustre, a filesystem aimed at applications that require a lot of computation.

S3 Glacier: If you have a large number of files that you do not want to delete (like old pictures) but do not use often, S3 Glacier is the thing to use. The unit of storage for Glacier is a vault, which is roughly equivalent to a bucket in S3. Only creation and deletion of vaults happens through the console; everything else happens via the CLI or SDK. It also claims to be extremely low cost, which I'm not saying anything about :)
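
In that spirit, a small boto3 sketch of uploading an archive to a vault that already exists (the vault name and file are placeholders; hold on to the returned archive ID, since you need it to retrieve or delete the archive later):

import boto3

glacier = boto3.client("glacier")

# Upload a local archive to an existing vault (placeholder names).
with open("old_photos.zip", "rb") as archive:
    response = glacier.upload_archive(
        vaultName="family-photos",
        archiveDescription="2010 photo backup",
        body=archive,
    )
print(response["archiveId"])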


Storage Gateway: If there is an existing data center where you already have a large number of applications that talk to databases, scaling this can become hard quickly if you have a lot of traffic. The AWS Storage Gateway is a virtual machine appliance (ESXi), an on-premise 1U hardware appliance (buy it on Amazon) or even an EC2 appliance. Once it's activated, the appliance picks up data from your data center stores and puts it onto S3, Glacier or EBS. You can then point your application to the new stores via an NFS client and it should work seamlessly. Here is a blog that walks you through a sample process. Additionally, it allows backup applications to hit the gateway directly (configured as a tape gateway) and back up directly to AWS S3 or Glacier.


AWS Backup: This service allows you to back up data from EC2, RDS and a few other services to S3 and then move that data to Glacier (I think) after a certain time. You can configure backup plans to decide what gets backed up (by tagging resources), when, whether it's encrypted or not, and when the backup is deleted. As of now it only supports a few services, but it's reasonable to assume that once it becomes more popular more services will be added.

Thursday, May 16, 2019

AWS - Compute - Container Services

Here is an image from the Docker website that describes how containers work.



Teams are increasingly building their workflows around Docker containers. Amazon has a few services that make this easier. This post briefly discusses each of these services.

ECR: This is a registry for Docker images that you build on your machine and then upload to AWS. So for example: you can build an Ubuntu image with a LAMP stack and any other custom packages and upload it to ECR. When other AWS services need that image for some purpose, it is easily available.

ECS: Once the Docker images you built earlier are uploaded to ECR, one can use these images on EC2 instances to perform whatever computing tasks were specific to that container. This is where ECS comes in. Users direct ECS to run specific containers; ECS then picks them up, identifies EC2 instances they can run on (creating a cluster of these) and runs them there.

Once the cluster is ready, a task definition needs to be created. This defines how the containers are run (what port, which image, how much memory, how many CPUs and so on). When the task definition is actually used, a task is created and run on the cluster of EC2 instances (each is called an ECS container instance) that were originally created.
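
A minimal boto3 sketch of registering such a task definition (the family name and ECR image URI are placeholders):

import boto3

ecs = boto3.client("ecs")

# Which image to run, how much memory/CPU it gets, and which port it exposes.
ecs.register_task_definition(
    family="lamp-task",
    containerDefinitions=[{
        "name": "web",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/lamp:latest",
        "memory": 512,
        "cpu": 256,
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
    }],
)

# Running it on an existing cluster is then
# ecs.run_task(cluster="my-cluster", taskDefinition="lamp-task").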

An ECS agent is additionally installed on each ECS container instance; it communicates with the ECS service itself and responds to start/stop requests made by ECS.

The equivalent product on GCP is Google Kubernetes Engine (GKE).

EKS: Kubernetes (which came out of Google) has an architecture where there is a Kubernetes master node (the controller) and a number of worker nodes (roughly equivalent to ECS container instances running agents) that send information about the state of each job to the master. The master then (similar to ECS) schedules and controls the workloads running across those nodes. Here is a diagram that illustrates this:



EKS on Amazon allows the Kubernetes master to be run inside the AWS environment and lets it communicate with deployments elsewhere, while simultaneously interacting with ELB, IAM and other AWS services.

Batch: If you have a job that you want to schedule and run periodically, while automatically scaling resources up or down as jobs complete or need more memory/compute - AWS Batch is a good idea. AWS Batch internally uses ECS, and hence Docker containers on EC2/Spot instances, to run the jobs. Here is a nice guide that goes into an example of using Batch in a bit more detail.

Tuesday, May 14, 2019

AWS - Compute Services

This blog summarizes some of the AWS Compute services. I deliberately do not cover the ones that deal with containers, as I plan to blog separately about those. I'm looking at Google Cloud side by side from now on so I'll keep updating these posts just to mention if there is an equivalent. When I get to Azure, I'll do the same there as well :)

EC2: EC2 is one of the most popular services that AWS has. It basically allows you to spin up virtual machines with a variety of operating systems (Linux, Windows and possibly others) and gives you a root account on them. You can then SSH in using key authentication and manage the system. What you use it for is completely up to you: host a website, crack passwords as a pen-tester, test some software or really anything else.

The GCP equivalent for EC2 is Compute Engine.

Lightsail: Lightsail is very similar to EC2 except it comes with pre-installed software such as WordPress or a LAMP stack, and you pay a small fixed price for the server. The plus here is that it's easier for non-technical users to use Lightsail compared to EC2, where you have to do everything yourself. In other words, it is Amazon's VPS solution.

Lambda: This is AWS's Function-as-a-Service solution. In other words, you write code and upload it to Lambda. You don't have to worry about where you'll host your code or how you'll handle incoming requests. You can configure triggers in other services and have Lambda act when the trigger fires. For example: you can create a bunch of REST APIs and have the back-end requests handled by a Lambda function, upload files to S3 and have something happen each time a specific file is uploaded, or do more detailed log analysis each time an event is logged to CloudWatch. Lambda is integrated with a large number of AWS services, so it is well worth learning and using well.
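
For the S3 example, the function itself can be tiny. A sketch of a Python handler that fires on object uploads (the event shape is the standard S3 notification format):

def handler(event, context):
    # One record per uploaded object in a standard S3 event notification.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object s3://{bucket}/{key} - process it here")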

The GCP equivalent for Lambda is Cloud Functions.

Elastic Beanstalk: If you have some code that you've built locally and want to quickly deploy it without worrying about the underlying infrastructure you'd use to do it and don't want to spend a lot of time tweaking it - Beanstalk is the way to go. You can for example choose Python as a runtime environment, upload your Python code and let AWS then take over. AWS will create roles, security groups and EC2 instances that are needed (among anything else) and deploy your application so it is then easily accessible. If you need additional components such as databases or want to modify the existing configuration, these can be added later to the environment.

The GCP equivalent for Elastic Beanstalk is App Engine.

Serverless Apps Repository: This is a large repository of applications that have been created by users and uploaded for use by the community. One can grab these applications and deploy them in one's own AWS account. The requisite resources are then created by deploying a SAM template. The applications can be used as-is or modified/code-reviewed before actually using them. If you change your mind, you can delete the CloudFormation stack - this will delete all the AWS resources that were created during deployment.

Tuesday, September 18, 2018

AWS - Developer Tools

This post summarizes the AWS services that help you write code and reliably build, test and deploy it faster than you could manually. The overall concept of doing all this automatically is usually summarized as Continuous Integration / Continuous Deployment. Here is a simple post that nicely explains these concepts.

If you don't want to read any more, the tl;dr is this:

* Write code using AWS Cloud 9 
* Debug code using AWS XRay
* Store code using AWS Code Commit
* Build and test code using AWS Code Build
* Deploy code using AWS Code Deploy
* Watch task progression at runtime from a single interface using AWS Code Pipeline
* Use an integrated dashboard for all your tools, including issue tracking, using AWS Code Star.

If you're not familiar with Git, I'd strongly recommend reading a little about it before proceeding and playing with all these shiny new AWS tools. A great source is this chapter from the ProGit book. Once that's done, come back here. It's fine to read through this post as well, even without Git knowledge - it's just easier with that background knowledge.

Cloud 9 IDE

Once you have an idea in mind and want to write software to actualize it, you need a place to write it. A simple text editor works just fine, but as your programs get more complex an IDE is really helpful. A couple you might be familiar with are Eclipse and IntelliJ. However, since this post is about AWS, I must mention the Cloud9 IDE. It is a browser-based IDE that gives you a familiar development environment. I haven't played with it too much, but it's good to know there is a web-based option now.


XRay

This looks like a code profiler to me. I did not use it, so I do not have much to say about it. But I'd think the way to use it would be to write your code and then use X-Ray to figure out which calls are really slow and see if you can optimize your code further. All the rest I did try out and can confirm they are all very cool tools. So read on.

Code Commit

Once you finish writing all your code, you need a place to store it. This is where version control systems come in. Git is what everyone uses these days. The AWS equivalent (a hosted Git service) is CodeCommit. It's so similar that you do not need to learn any new commands. Once you've set your repository up, all the old Git commands work perfectly well. You can add files, commit them and push them to your Code Commit repository.

All you need to do is install Git on your machine, create a key pair and configure your IAM user to use this to authenticate to Code Commit. Clicking the "Connect" button inside the interface gives you instructions per platform if you get stuck.

The coolest thing here is that you can create triggers that'll run as soon as you push code to your repository. Maybe you want to build, test and deploy your code to your test environment every time a commit is pushed. You can do that here by setting up a Lambda function that will be called as soon as a commit is made. Which nicely flows into Code Build...

Code Build

Once you have a workflow going where you can write code in an IDE and push commits to a CodeCommit repository, the next step is to make sure that your code builds properly. This is where CodeBuild comes in. All you do is point CodeBuild to the Code Commit repository where you stored your code and tell it where you want to dump any output artifacts of the program (usually S3).

It supports branches too, so you can tell it which branch to pull code from in Code Commit. You select the runtime environment you need to build your code (Java/Python/whatever), configure a bunch of other options and then build your project. The result is the equivalent of what you get after you hit Build in whatever IDE you use.

The big advantage here is that you spend very little time configuring your software development environment. Also, like I touched upon in the Code Commit section, you could have that Lambda function you wrote as a CodeCommit trigger automatically run Code Build against your code each time a commit is made.

Code Deploy

Once the code is compiled, tests are run and your entire project is built, the last step is usually to deploy it to a web server so your users can then access it. That's where Code Deploy comes in. You can configure it so it uses the build output (with a deployable project) and puts it onto every web server you want to have it on.

You have options of using a load balancer as well, if you want traffic to be evenly distributed. Once deployment is complete, the output should appear on all the servers in that group.

Again, remember you can further extend your Lambda function to build and deploy now as soon as a commit hits Code Commit. Pretty cool :)

Code Pipeline

Code Pipeline isn't something new but it certainly makes life much easier. It helps, though, if you understand the 3 previous services I talked briefly about earlier - since the screens in Code Pipeline deal with these 3 services and ask you for input. So I'd recommend understanding Code Commit, Code Build and Code Deploy really well before using Code Pipeline.

Pipeline is basically a wizard for the other 3 services. It'll prompt you to tell it where your code is (Code Commit), what to build it with (Code Build) and what to deploy it with (Code Deploy). If you already have roles and resources set up from when you played with the other 3 services, this should feel very intuitive. A couple of great tutorials are here and here. Also, a nice writeup on how someone automated the whole process is here.

The coolest thing about Pipeline is that you can see everything, stage by stage and where each stage is once you create it. For example: Once your code is pushed to Code Commit (as usual) and you have the Pipeline dashboard open, you can actually see each stage succeeding or failing, after which you can troubleshoot accordingly.

CodeStar

Managers should love this one. I used it just a bit but it has this fantastic looking dashboard that gives you a unified view of every single service that you are using. So in short, it has links to C9, CC, CB, CD and CP. So if you didn't cheat and did everything step by step :) you should see all your commits, builds and pipelines by clicking on the buttons on the fancy dashboard that is CodeStar.

The additional feature here is integration with Jira and Github where you can see all your issues as well.

So in short CodeStar is a one stop shop if you've bought into the AWS development environment and want to be tied into it for years to come, while parting with your money bit by bit :)

Saturday, September 8, 2018

Confused Deputy

The confused deputy problem is one of the best-named issues. Not for any deep philosophical reason, but just because it is truly confusing :). To me anyway, but then, most things are confusing to me until I spend way-above-normal amounts of time re-reading and re-writing them in my own words. The link above (AWS) is an excellent resource, and it's where I learnt most of this, so go there first - and if you find that confusing, come over here and I'll try to explain it in my own words. As always, there's nothing wonderfully new here - just my attempt to make sure I remember, have fun writing and hopefully help anyone else along the way.

Let us just keep it simple here. The 3 people in question are Alice, Bob and Eve. Alice has software called MyBackup hosted on the cloud that lets you back up your images that are stored in the service called MyImages. Each time you use Alice's software you have to pay her 100$. Sure that's ridiculous, but stick with me. For some reason Bob thinks this is a great idea and pings Alice to use this service.

Alice creates an account and gives him a unique string called BobAliceBackup1987. She says that all Bob needs to do is log in when he wants to back up, paste the string into a text box on the website and click "Create Backup". This will automatically (details are not important here) let Alice into Bob's account, copy all his images to her secret storage box that is very hard to hack, and send Bob an email when it is all done. Don't think about how lame this system is at this point :).

Eve now hears that Bob is using this service and likes it a lot. She subscribes to the service too and gets her key EveAliceBackup1991. Everything is good and everyone is happy.

One day Bob and Eve have a fight and stop talking to each other. Eve feels that Bob is wrong and wants to teach him a lesson. Frustrated, she logs into MyBackup to look at her backups. (WTF who even does this??). While typing in her "secret string" she suddenly wonders if she can make Bob spend his Britney Spears concert money on Backups instead. Can she predict Bob's key? Will Alice find out? Only one way to find out...

She guesses Bob's key (what a shock :/) and sends that key to Alice. Alice hasn't spent much time developing any kind of authorization models, so all she sees is a string come in and think - well there's another 100$ for me :). She just assumes (pay attention here) that whoever sends the string is the owner of the string and actually wants to back their images up. And she backs Bob's images up, 20 times in a row without thinking that something's wrong. Bob gets back at night (no there are no Instant Mobile alerts here for payment debits) and finds out he has backed his stupid car_bumper_dented images up 20 times. Alice is no help, she has proof he sent a string...and sure enough when Bob logs in and checks backup history he sees 20 requests too. Meanwhile Eve feels vindicated. Eventually she might get caught, eventually Bob might get his money back and eventually Alice will learn to write better software but that's beside the point. And yes, it's a made up example but one that hopefully helps you understand the point of the attack better.

In a nutshell, confused deputy occurs when a service with multiple users makes a decision based on predictable user input, without asking for further authorization. In the AWS world, the predictable input is a role ARN that a service can assume in your account to do something in it. While it looks long and random, it is not considered secret, and if someone guesses it, they can make a service do things in your account - without your permission. Does that make sense? I hope so. But if not...

... go and read that excellent AWS blog again and see if it makes more sense.
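
For completeness, the usual AWS-world mitigation is an external ID: the trust policy on the role you hand to a third-party service only allows sts:AssumeRole when that service also passes the ID it assigned to you, so someone who merely guesses your role ARN and feeds it to the service gets denied. A rough sketch of such a trust policy (the account ID and external ID are made-up placeholders):

import json

# Trust policy with an external ID condition (all values are placeholders).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},   # the service's account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "unique-id-the-service-gave-you"}}
    }]
}
print(json.dumps(trust_policy, indent=2))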

Wednesday, August 22, 2018

Serverless Development

Just another post to solidify concepts in my mind. The Serverless word is often used these days in conjunction with development. All it really means is that you do not have to spend time configuring any servers. No Apache, Tomcat, MySQL. No configuration of any sort. You can just spend your time writing code (Lambda functions). Mostly anyway :)

The most common use of this philosophy is in conjunction with AWS. As in, you create a configuration file called serverless.yml (the Serverless Framework), which gets translated into a CloudFormation template. This basically means you create a config file offline with references to all the AWS resources that you think you will need (you can always add more later) and the framework then uploads that template to CloudFormation.

CloudFormation then looks through the entire template and creates all those resources, policies, users, records, functions, plugins - in short, whatever you mentioned there. You can then launch a client, hit the deployed URL and invoke all the methods you wrote in your Lambda function.

There are some clear instructions on how to deploy a Hello World as well as how one can write an entire Flask application with DynamoDb state locally and then push it all online to AWS with a simple sls deploy command.

All you need to make sure is that you have serverless installed, your AWS credentials configured and access to the console (to verify things easily), and everything will go very smoothly. Of course there are going to be costs to all this - so make sure you do that research before getting seduced by this awesome technology :)