
Super Fast Dynamic Websites with CloudFront, ReactJS, and NodeJS - Part 1


CloudFront should be an essential component of any web-based application deployment. Not only does it instantly provide super low-latency performance, it also dramatically reduces server costs while maximising server uptime.

Creating low-latency static websites with CloudFront is a relatively simple process: you upload your site to S3 and create a CloudFront distribution for it. This is great for HTML5 websites and static blog generators such as Jekyll. But what about dynamic sites that need real-time information presented to the end user? A different strategy is clearly required. Much has been published about different methods of caching dynamic websites, but I will present the most common-sense and reliable technique to achieve this end.
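For reference, the static case really can be a couple of AWS CLI commands; a minimal sketch (the bucket name is hypothetical):

aws s3 sync ./public s3://my-static-site --acl public-read
aws cloudfront create-distribution \
    --origin-domain-name my-static-site.s3.amazonaws.com \
    --default-root-object index.html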

Server-Side vs Browser-Side Rendering

If you read any book on NodeJS you will no doubt find plenty of examples of rendering Jade templates with Express on the server. If you are still doing this, you are wasting valuable server resources. A far better option is to render on the browser side. There are a number of frameworks designed specifically for this, the most popular being Facebook's ReactJS and Google's AngularJS (see the State of JS report). I personally use ReactJS and the examples will be in ReactJS, but either is fine.

Creating your site with ReactJS or AngularJS and uploading it to your NodeJS public directory shifts the rendering of your site from your server to the client's browser. Users of your app will no longer be waiting for server-rendered pages; pages will appear at the click of a button.
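As a minimal sketch of what browser-side rendering looks like (the component and element names are placeholders, using the createClass style of the time):

// app.js, served as a static file - rendering happens entirely in the browser
var Hello = React.createClass({
  render: function() {
    return React.createElement('h1', null, 'Hello ' + this.props.name)
  }
})

ReactDOM.render(
  React.createElement(Hello, { name: 'world' }),
  document.getElementById('app')
)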

You can now create a CloudFront distribution for your ReactJS or AngularJS site.

Although pages may now be rendered instantly in the browser, any dynamic data those pages request will be cached by CloudFront. We almost certainly do not want our dynamic data cached, so we still need a solution for delivering this data to the browser.


Handling Dynamic Data


Although many elaborate techniques have been published for handling dynamic data with CloudFront, the best approach is to deliver this data without any CloudFront caching at all.

Not all HTTP methods are cached by CloudFront: only responses to GET and HEAD requests are (although you can also configure CloudFront to cache responses to OPTIONS requests). If we use a different HTTP method, such as POST, PUT or DELETE, the request will not be cached by CloudFront; CloudFront will simply proxy these requests back to our server.

Our EC2 NodeJS server can now be used to serve dynamic data, by creating an API for our application that responds to POST requests from the client browser.
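As a minimal sketch (the /api/quotes endpoint and its payload are hypothetical), the server side could look like this:

var express = require('express')
var app = express()

// The ReactJS site is served as static files and cached by CloudFront
app.use(express.static('public'))

// Dynamic data is served in response to POST, which CloudFront never
// caches - it simply proxies the request through to this server
app.post('/api/quotes', function(request, response) {
  response.json({ price: 42.5, time: Date.now() })
})

app.listen(8080)

On the browser side, the app simply requests its dynamic data with POST (via XMLHttpRequest or fetch with method: 'POST') instead of GET, so every data request passes through CloudFront to the origin uncached.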




Some of you might be wondering why I haven't used serverless technology such as AWS Lambda or API Gateway. Rest assured I will be posting another series using these, but I consider EC2 the preferred technology for most applications. First of all, costs are rarely mentioned in the serverless discussion; if you have an application with significant traffic, the conventional EC2/ELB architecture will be the most cost effective. Secondly, many modern web applications utilise websocket connections. These are possible with EC2 directly, and also behind an ELB when utilising proxy protocol, but not with serverless technology, as its connections are short lived.

In the next post in this series we will set up our NodeJS server on EC2, create a CloudFront distribution and create our API for handling dynamic data.

Be sure to subscribe to the blog so that you can get the latest updates.

For more AWS training and tutorials check out backspace.academy

Attention all exam preppers! AWS Answers



AWS has just released a new AWS Answers page that is essential reading for those preparing for the certification exams.
It provides a great overview of AWS architecture considerations.

YAML and CloudFormation. Yippee!!!

YAML: YAML Ain't Markup Language


I spend a heck of a lot of time coding and, like many devops guys, love CoffeeScript, Jade, Stylus and YAML. No chasing missing semicolons, commas and curly braces; I just write clean code the way it should be, and at least twice as fast.

JSON, like plain JavaScript, becomes a lot cleaner, quicker and easier to read when you remove all those curly braces, commas, etc. YAML does just that!

AWS just announced support for YAML in CloudFormation templates. I would thoroughly recommend you check it out and start using YAML. It will make a big difference to your productivity, and your templates will be much easier to read and understand.

YAML, like CoffeeScript, Jade and Stylus, uses indentation to eliminate the need for braces and commas. While you're learning YAML, you can use a JSON to YAML converter (e.g. http://www.json2yaml.com) to convert your existing JSON to YAML.

(Very) Basics of YAML

Collections use indentation, eliminating the need for braces and commas with objects:

JSON
"WebsiteConfiguration": {
  "IndexDocument": "index.html",
  "ErrorDocument": "error.html"
}

YAML
WebsiteConfiguration:
  IndexDocument: index.html
  ErrorDocument: error.html

Sequences use dashes, eliminating the need for square brackets and commas with arrays:

JSON
[
  "S3Bucket",
  "DomainName"
]

YAML
  - S3Bucket
  - DomainName

Full Example

Here is a full example I created for S3. I'll let you be the judge of which one is better!

JSON:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "AWS CloudFormation Sample Template",
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "AccessControl": "PublicRead",
        "WebsiteConfiguration": {
          "IndexDocument": "index.html",
          "ErrorDocument": "error.html"
        }
      },
      "DeletionPolicy": "Retain"
    }
  },
  "Outputs": {
    "WebsiteURL": {
      "Value": {
        "Fn::GetAtt": [
          "S3Bucket",
          "WebsiteURL"
        ]
      },
      "Description": "URL for website hosted on S3"
    },
    "S3BucketSecureURL": {
      "Value": {
        "Fn::Join": [
          "",
          [
            "https://",
            {
              "Fn::GetAtt": [
                "S3Bucket",
                "DomainName"
              ]
            }
          ]
        ]
      },
      "Description": "Name of S3 bucket to hold website content"
    }
  }
}

YAML:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation Sample Template
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html
    DeletionPolicy: Retain
Outputs:
  WebsiteURL:
    Value:
      Fn::GetAtt:
      - S3Bucket
      - WebsiteURL
    Description: URL for website hosted on S3
  S3BucketSecureURL:
    Value:
      Fn::Join:
      - ''
      - - https://
        - Fn::GetAtt:
          - S3Bucket
          - DomainName
    Description: Name of S3 bucket to hold website content



Shared Responsibility 2 - Using Dynamic CSS Selectors to stop the bots.


In my last post I talked about techniques to stop malicious web automation services at the source, before they reach AWS infrastructure. Now we will get our hands dirty with some code to put this into action. Don't worry if you are not an experienced coder; you should still be able to follow along.

How do Bot scripts work?

A rendered web page contains a Document Object Model (DOM). The DOM defines all the elements on the page, such as forms and input fields. Bots mimic a real user entering information in fields, clicking on buttons and so on. To do this, the bot needs to identify the relevant elements in the DOM, and DOM elements are identified using CSS selectors. A bot script consists of a series of steps that specify CSS selectors and the action to perform on each.
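For illustration, the steps of a bot script might look like this, sketched with the selenium-webdriver NodeJS package (the URL and selectors are hypothetical):

// A bot script is just a sequence of CSS selectors and actions
var webdriver = require('selenium-webdriver')
var By = webdriver.By

var driver = new webdriver.Builder().forBrowser('chrome').build()
driver.get('http://example.com/login')
driver.findElement(By.css('#username')).sendKeys('target-user')
driver.findElement(By.css('#password')).sendKeys('password-guess')
driver.findElement(By.css('button[type=submit]')).click()
driver.quit()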

The DOM structure and elements of a page can be quickly identified using a browser. Pressing F12 in your browser will launch developer tools with this information:


To see specific details of a DOM element simply right click on the element on the page and select 'inspect':


This will open up the developer tools with the element identified. You can get the CSS selector for the element easily by again right clicking on the element in the developer tools:



Note that this will be only one representation of the element as a CSS selector (generally the shortest one). There are a number of ways an element can be defined as a CSS selector, including the following (example selectors are sketched after the list):

  • id name
  • input name
  • class names
  • DOM traversal e.g. defining its chain of parent elements in the DOM
  • Text inside the element, using jQuery ':contains'.
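For example, all of the following could identify the same username field (a sketch; the markup and class names are hypothetical):

// Possible selectors for <input id="username" name="username" class="form-control">
$('#username')                           // by id name
$('input[name=username]')                // by input name
$('.form-control:first')                 // by class name
$('form > div:first-child > input')      // by DOM traversal
$("label:contains('Username') + input")  // by text, with jQuery :contains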

Dynamic CSS Selectors

To make bot scripts difficult to develop, you can use dynamic CSS selectors. Instead of producing the same CSS selectors each time your page is rendered, you can change them randomly on each render.

When using NodeJS and Express this is quite straightforward, as you are already rendering pages on the server. We simply introduce some code to mix things up a bit.

Let's Start


First of all, set up an EC2 instance with NodeJS and Express configured to render pages. If you are unsure how, you can view the video below:

https://vimeo.com/145017165

To save you typing, the code is available as a Gist (Blogger also tends to screw up code when it is published).

Now let's change index.js to create a simple login form.

Point your browser to the public IP address of your instance to check everything is OK, e.g. xxx.xxx.xxx.xxx:8080

Now change the index.js file to include a dynamicCSS function:

app.get('/', function(request, response) {
  response.send(dynamicCSS())
})

// Build a simple login form as an HTML string
function dynamicCSS() {
  var x = ''
  x += '<form method="post" action="/login">'
  x += '<h3>Please Login</h3>'
  x += '<input type="text" id="username" name="username" placeholder="username">'
  x += '<input type="password" id="password" name="password" placeholder="password">'
  x += '<button type="submit">Login</button>'
  x += '</form>'
  return x
}


Now run npm start at the command line of your EC2 instance and refresh the browser page. You will now see our very simple login form:


The problem with this form is that it is really easy to identify the DOM elements required to log in: the id, name and placeholder attributes all refer to username or password.

Now let's change our code and introduce dynamically created CSS selectors.


var loginElements = {
  username: '',
  password: ''
}

// Generate fresh random strings for the id and name attributes on every render
function dynamicCSS() {
  var username = randomString()
  var password = randomString()
  loginElements.username = username
  loginElements.password = password
  var x = ''
  x += '<form method="post" action="/login">'
  x += '<h3>Please Login</h3>'
  x += '<input type="text" id="' + username + '" name="' + username + '" placeholder="username">'
  x += '<input type="password" id="' + password + '" name="' + password + '" placeholder="password">'
  x += '<button type="submit">Login</button>'
  x += '</form>'
  return x
}

// Return a random 8-character string
function randomString() {
  var chars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXTZabcdefghiklmnopqrstuvwxyz'.split('')
  chars.sort(function() { return 0.5 - Math.random() })
  return chars.splice(0, 8).toString().replace(/,/g, '')
}



This now generates a random string for the id and name attributes of the input elements, making it impossible to use them in a reliable bot script. If you run npm start again and view the elements in developer tools, you can see the random strings.

We now need to look at the other ways our elements can be identified as CSS selectors. As you can see, the text "username" and "password" is still used in the placeholders and the input type attribute. Also, the DOM structure itself doesn't change dynamically, making it possible to reference the elements by traversing the DOM.

We will address both problems by creating random decoy input elements with the same parameters. The CSS position property will allow us to stack them on top of each other so that the decoy elements are not visible on the page:

app.get('/', function(request, response) {
  response.send(dynamicCSS())
})

var loginElements = {
  username: '',
  password: ''
}

// Render a random number of identical-looking input pairs stacked on top
// of each other; only the last (topmost) pair is the real one
function dynamicCSS() {
  var username, password
  var x = ''
  x += '<form method="post" action="/login">'
  x += '<h3>Please Login</h3>'
  x += '<div style="position:relative;height:80px">'
  var y = Math.floor((Math.random() * 5)) + 2
  for (var a = 0; a < y; a++) {
    username = randomString()
    password = randomString()
    x += '<input type="text" id="' + username + '" name="' + username + '" placeholder="username" style="position:absolute;top:0;left:0">'
    x += '<input type="password" id="' + password + '" name="' + password + '" placeholder="password" style="position:absolute;top:40px;left:0">'
    loginElements.username = username
    loginElements.password = password
  }
  x += '</div>'
  x += '<button type="submit">Login</button>'
  x += '</form>'
  return x
}

// Return a random 8-character string
function randomString() {
  var chars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXTZabcdefghiklmnopqrstuvwxyz'.split('')
  chars.sort(function() { return 0.5 - Math.random() })
  return chars.splice(0, 8).toString().replace(/,/g, '')
}


Now when you view the DOM in your browser developer tools, you can see the decoy input elements created underneath the real input elements. If you refresh your browser you will see a different number of elements created each time (between one and five decoys).



The bot creator can no longer use the username and password placeholders or input types to identify the elements. Nor can they traverse the DOM structure, as this changes too. As pointed out by a reader of this post (thanks Vadim!), you should also put some random inputs after the real ones to handle jQuery ':last'. A good place would be underneath your logo:
var y = Math.floor((Math.random() * 5)) + 2
for (var a = 0; a < y; a++) {
  username = randomString()
  password = randomString()
  x += '<input type="text" id="' + username + '" name="' + username + '" placeholder="username">'
  x += '<input type="password" id="' + password + '" name="' + password + '" placeholder="password">'
  loginElements.username = username
  loginElements.password = password
}
// Decoy inputs after the real ones handle jQuery ':last';
// an inline visibility:hidden style keeps them off the page
for (var a = 0; a < y; a++) {
  username = randomString()
  password = randomString()
  x += '<input type="text" id="' + username + '" name="' + username + '" placeholder="username" style="visibility:hidden">'
  x += '<input type="password" id="' + password + '" name="' + password + '" placeholder="password" style="visibility:hidden">'
}

The next thing a bot script can do is click on an x-y position on the screen. We can handle this by randomly changing the position of the elements.

var loginElements = {
  username: '',
  password: ''
}

function dynamicCSS() {
  var username, password
  var x = ''
  // Randomly pick one of two positions for the whole form
  if ((Math.random() * 2) > 1)
    x += '<div style="position:absolute;top:100px;left:100px">'
  else
    x += '<div style="position:absolute;top:250px;left:400px">'
  x += '<form method="post" action="/login">'
  x += '<h3>Please Login</h3>'
  var y = Math.floor((Math.random() * 5)) + 2
  for (var a = 0; a < y; a++) {
    username = randomString()
    password = randomString()
    x += '<input type="text" id="' + username + '" name="' + username + '" placeholder="username">'
    x += '<input type="password" id="' + password + '" name="' + password + '" placeholder="password">'
    loginElements.username = username
    loginElements.password = password
  }
  x += '<button type="submit">Login</button>'
  x += '</form>'
  x += '</div>'
  return x
}

function randomString() {
  var chars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXTZabcdefghiklmnopqrstuvwxyz'.split('')
  chars.sort(function() { return 0.5 - Math.random() })
  return chars.splice(0, 8).toString().replace(/,/g, '')
}


The position of the input elements is now random. This example only has two positions, but you can elaborate on it to create many possible combinations. You could also put your login form inside a modal window that changes position on the screen.

If you want to go further, you can look at splitting the login into two forms, username followed by password. Or, even better, randomly switch between the two approaches.

We have now addressed the main techniques a bot creator can use to identify your input elements and log in to your site.

Congratulations, you made it to the end!

What's next?

In my next post I will introduce techniques to identify bots and then look at launching a counter attack on the bot to crash it after it has been positively identified.

Be sure to subscribe to the blog so that you can get the latest updates.

For more AWS training and tutorials check out backspace.academy

Welcome aboard India!


AWS Announces New Asia Pacific (Mumbai) Region





At last India has its own region, with two availability zones. Much overdue, but sure to be a popular decision. The following services are available in the new region:

    AWS Certificate Manager (ACM)
    AWS CloudFormation
    Amazon CloudFront
    AWS CloudTrail
    Amazon CloudWatch
    AWS CodeDeploy
    AWS Config
    AWS Direct Connect
    Amazon DynamoDB
    AWS Elastic Beanstalk
    Amazon ElastiCache
    Amazon Elasticsearch Service
    Amazon EMR
    Amazon Glacier
    AWS Identity and Access Management (IAM)
    AWS Import/Export Snowball
    AWS Key Management Service (KMS)
    Amazon Kinesis
    AWS Marketplace
    AWS OpsWorks
    Amazon Redshift
    Amazon Relational Database Service (RDS) – all database engines including Amazon Aurora
    Amazon Route 53
    Amazon Simple Notification Service (SNS)
    Amazon Simple Queue Service (SQS)
    Amazon Simple Storage Service (S3)
    Amazon Simple Workflow Service (SWF)
    AWS Support
    AWS Trusted Advisor
    VM Import/Export

The available services will no doubt be expanded, so be sure to check the AWS documentation for the latest details.

New Course AWS Certified SysOps Administrator!



The much-awaited AWS Certified SysOps Administrator course has been released. It is available with the AWS Certified Associate course, and all existing members will have access!

BackSpace Academy

Pre-Warming of EBS Volumes is not necessary



A number of people have asked me about pre-warming of new EBS volumes. I do realise that there are a lot of courses and exam dumps out there stating this is necessary. In fact, it is not necessary for new volumes, and if you answer this incorrectly you will lose valuable marks on the exam.

The only situation where preparation is required before access is with volumes that were restored from a snapshot:

"New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming). However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazon S3 and written to the volume) before you can access the block." Initializing Amazon EBS Volumes

When in doubt, read the docs.

BackSpace Academy

Amazon Aurora Cross-Region Read Replicas


Watch out for this on the exam!

Just announced by AWS: Cross-Region Read Replicas for Amazon Aurora. You can now create Aurora read replicas in a different region from the master. Creating the new read replica also creates an Aurora cluster in the target region that can contain up to 15 more read replicas!

We will be updating the course material with the changes. In the meantime, more details in the docs: Replicating Amazon Aurora DB Clusters Across AWS Regions.

BackSpace Academy 

New videos for AWS Certified Associate Courses



We have just created more new videos for the AWS Certified Associate course:
Amazon DynamoDB Core Knowledge  (New)
Amazon Simple Queue Service (SQS) Core Knowledge  (New)
Amazon Simple Notification Service (SNS) Core Knowledge  (New)

BackSpace Academy 

New Course Videos added


We have just updated some existing videos and also created new videos for the AWS Certified Associate course:
AWS Virtual Private Cloud (VPC) Core Knowledge  (New)
AWS Relational Database Service (RDS) Core Knowledge (New)
AWS Elastic Beanstalk Core Knowledge (New)
AWS OpsWorks Core Knowledge (New)
Amazon EC2 Core Knowledge (Updated)

BackSpace Academy 

AWS Certificate Manager rolling out to new regions



Previously you would need to buy your SSL certificates outside of AWS, convert them to the format required by AWS, and then upload them to your ELB. Life is much easier now with AWS Certificate Manager, which provides this service, along with the certificates, for free! How cool is that?

The service has been rolled out to most regions so you may get a question on it in the exam.

AWS Certificate Manager is a service that lets you easily provision, manage, and deploy Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates. With AWS Certificate Manager, you can quickly request a certificate, deploy it on AWS resources such as Elastic Load Balancers or Amazon CloudFront distributions, and let AWS Certificate Manager handle certificate renewals. SSL/TLS certificates provisioned through AWS Certificate Manager are free. You pay only for the AWS resources you create to run your application.

More details at https://aws.amazon.com/certificate-manager/ 

BackSpace Academy 

ECS Auto Service Scaling

Watch for this on the exam! An ECS tutorial will be released with the SysOps videos.

Amazon EC2 Container Service Supports Automatic Service Scaling

Amazon EC2 Container Service (Amazon ECS) can now automatically scale container-based applications by dynamically growing and shrinking the number of tasks run by an Amazon ECS service.
Previously, when your application experienced a load spike you had to manually scale the number of tasks in your Amazon ECS service.
Now, you can automatically scale an Amazon ECS service based on any Amazon CloudWatch metric. For example, you can use CloudWatch metrics published by Amazon ECS, such as each service’s average CPU and memory usage. You can also use CloudWatch metrics published by other services or use custom metrics that are specific to your application. For example, a web service could increase the number of tasks based on Elastic Load Balancing metrics like SurgeQueueLength, while a batch job could increase the number of tasks based on Amazon SQS metrics like ApproximateNumberOfMessagesVisible.
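Under the hood this uses the Application Auto Scaling API. A rough sketch with the NodeJS SDK (the cluster, service and role names are hypothetical):

var AWS = require('aws-sdk')
var autoscaling = new AWS.ApplicationAutoScaling({ region: 'us-east-1' })

var target = {
  ServiceNamespace: 'ecs',
  ResourceId: 'service/my-cluster/my-web-service',
  ScalableDimension: 'ecs:service:DesiredCount'
}

// Register the service's DesiredCount as a scalable target...
autoscaling.registerScalableTarget(Object.assign({
  MinCapacity: 2,
  MaxCapacity: 10,
  RoleARN: 'arn:aws:iam::123456789012:role/ecsAutoscaleRole'
}, target), function(err) {
  if (err) return console.error(err)
  // ...then attach a step scaling policy for a CloudWatch alarm to trigger
  autoscaling.putScalingPolicy(Object.assign({
    PolicyName: 'scale-out-on-cpu',
    PolicyType: 'StepScaling',
    StepScalingPolicyConfiguration: {
      AdjustmentType: 'ChangeInCapacity',
      Cooldown: 60,
      StepAdjustments: [{ MetricIntervalLowerBound: 0, ScalingAdjustment: 2 }]
    }
  }, target), console.log)
})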

BackSpace Academy 

 

New EC2 instance X1

Watch out for this on the exam!

X1 instances, the largest Amazon EC2 memory-optimized instance with 2 TB of memory



X1 instances extend the elasticity, simplicity, and cost savings of the AWS cloud to enterprise-grade applications with large dataset requirements. X1 instances are ideal for running in-memory databases like SAP HANA, big data processing engines like Apache Spark or Presto, and high performance computing (HPC) applications. X1 instances are certified by SAP to run production environments of the next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS cloud.
X1 instances offer 2 TB of DDR4 based memory, 8x the memory offered by any other Amazon EC2 instance. Each X1 instance is powered by four Intel® Xeon® E7 8880 v3 (Haswell) processors and offers 128 vCPUs. In addition, X1 instances offer 10 Gbps of dedicated bandwidth to Amazon Elastic Block Store (Amazon EBS) and are EBS-optimized by default at no additional cost.

BackSpace Academy 

AWS Certified SysOps Exam Engine


We have just added the Exam Engine for the AWS Certified SysOps Associate course!

The exam engine is included with the Associate courses. All existing paid customers will receive access to the exam engine.

The course videos and material will be released in the next week.

BackSpace Academy 

2016 Updated Courses!






Major changes to the format of our courses! One payment enables access to all associate courses and exam engines.
Courses have been updated for 2016 and the format has been changed to make study easier. Core AWS subjects that are relevant to all streams are now in a separate course, and the specific subjects for the three streams are in separate courses.
Next week we will be adding the SysOps Administrator exam engine, followed by the course material the following week. This will also be added for any existing customers for free.

BackSpace Academy 

What a month in AWS!

It is certainly hard keeping up with AWS releases! Here are some of the highlights:

Amazon RDS Cross-Account Snapshot Sharing.

Watch out for this one on the certification exam!
Regular database snapshots have always been a part of any good AWS administrator's routine. Now the service is even better, with the ability to share snapshots across different accounts.
Organisations should have multiple separate linked accounts for a number of reasons: security, separation from production environments, cost visibility, etc. Now you can take snapshots of your production environment and copy them to a development account for testing without any risk.
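A sketch with the NodeJS SDK (the snapshot identifier and account id are hypothetical):

var AWS = require('aws-sdk')
var rds = new AWS.RDS({ region: 'ap-southeast-2' })

// Share a manual snapshot with a development account
rds.modifyDBSnapshotAttribute({
  DBSnapshotIdentifier: 'prod-snapshot-2015-11-01',
  AttributeName: 'restore',
  ValuesToAdd: ['123456789012']
}, function(err, data) {
  if (err) console.error(err)
  else console.log('Snapshot shared', data)
})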

EC2 Run Command.

Watch out for this one on the certification exam!
This new feature will help you administer your instances (no matter how many you have) in a manner that is both easy and secure.
It greatly increases security by allowing commands to be run remotely from the console, without having to log in through a bastion host.
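For example, a command can be sent to a whole fleet with a single SDK call (the instance ids are hypothetical):

var AWS = require('aws-sdk')
var ssm = new AWS.SSM({ region: 'us-east-1' })

// Run shell commands remotely - no SSH keys or bastion host required
ssm.sendCommand({
  DocumentName: 'AWS-RunShellScript',
  InstanceIds: ['i-1234567890abcdef0', 'i-0fedcba9876543210'],
  Parameters: { commands: ['uptime', 'df -h'] }
}, function(err, data) {
  if (err) console.error(err)
  else console.log('Command id:', data.Command.CommandId)
})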

EC2 Spot Blocks

Watch out for this one on the certification exam!
Now you can create spot instances that run for a fixed period of time from 1 to 6 hours.
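A sketch of requesting a two-hour spot block with the NodeJS SDK (the price and AMI id are hypothetical):

var AWS = require('aws-sdk')
var ec2 = new AWS.EC2({ region: 'us-east-1' })

// BlockDurationMinutes fixes the run time: 60 to 360, in multiples of 60
ec2.requestSpotInstances({
  SpotPrice: '0.10',
  InstanceCount: 1,
  BlockDurationMinutes: 120,
  LaunchSpecification: {
    ImageId: 'ami-12345678',
    InstanceType: 'm4.large'
  }
}, function(err, data) {
  if (err) console.error(err)
  else console.log(data.SpotInstanceRequests[0].SpotInstanceRequestId)
})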

MariaDB on AWS RDS

Watch out for this one on the certification exam!
We now have another database in the RDS suite. MariaDB is a fork of MySQL and can provide some additional capabilities.

AWS WAF - Web Application Firewall

Watch out for this one on the certification exam!
Another tool in your AWS security arsenal. Deploy custom and application-specific rules in minutes that block common attack patterns, such as SQL injection or cross-site scripting.

 

Amazon Inspector – Released in preview

Amazon Inspector is an automated security assessment service. This allows you to inspect your applications for a range of security vulnerabilities.

Amazon Kinesis Firehose

Load streaming data quickly and easily into Amazon S3 and Amazon Redshift to enable real-time analytics.

Amazon QuickSight – Released in preview

Amazon's new Business Intelligence tool for analysis of data from Amazon EMR, Amazon RDS, Amazon DynamoDB, Amazon Kinesis, Amazon S3 and Amazon Redshift. QuickSight utilises SPICE (Super-fast, Parallel, In-memory Calculation Engine) to return results from large datasets rapidly.

AWS IoT – Released in beta

AWS IoT (Internet of Things) provides cloud services for embedded devices. Tiny devices can use AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Machine Learning, Amazon DynamoDB and more to provide powerful capabilities for many applications.
Many embedded processor suppliers, including Intel, Microchip PIC, TI, BeagleBone, Avnet, Marvell, MediaTek, Renesas, Dragonboard and Seeeduino, provide starter kits to get you started.

AWS Mobile Hub – Released in beta

This service streamlines the process of creating mobile iOS and Android apps that use AWS services.


We will be updating our backspace.academy certification courses to reflect all the changes.

Study on the go with the new BackSpace Academy mobile site!


Due to popular demand we are now offering two platforms for the BackSpace AWS Certification prep courses: the existing desktop site, and the new BackSpace Academy mobile site for iOS and Android.

When you go to https://user.backspace.academy you will be automatically directed to the mobile site if you are using a mobile phone.

Great for studying practice exams on the go!

**New Course Release** AWS Certified Developer Associate Level

We have just released our latest course AWS Certified Developer Associate Level!

The focus is not only on answering exam questions correctly, but on learning how to build the next generation of cloud-connected apps using the JavaScript SDK in the browser and the NodeJS SDK on the server.

Advanced hands-on video labs include:

  • Setting up a NodeJS Development Environment on AWS EC2
  • Creating a Low Cost Sync Database for JavaScript Applications with AWS
  • Programming and Deployment using AWS CloudFormation
  • Programming Amazon SQS and SNS using the AWS NodeJS SDK
  • Programming AWS DynamoDB using the AWS NodeJS SDK
  • Programming AWS ElastiCache Redis using the AWS NodeJS SDK
  • Programming AWS Lambda
Professionally created lab notes for all labs.

Expert system based exam engine with a question pool of over 800 questions!

Full coverage and testing of all knowledge required for certification.

Check it out now at backspace.academy!

New S3 Storage Class

AWS has just announced another storage class for S3.

Amazon S3 Standard - Infrequent Access (Standard - IA) is an Amazon S3 storage class for data that is accessed less frequently, but requires rapid access when needed. Standard - IA offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance make Standard - IA ideal for long-term storage, backups, and as a data store for disaster recovery. The Standard - IA storage class is set at the object level and can exist in the same bucket as Standard, allowing you to use lifecycle policies to automatically transition objects between storage classes without any application changes.

The following summarizes the durability and availability (both "designed for" figures) offered by each of the storage classes:

  • STANDARD – Durability: 99.999999999%; Availability: 99.99%; no other considerations.
  • STANDARD_IA – Durability: 99.999999999%; Availability: 99.9%; there is a retrieval fee associated with STANDARD_IA objects, which makes it most suitable for infrequently accessed data. For pricing information, see Amazon S3 Pricing.
  • GLACIER – Durability: 99.999999999%; Availability: 99.99% (after you restore objects); GLACIER objects are not available for real-time access. You must first restore archived objects before you can access them, and restoring can take 3-4 hours. For more information, see Restoring Archived Objects.
  • RRS – Durability: 99.99%; Availability: 99.99%; no other considerations.
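Because the storage class is set at the object level, lifecycle policies can do the transition for you. A CloudFormation sketch (the bucket and rule names are hypothetical) that moves objects to Standard - IA after 30 days:

Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      LifecycleConfiguration:
        Rules:
        - Id: MoveToStandardIA
          Status: Enabled
          Transitions:
          - StorageClass: STANDARD_IA
            TransitionInDays: 30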

Our certification courses will be updated this week with the changes.

Important changes by AWS to Auto Scaling Policies

Today AWS announced the introduction of new auto scaling policies with steps. This is a significant change: auto scaling no longer needs to be a single-step response to a CloudWatch alarm. You can now have many steps, enabling small changes in capacity in response to small changes in demand, and likewise for large changes. The result is a highly reactive and smooth response to demand.
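For example, a step scaling policy in CloudFormation YAML might look like this (the group name and thresholds are hypothetical):

ScaleOutPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName:
      Ref: WebServerGroup
    PolicyType: StepScaling
    AdjustmentType: ChangeInCapacity
    StepAdjustments:
    # alarm exceeded by 0-20: add one instance; by more than 20: add three
    - MetricIntervalLowerBound: 0
      MetricIntervalUpperBound: 20
      ScalingAdjustment: 1
    - MetricIntervalLowerBound: 20
      ScalingAdjustment: 3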
We have updated our documentation "Lab Notes - Highly Available and Fault Tolerant Architecture for Web Applications inside a VPC" to v1.02 to reflect this change. Please make sure you understand this before sitting the AWS certification exam.