Billing
Security Group : Free
VPC
Instance
NACL: Bill
Application Load Balancer: Bill
Target Groups: Bill
Elastic IP Address: Bill
Network Load Balancer: Bill
Classic Load Balancer: Free
RDS: Bill (by default the allocated storage is 20 GB).
S3: Bill
S3 Rules (Lifecycle Rules): Bill
NFS & EFS: Bill
Enable automatic backups: Bill (it is under EFS section)
Enable Encryption of data at rest: Bill (it is under EFS section)
CloudWatch Enable: Bill
https://digitalcloud.training/amazon-vpc/
======================================
AWS : Amazon Web Services.
It is one of the cloud service providers in the market.
AWS is managed by amazon.com
cloud computing: services / resources → online → OnDemand → through → internet.
services / resources → servers, database, backup , storage , network ..etc
cloud →1 Linux server ( ec2 instance ) → less than one minute.
1 Linux ec2 instance → 2 years →bill pay → pay as you go.
30 % and 70 %
why cloud computing??
Datacentres : group of physical servers → managed → in a single place == ON-PREMISE INFRASTRUCTURE
Buildings , space , hardware , cabling , switches , hubs , routers , manpower , power supply , field engineers , o.s , application , database , storage , backup ..etc. → APP → EU.
1. time
2. money
Physical linux server → minimum 3 months to procure
70 % and 30 %
AWS , Azure , GCP , OC , IC , SC , AC , RC ,..etc
Cloud service models : 3 types.
1. IAAS : Infrastructure as a service.→Admins →infrastructure → network , storage , servers , backup ,database..
2. PAAS : Platform as a service.→Developers → code →java , .net , python.
3. SAAS : Software as a service → Endusers →money pay → app ( client) use. → 24/7 →online
4. DAAS : Desktop as a service
IAAS → ADMIN.
Types of clouds: 3 types.
1. public cloud : A cloud which is directly exposed to the internet is called a public cloud.
2. private cloud : A cloud which is not directly exposed to the internet is called a private cloud.
3. Hybrid cloud : A combination of both public and private cloud is called a hybrid cloud.
AWS :
1. Region : it is a geographical location in the cloud.→logical data centers.
2. Availability zone → High availability ( HA) →physical / local data centers
a group of local data centers is called a Region.
Regions → 27 regions.
Availability zones → 91 availability zones.
AWS : Key Components :
VPC
Internet gateway(IGW)
Subnets
Routing tables
Security group.
1. VPC : Virtual private cloud. (VPC is free of cost.)
It is an isolated network and Unique in the cloud.
VPC is Region specific.
Every Region has one default VPC; do not delete this default VPC. We will not be able to do tasks in that particular region if we delete the default VPC.
Suppose we accidentally delete this VPC; then we have to raise a support case with the AWS people.
We can create 5 VPCs per Region ( in an AWS free-tier account ). We can create n number of VPCs in a licensed account ( in real time ).
VPC has a CIDR notation →/16.
2. Internet Gateway : (Internet gateway is the free of cost.)
It is the gateway for all end-users to access the application.
Internet gateway is Region specific.
Every Region has one default Internet gateway; do not delete this default one. We are not able to do things in this Region if we delete it.
Internet gateway also has a CIDR notation: 0.0.0.0/0 ( 0.0.0.0/0 means anybody can access this application ).
We can create multiple internet gateways under one VPC; by default they are in Detached status only.
An internet gateway is Attached ( status changes to Attached ) to a VPC and to Routing Tables.
3. Subnets :(Subnets are free of cost.)
It is a smaller network inside VPC.
We will create multiple Subnets under one VPC.
Subnets are Availability Zone specific.
Subnets also have a CIDR notation →/24
Subnets are Attached to Routing Tables.
Every Region has multiple Subnets.
Every Region has default Subnets; do not delete these default ones.
4. Routing Tables :(Routing Tables are free of cost.)
It is virtual Router in the cloud.
The main purpose of Routing Tables is to communicate with the different Networks.
Routing Tables are Region specific.
Every region has one default Routing Table →do not delete this.
whenever we create a vpc, AWS implicitly and automatically creates a Routing Table; that is called the Main Routing Table.
we can also create our own Routing Tables; these are called Custom Routing Tables.
Routing Tables are attached to the internet gateway and subnets.
* Qus :- Difference Between Default Routing Table , Main Routing Table and Custom Routing Table ?
Routing Tables :
1. Default Routing Table :- Whenever we create an AWS account, by default one Routing Table is created; that is called the Default Routing Table.
2. Main Routing Table :- Whenever we create a VPC, by default AWS creates one Routing Table for every VPC; that is called the Main Routing Table.
3. Custom Routing Table :- Whenever we create our own Routing Table, that is called a Custom Routing Table.
5. Security Group:
It is a virtual firewall at EC2 instance level.
It contains a set of rules ( ssh , http , https , MySQL , alltraffic ...etc ); every Rule / Application has its OWN port number.
In a Security Group the Source has 3 options:
- Anywhere: anybody can access; used in free-tier practice.
- Custom: used in real time ( within our organization network ).
- My IP: allows only our specific IP; not used much.
Security Groups are Region specific.
Security Groups are free of cost.
Every Region has one default Security Group but do not delete this.
Security Groups have Inbound Rules and Outbound Rules.
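As a rough AWS CLI sketch of the same idea ( the IDs vpc-0aaa111 / sg-0aaa111 below are hypothetical placeholders ):
# create a security group in a VPC
aws ec2 create-security-group --group-name web-sg --description "web servers" --vpc-id vpc-0aaa111
# inbound rule : allow ssh ( port 22 ) from Anywhere ( 0.0.0.0/0 )
aws ec2 authorize-security-group-ingress --group-id sg-0aaa111 --protocol tcp --port 22 --cidr 0.0.0.0/0
# inbound rule : allow http ( port 80 ) from Anywhere
aws ec2 authorize-security-group-ingress --group-id sg-0aaa111 --protocol tcp --port 80 --cidr 0.0.0.0/0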
Public IP Address:
it is used to log in to the EC2 instance and to give the application to end-users.
it is visible in the AWS console dashboard only.
Whenever we stop and start the EC2 instance, the Public IP address changes automatically, because it is dynamic.
Public IP is dynamic : the application stops reaching the End User, which impacts the business ( if we stop and start the EC2 Instance ).
To overcome this, in real time we use an Elastic IP address.
Private IP Address: It is used for internal communication.
it is visible in the AWS console dashboard and inside the EC2 instance.
Whenever we stop and start the EC2 instance, the Private IP address will not change, because it is static.
Elastic IP Address ( real time ): It is similar to a public IP address.
it is used to log in to the EC2 instance and to give the application to the end-user.
it is visible in the AWS console dashboard only.
Whenever we stop and start the EC2 instance, the Elastic IP address will not change, because it is static.
Elastic IP address is a purchasable one ( billable ).
public IP address : 18.179.42.0 / 18.181.197.67
private IPaddress : 50.20.9.151 / 50.20.9.151
Elastic IP : 54.95.201.23 / 54.95.201.23
Public Subnet :
A subnet which is directly exposed to the internet is called a Public Subnet.
If the IGW is attached to the Routing Table, then that Subnet is called a Public Subnet.
We place application instances and web servers in the Public Subnet.
These instances are accessed by all end-users; these instances have a public IP / elastic IP.
These instances are accessed via public IP / elastic IP address.
Private Subnet :
A subnet which is not directly exposed to the internet is called a Private Subnet.
If the IGW is not attached to the routing table, then that subnet is called a Private Subnet.
We place database , backup , storage ..etc in the Private Subnet.
These instances are not accessible to end-users.
These instances do not have a public IP / Elastic IP.
These instances are not accessed via public or elastic IP.
These instances are accessed only through the Private IP address.
NAT instance : Network Address Translation.
The main purpose of NAT instance is to grant or provide internet access to Private subnet.
In general database, backup, storage etc.. are placed in the Private Subnet, so those instances have no need of internet access.
Whenever we want to Update or Upgrade the database, backup, storage etc. in the Private Subnet, then these instances require internet access.
NAT instance thumb rule : NAT instance must be launched in Public Subnet.
inbound rules / inbound access / inbound traffic : traffic passing from the IGW to the EC2 instance.
outbound rules / outbound access / outbound traffic : traffic passing from the EC2 instance to the IGW.
NAT follows outbound rules.
NAT instance → launch instance → community AMIs → search → NAT → a number of NAT instances are displayed → choose any one → normal EC2 instance creation steps.
select NAT instance → actions → networking → change source/destination check → by default it is in the enabled state → we disable it → stop ( check the check box ).
→ from the NAT instance → DB instance → SSH configuration → now you are in the DB instance.
ping google.com
ping gmail.com
ping fb.com
→ ping success...
Note :- There is no Public IP for the instance which is in the Private Subnet.
Note :- NAT Follows OutBound Rules
Note : The NAT Instances must be launched in Public Subnet
NATGW: to grant or provide internet access to Private Subnet.
NATGW also follows outbound traffic..
Public Subnet : web servers instances
Private Subnet : database , storage , backup Instances, ... ( publicly not accessible). Generally, these instances are having Private IP access only not having Public IP access.
NATGW is highly available and it is maintained by AWS.
NAT Instance is not highly available and it is maintained by us.
We can also make the NAT Instance highly available by using a script --->>> HA.
The NAT Instance must be launched in the public subnet by using community AMIs; for this we need to search for NAT in the community AMIs.
Note :- NATGW must be launched in the Public Subnet.
Note :- NATGW also follows OutBound Rules.
Creating a NATGW follows the same infrastructure process as a NAT Instance.
For Eg :
Create a VPC named NGVPC taking the 30.20.0.0/16 series IP, then create an IGW ( named NGIGW ) and attach it to NGVPC, then
create two subnets : 1. NGPublicsubnet 2. NGPrivatesubnet
whenever we create a VPC, by default the Main RTB is created; this RTB is attached to the IGW and NGPublicsubnet.
Then create one normal EC2 instance under the Public Subnet.
Then create a NATGW; it must be in the public subnet and it must have an Elastic IP.
Note : the NATGW must have an Elastic IP; there is no need of an Elastic IP for a NAT Instance.
Then create our own RTB ( called a Custom RTB ); in it we take 0.0.0.0/0 pointing to the NATGW, and we also attach NGPrivatesubnet.
create one more new instance → under the private subnet ( named storage ), then disable Auto-assign Public IP.
finally : log in to the public subnet instance, then configure SSH and log in to the storage instance as a remote user with the help of the private IP.
ping google.com
ping success
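The same NATGW setup can be sketched with the AWS CLI ( all IDs below are hypothetical placeholders ):
# allocate an Elastic IP for the NATGW ( NATGW must have an Elastic IP )
aws ec2 allocate-address --domain vpc
# launch the NATGW in the public subnet
aws ec2 create-nat-gateway --subnet-id subnet-0public1 --allocation-id eipalloc-0aaa111
# in the custom RTB attached to the private subnet, route 0.0.0.0/0 to the NATGW
aws ec2 create-route --route-table-id rtb-0custom1 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0aaa111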
==========
VPC Wizards
Wizards simplify our VPC, IGW, Routing Tables, Subnets, Security Group, NAT and NAT GATEWAY configurations.
Currently we have 4 types of wizards******
VPC with public subnet
VPC with public and private subnet
VPC with Public and Private Subnets and Hardware VPN Access
VPC with a Private Subnet Only and Hardware VPN Access.
Hardware VPN Access :
VPN : virtual private network..
Hardware VPN Access : Network people → create the VPN → a link is generated → vpn link.
VPN link → the purpose of the vpn link is to connect to the client's network.
Every project has one vpn link.
How to access the vpn link ???******
1. Network people will send a mail with a URL.
2. Click the URL link in the email and enter the username & password given by the Network People, and then enter the 6-digit RSA token number.
3. now you are in the client's network.
The username and password are also provided to us by the networking people via email.
How to access / connect application instances in your organization ???*******
For this, first we must be connected to the VPN link.
Eg :- ramakrishna is working in IBM and his client is DBS; this client is in Singapore.
1. here ramakrishna needs to log in to the AWS account.
2. the IAM Admin team will create one aws account for ramakrishna.
3. ramakrishna will log in to the AWS account with the username and password. If the user is still not able to log in, there is a second level of security, i.e. MFA ( Multi Factor Authentication ).
This second-level security has 2 ways:
1. Mobile number : OTP ( 6-digit number ); once we enter the OTP we are able to log in to the AWS account.
2. we need to install a mobile app ( Google Authenticator ) for logging in to the AWS account; we scan the QR code shown, get a 6-digit number, and use this number to log in to the AWS account.
All the pem files are located in Jumpserver
From Jumpserver we have to do the SSH configuration.
Important key point to connect Application Instances:
1. Jump servers / jump instances / bastion hosts are used for security purposes.
every project has 5 to 7 jump servers, and these servers are managed by N/W people.
2. log in to Application instances from these Jump servers, then provide the application to End Users.
first you need to log in to the jump server; after that you log in to the application instances.
Eg : jumpserver IP = 192.168.5.10; through putty we connect to the jumpserver.
now you are in the jump server ==>> through ssh we connect to the application instances.
ssh -i /tmp/central.pem ec2-user@appinstanceIP ( elastic / private ) ==>> enter ==>> now you are in the application instance.
==========
VPC Peering :
The main purpose of vpc peering is to communicate between different networks.
Scenario :
application team →30.20.0.0/16
admin team →60.20.0.0/16
By making peering between these two teams → we can transfer files, and a remote user can log in and install applications.
VPC peering thumbrule :
1. Both VPCs' CIDR notations must not collide ( overlap ) with each other.
2. VPC peering does not support transitive peering ( it supports only direct, same-sequence peering ).
Eg : VPC1 →VPC2 →VPC3 →VPC4 : direct peering in the same sequence is supported.
Eg : VPC1 →VPC5→ VPC3→VPC7 : transitive order is not supported.
Here VPC1 cannot communicate with VPC3.
Here VPC2 cannot communicate with VPC4.
3. VPC peering→ here we need to specify who is the requester and accepter.
key point: Both VPCs' CIDR notations are interchanged in both main routing tables.
Then a VPC Peering connection is established between these two VPCs.
VPC peering can be done in 3 ways:
1. same region.
2. different region.
3. different accounts / cross accounts..
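A minimal AWS CLI sketch for same-region peering ( all IDs are hypothetical placeholders ):
# requester side : request peering from the application VPC to the admin VPC
aws ec2 create-vpc-peering-connection --vpc-id vpc-0app1111 --peer-vpc-id vpc-0adm1111
# accepter side : accept the request
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0aaa1111
# interchange the CIDRs in both main routing tables
aws ec2 create-route --route-table-id rtb-0app1111 --destination-cidr-block 60.20.0.0/16 --vpc-peering-connection-id pcx-0aaa1111
aws ec2 create-route --route-table-id rtb-0adm1111 --destination-cidr-block 30.20.0.0/16 --vpc-peering-connection-id pcx-0aaa1111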
=======
[root@ip-30-20-9-42 ec2-user]# history
1 ping 60.20.6.13
2 vi /tmp/plugins.pem
3 chmod 700 /tmp/plugins.pem
4 touch peer1
5 scp -i /tmp/plugins.pem peer1 ec2-user@60.20.6.13:/home/ec2-user
6 ls
7 ssh -i /tmp/plugins.pem ec2-user@60.20.6.13
8 history
[root@ip-30-20-9-42 ec2-user]#
[root@ip-30-20-9-42 ec2-user]#
=============
Transit gateway :
The main purpose of a Transit Gateway is to communicate between different networks.
Transit gateway thumb rule :
Both VPCs' CIDR notations should not collide with each other.
Transit gateway supports transitive peering.
Eg : - vpc1 → vpc2 →vpc3 → vpc4
create the transit gateway → whenever you create a TGW → transit gateway routing tables are created automatically.
next we create TGW attachments for every vpc ( infrastructure )
key point : ALL VPC CIDR notations are interchanged in ALL main routing tables..
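A rough AWS CLI sketch ( all IDs are hypothetical placeholders ):
# create the TGW ( a TGW routing table is created automatically )
aws ec2 create-transit-gateway --description "shared TGW"
# create a TGW attachment for every VPC
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id tgw-0aaa1111 --vpc-id vpc-0aaa1111 --subnet-ids subnet-0aaa1111
# in each VPC's main routing table, route the other VPCs' CIDRs to the TGW
aws ec2 create-route --route-table-id rtb-0aaa1111 --destination-cidr-block 60.20.0.0/16 --transit-gateway-id tgw-0aaa1111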
============
VPC endpoints : without having a public IP we can still access other AWS services through VPC endpoints ( the routing table is changed automatically ).
private subnet → no internet access → create NAT instance → private subnet → EC2 instance ( database ) → s3 create buckets.
AWS →storage →s3 →simple storage service →buckets →create →objects→ upload , download , delete , rename.
aws s3 ls
aws s3 mb s3://s3ram2
EC2 instance --->> add a role
role -->>> aws --->>> service1 (EC2) --->> communicate with other service2 (s3) --->> then we need a role.
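A minimal sketch of creating a gateway endpoint for S3 with the AWS CLI ( the IDs are hypothetical placeholders; the region is assumed to be ap-south-1 ):
aws ec2 create-vpc-endpoint --vpc-id vpc-0aaa1111 --service-name com.amazonaws.ap-south-1.s3 --route-table-ids rtb-0aaa1111
# after this, from the private instance, S3 works without a public IP :
aws s3 ls
aws s3 mb s3://s3ram2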
===========
VPC flowlogs: Infrastructure --->> app --->> not going to EU --->> troubleshoot --->> logs -->> store -->> analysis
VPC --->> EC2 instance -->> app -->> install -->> EU
VPC -->> network issues --->> logs --->> generate -->> log groups --->> AWS -->> s3 buckets
s3 -->> simple storage service
AWS --->> storage --->> s3 -->>> simple storage service ---> buckets --->> create -->> objects -->> upload , download , delete , rename.
==============
remove_bucket: s3bhaskar006
[root@ip-10-20-10-208 ec2-user]# history
1 ping gmail.com
2 aws s3 ls
3 aws s3 ls
4 aws s3 ls
5 aws s3 mb s3://s3bhaskar006
6 aws s3 mb s3://s3bhaskar007
7 aws s3 rb s3://s3bhaskar007
8 aws s3 rb s3://s3bhaskar006
9 history
==========
**********How to provide security to the VPC.
two ways :
1. security group
2. NACL ( network access control list )
1. security group :
It is a virtual firewall at EC2 instance level.
It contains set of rules.
source has 3 options :
1. custom 2. Anywhere 3. MYIP
1. Custom : within our organization network ( 10.20.5.0/16 )
2. Anywhere : everyone can access our application
3. MYIP : wifi → IP address, but one particular IP : 117.208.194.37/32
Inbound access : internet →IGW to EC2 instance
Outbound access : internet →EC2 instance to IGW
Security groups are stateful.
Security groups are a sub-service at the EC2 instance level.
NACL :
Network ACLs and its Characteristics
NACL is firewall acts at SUBNET level
When a VPC is created, a NACL is implicitly created; this is called the Default NACL
Default NACL allows all inbound and outbound traffic
Any subnet created is implicitly associated with default NACL
Multiple subnets can be associated with one NACL
One subnet can be associated with only one NACL
Every rule explicitly allows or denies traffic; for this reason the NACL is called Stateless.
NACLs are stateless, i.e. inbound traffic is controlled by inbound rules and outbound traffic is controlled by outbound rules.
We have the option to block an IP address/network using a NACL
==============
AMI ==>>> Amazon machine image.
Definition: It is a template; it contains the OS and the pre-defined applications / softwares installed on it.
Purpose: to take a Backup of an EC2 Instance ( in case someone unfortunately terminates the instance ).
By default, AWS provides a lot of default AMIs; these AMIs are all Public ( created by AWS ).
We can create our own AMIs; those are called custom AMIs.
By default, all custom AMIs are Private ( created by our own account ).
How to create our own AMI ??
select EC2 instance ==>> Actions ==>> Images and templates ==>> Create image ==>> give a name ( eg : Sreenivas ) ==>> the AMI will be created.
We can create N number of EC2 instances from one AMI.
When we create our own AMI, one Snapshot is created automatically.
We can create a number of AMIs from one Snapshot.
Default Snapshots are stored in S3 Buckets.
S3 buckets are stored with High Availability (HA).
We can also copy AMIs and Snapshots from one Region to another Region; this is called CRR or DR ( Disaster Recovery management ).
EC2 instance==>> AMI
AMI ==>> EC2 instance
SNAPSHOT ==>>> AMI
Note :- we are able to take a backup at 5 layers.
Note : There is a bill for AMIs, Snapshots and Instances.
Ques : are Snapshots copied if we copy an AMI from one region to another region ?
Ans : Yes
Ques : are AMIs copied if we copy Snapshots from one region to another region ?
Ans : No
Ques : is the Snapshot deleted if we delete the AMI ?
Ans: No.
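A rough AWS CLI sketch for creating and copying an AMI ( the instance/AMI IDs are hypothetical placeholders ):
# create a custom AMI from an instance ( a snapshot is created automatically )
aws ec2 create-image --instance-id i-0aaa1111 --name "Sreenivas" --description "backup AMI"
# copy the AMI ( with its snapshot ) to another region ==>> DR
aws ec2 copy-image --region ap-southeast-2 --source-region ap-south-1 --source-image-id ami-0aaa1111 --name "Sreenivas-DR"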
================
EBS: Elastic Block Storage.
In Linux, data is stored as block-level storage.
Real time Scenario :-
The application team raises a request to the admin team ( ours ) ===>> 500 gb volume ==>> Linux EC2 instance ==>> create the file system ==>> app5 ==>> create the mount point ==>> install the app.
EBS thumbRule : EC2 instance and volume should be in same availability zone.
1. we need to create one EC2 instance in 2a ( AZ).
2. create volume ===>> 500gb ==>> in 2a ( AZ).
3. we will attach this volume to EC2 instance.
Device Naming conventions:
/dev/sda to /dev/sdp ==>> we can attach 16 volumes to one EC2 Instance.
/dev/sda to /dev/sde ==>> the OS internally uses these first 5 device names.
we create external volumes from /dev/sdf to /dev/sdp ( 11 volumes ).
after logging in to the EC2 instance, the kernel renames /dev/sdf to /dev/xvdf.
/dev/sdf ===>> /dev/xvdf
after login into the EC2 instance ==>> follow the below steps..
1. fdisk -l ( o.s control )
2. lsblk ===>>> kernel identification.
3. mkfs.ext4 /dev/xvdf ===>>> creating the file system.
4. mkdir app5 ==>> creating the directory..
5. attaching a directory to the file system is called mounting; so now app5 is called a mount point.
mount -t ext4 /dev/xvdf app5
6. cat /etc/mtab
7. we make this filesystem permanent.
vi /etc/fstab
device name mountpoint type of filesystem defaults 0 ( dump ) 0 ( checksequence)
/dev/xvdf /home/ec2-user/app5 ext4 defaults 0 0
esc shift:wq! ==>> save.
8. cd app5
ls
lost+found
touch {a..k}
reboot ..
==========
Note : EBS is persistent storage (permanent storage.)
[root@ip-172-31-44-137 ec2-user]# history
1 fdisk -l
2 lsblk
3 mkfs.ext4 /dev/xvdf
4 mkdir app5
5 mount -t ext4 /dev/xvdf app5/
6 cat /etc/mtab
7 vi /etc/fstab
8 ls
9 cd app5/
10 ls
11 touch {a..z}
12 ls
13 mkdir one two three four five sachin yuvi
14 ls
15 cd ..
16 history
[root@ip-172-31-44-137 ec2-user]#
[root@ip-172-31-44-137 ec2-user]#
================
EBS ==>> volume ==>>> backup ==>> by using Snapshot.
volume ==>> we will create Snapshot from volume.
Snapshot ==>>> we will create volume from Snapshot.
we can take a backup not only of a volume but also of the entire EC2 instance.
and also, we can break the EBS thumb rule by using a Snapshot.
KeyPoint : we can increase the volume size, but we cannot decrease the size of a volume.
EBS : we can also take scheduled backups by using the Data Lifecycle Manager.
Default Snapshots are stored in S3 buckets.
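The same snapshot workflow as a rough AWS CLI sketch ( the IDs are hypothetical placeholders ):
# take a snapshot of a volume
aws ec2 create-snapshot --volume-id vol-0aaa1111 --description "app5 backup"
# create a volume from the snapshot in a different AZ ( this is how the EBS thumb rule is broken )
aws ec2 create-volume --snapshot-id snap-0aaa1111 --availability-zone ap-south-1b
# increase ( never decrease ) the size of a volume
aws ec2 modify-volume --volume-id vol-0aaa1111 --size 1000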
==========
[root@ip-172-31-6-230 app200]# history
1 fdisk -l
2 lsblk
3 mkdir app100
4 mount -t ext4 /dev/xvdf app100/
5 vi /etc/fstab
6 ls
7 cd app100/
8 ls
9 cd ..
10 fdisk -l
11 lsblk
12 mkfs.ext4 /dev/xvdg
13 mkdir app200
14 mount -t ext4 /dev/xvdg app200
15 cat /etc/mtab
16 vi /etc/fstab
17 ls
18 cd app200/
19 ls
20 mkdir rama bharagavi pavan srinivas sehwag
21 ls
22 touch {1..20}
23 ls
24 history
[root@ip-172-31-6-230 app200]#
[root@ip-172-31-6-230 app200]#
Ques : What are the fields in vi /etc/fstab*********
Ans : device name mountpoint typeoffilesystem defaults 0 0
here first 0 is called Dump
2nd 0 is called Check Sequence
Eg : /dev/xvdf /home/ec2-user/app5 ext4 defaults 0 0
Device name: /dev/xvdf
Mountpoint: /home/ec2-user/app5
typeoffilesystem : ext4
Dump is used by backup utilities to decide whether to back up the filesystem; Check Sequence decides the order in which filesystems are checked while rebooting the Linux system.
==============
******EBS Volume Types
1. General Purpose SSD (Solid State Drive)
2. Provisioned IOPS SSD
3. Throughput Optimized HDD (Hard Disk Drive)
4. Cold HDD
5. Magnetic
============
***********Instance Store : ( in real time we are not using this one because the data is not persistent )
It is a temporary store; when we power off the Instance, the data on this store is lost.
Storage cost is very cheap compared with EBS.
Use this type to store temporary data.
=============
*****EC2 Instances (Purchase Options)
- On-Demand Instances
- Reserved Instances ( in Realtime we are using this Instance)
- Spot requests
- Dedicated Hosts
- Scheduled Instances
===============
**********Instance Types :
1. General purpose.
2. Compute optimized.
3. GPU optimized.
4. Memory optimized.
5. Storage optimized.
================
EC2 instance ( ex : t2.micro ===> 1 cpu , 1 gb ram ) ==>> SBI app installed ==>> EU
after some days ==>> SBI app ==>> incoming traffic increases ==>> CPU / DISK / NETWORK utilization is high ==>> the EC2 instance goes into a hung state.
at that time our application is not reaching the end user ===> client ==>> business impact.
To overcome the above scenario we increase the EC2 instance size ( increase the hardware resources ) ===>> vertical scaling.
******How to increase our EC2 instance size ??
*******How to change the EC2 instance type ??
***********Is it possible to decrease the EBS Volume size ? Ans : NO, we can only increase, not decrease.
1. you need to stop the EC2 instance.
2. you increase your EC2 instance size.
EC2 Instance resizing
Instance resizing is a way to scale up or scale down our EC2 instances.
Note: We must stop the instance before resizing.
Select Instance → Actions → Instance Settings → Change Instance Type.
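The same steps as a rough AWS CLI sketch ( the instance ID and target type are hypothetical placeholders ):
aws ec2 stop-instances --instance-ids i-0aaa1111
aws ec2 modify-instance-attribute --instance-id i-0aaa1111 --instance-type Value=t2.small
aws ec2 start-instances --instance-ids i-0aaa1111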
===============
How to protect our EC2 instances from accidental deletions??
1. select EC2 instance → Actions → instance settings ==>>> change termination protection ==>> enable.
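The same setting as a one-line AWS CLI sketch ( the instance ID is a hypothetical placeholder ):
aws ec2 modify-instance-attribute --instance-id i-0aaa1111 --disable-api-termination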
============================
EC2 Userdata :
Using this option we can run scripts at EC2 launch time. There are many use cases for this; for example, if we want to configure our servers with chef/puppet, we need chef/puppet agents on our machines, and this can be achieved using userdata.
Example: Using user data
1. Install apache server
2. Start and enable apache server
3. Deploy a sample html file on the apache server
Launch EC2 and at step 3 under user data paste this script
#!/bin/bash
yum install httpd -y
service httpd start
chkconfig httpd on
echo "<h1> User data example </h1>" > /var/www/html/index.html
Note: Do not explicitly mention sudo; all the scripts in user data run internally using sudo.
==============
ELB : Elastic Load Balancer..
The main purpose of ELB is to distribute the incoming traffic to our application.
ELB maintains the high availability of our application.
ELB thumb Rule :EC2 instances must be in different availability zones.
Here we take two availability zones.
EC2 ( 1a availability zone ) : here we install the orders app.
EC2 ( 1b availability zone ) : here we install the payments app.
ELB supports both intranet and internet facing.
intranet : it can be used within the Organization.
internet : it can be used outside the organization.
ELB is Region specific.
Security group is the security for the ELB.
ELB supports both HTTP and HTTPS protocols.
http ==>> 80
https ==>> 443 ( in real time we are using HTTPS ).
https comes with security; for this we can upload an SSL certificate.
******* How to add an SSL Certificate link in ELB
Ans : The AWS control manager raises a request to the cyber security team to generate the SSL certificate link; the cyber security team replies by mail to the AWS control manager, who forwards that email to us. This is called ELB SSL certificate termination.
ELB performs health checks on the registered instances and distributes incoming traffic only to healthy instances.
Healthy instances are instances in the up-and-running state.
Unhealthy instances are instances in the down / not-running state.
If the ELB finds an unhealthy instance, it automatically takes it out of rotation.
If the ELB finds that an unhealthy instance has become healthy again, it automatically brings it back into rotation.
ELB can be divided into 4 types.
1. Classic Load Balancer ( CLB)
2. Application Load Balancer ( ALB)
3. Network Load Balancer ( NLB).
4. Gateway Load balancer(GLB)
==========================
1. Classic Load Balancer ( CLB)
1. we need to take two instances in different availability zones.
One is ec2 ( 1a ) as an Orders app
One is ec2 ( 1b ) as a Payments app
2. we need to attach these two instances to the CLB.
3. then we get a DNS link; paste it in the browser and refresh it to get orders / payments.
4. IP and Port-based Route Mapping happen in CLB
=========================================
in general, we use the Classic Load Balancer for Monolithic ( Static ) Applications.
ELB ==>> cross-zone load balancing should be enabled.
Note : the Security Group should match between the EC2 Instance and the ELB.
Note : Stickiness should be in the Disabled state to distribute the load.
Note : logs should be stored in an S3 bucket; by using these logs we can troubleshoot issues, but the ELB and S3 must be in the same Region.
Note : we are able to migrate a load balancer from Classic to Application Load Balancer, but not from Application to Classic.
Note : a Target Group is created automatically whenever we migrate a Classic LB to an Application LB.
====================================================
2. Application load balancer ( ALB )
the Application Load Balancer does path-based routing / route mapping. In general, we use the Application Load Balancer for Microservices ( Dynamic Applications ).
we need to take two instances in different availability zones.
1. ec2 ( 1a) ===>>> orders app install
we need to log in to the ec2 instance manually ===>>> install the http application ===>> yum install -y httpd
service httpd start ( starting the service of httpd )
cd /var/www/html
mkdir orders
cd orders
vi index.html ==>> orders html code
public Ip :80/orders ===>> path
2. ec2 ( 1b )===>>> payments app install
we need to log in to the ec2 instance manually ===>>> install the http application ===>> yum install -y httpd
service httpd start ( starting the service of httpd )
cd /var/www/html
mkdir payments
cd payments
vi index.html ==>> payments html code
public Ip :80/payments ===>> path
====================================================
ALB : we need to create target groups.
Each application has its own target group:
orders ( 1a ) ===>> target group
payments ( 1b ) ===>> target group
Attach these target groups to the ALB.
ALB : ==>>> for a particular path ===>>> we apply conditions based on the target groups.
Specify ==>> default target group ===>> which target group ( orders / payments ).
if /orders* then forward to the orders target group ===> adding the rules.
if /payments* then forward to the payments target group ===> adding the rules.
ALB ===>>> create ===> DNS LINK/orders ==>>> orders app ==>> EU.
DNS LINK/payments ==>>> payments app ==>> EU.
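A rough AWS CLI sketch of the same target-group and rule setup ( the VPC ID and ARNs are hypothetical placeholders, shortened with ... ):
# one target group per application
aws elbv2 create-target-group --name orders-tg --protocol HTTP --port 80 --vpc-id vpc-0aaa1111
aws elbv2 create-target-group --name payments-tg --protocol HTTP --port 80 --vpc-id vpc-0aaa1111
# path-based rule on the ALB listener : if /orders* then forward to orders-tg
aws elbv2 create-rule --listener-arn arn:aws:elasticloadbalancing:...:listener/app/myalb/... --priority 10 --conditions Field=path-pattern,Values='/orders*' --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/orders-tg/...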
======================================
ALB history :
[root@ip-172-31-30-198 orders]# history
1 yum install -y httpd
2 service httpd start
3 cd /var/www/html/
4 ls
5 mkdir orders
6 cd orders/
7 vi index.html
8 history
[root@ip-172-31-30-198 orders]#
[root@ip-172-31-30-198 orders]#
===============================
*****CLB vs ALB: Difference between Classic Load Balancer and Application Load balancer
***** ALB vs NLB; Difference between Network Load Balancer and Application Load balancer
CLB :
1. IP and Port based route mapping
2. No target groups
3. There are no rules and conditions.
4. It is Layer 4 (Transport Layer) in the OSI model.
5. Able to route using the DNS link.
ALB :
1. Path-based route mapping.
2. Target groups are available here, and we apply conditions for every target group.
3. Rules and conditions are applied here.
4. It is Layer 7 (Application Layer) in the OSI model.
5. Able to route using the DNS link / with a particular path.
Note : we are able to convert a Classic Load Balancer to an Application Load Balancer, but not an Application Load Balancer back to a Classic Load Balancer.
Note : in real time we map Route53 instead of the raw DNS link to connect the applications.
=========================
NLB :
Here IP- and port-based route mapping is used.
In a Network Load Balancer we create only one Target Group for all Applications.
There are no Rules and Conditions in a Network Load Balancer.
It is mandatory to take an Elastic IP address for every Application in a Network Load Balancer.
=========================
S3 : simple storage service.
The purpose of s3 is storage.
S3 : object level storage.
S3 : objects are stored as Key, Value pair.
S3 is global ( bucket names are unique across all regions ).
S3 : objects are stored in Buckets.
we can create 100 buckets per region.
27 * 100 = 2700 buckets are available in one aws a/c.
Each bucket has a storage limit of 10 TB.
2700 * 10 = 27000 TB of space is available in one aws a/c.
For objects in an S3 bucket we are able to do upload, download, delete, rename, encryption, make public, copy, move, folders ...etc.
Buckets ==>>>> objects ===>>> maintaining the versioning.
Buckets ==>>> versioning should be enabled.
S3 has the storage classes. ********
there are 4 types of storage classes in S3.
1. Standard : regular access; it is the default storage class.
2. Standard IA ( Infrequent Access ) : objects accessed every 3 months / 6 months ..etc.
3. Reduced Redundancy : accessed yearly once or once every two years.
4. Glacier : artifacts; used for backup; after some time these objects are automatically deleted.
Purpose of Storage Classes : to optimize storage cost as objects move from one storage class to another.
Eg : transfer 1 GB of data from the Standard storage class to the Standard IA class.
We apply rules on the storage classes; these rules are Lifecycle Rules.
S3 has lifecycle rules :
They are used to move objects from one storage class to another storage class to optimize storage cost on the S3 storage classes.
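A minimal sketch of a lifecycle rule with the AWS CLI ( the bucket name s3ram2 and the day counts are assumed for illustration ):
# move objects to Standard-IA after 90 days and to Glacier after 365 days
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "optimize-storage",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [
      {"Days": 90,  "StorageClass": "STANDARD_IA"},
      {"Days": 365, "StorageClass": "GLACIER"}
    ]
  }]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket s3ram2 --lifecycle-configuration file://lifecycle.json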
S3 Buckets maintain Versioning.
S3 : CRR / DR these two for High Availability in between regions
CRR : Cross Region Replication / DR (Disaster Recovery management).
Singapore region ( bucket1) and Sydney region ( bucket2)
If I add some files in the Singapore region bucket, they are automatically reflected in the Sydney region bucket.
We need to specify the Source and Destination.
If any object is uploaded in the Source bucket, the same object is reflected in the Destination bucket.
But if any object is uploaded in the Destination bucket, it is not reflected in the Source bucket.
CRR : to maintain high availability and backup.
If we want to apply CRR we need to follow the rules below:
1. versioning should be enabled for both buckets.
2. both buckets should be made public.
3. RTC ( Replication Time Control ) should be enabled.
4. we need to create a Role : if we want bucket1 to communicate with bucket2, then we need to create a Role.
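Rule 1 as a rough AWS CLI sketch ( bucket1 / bucket2 are the placeholder names from above ):
# versioning must be enabled on both the source and destination buckets
aws s3api put-bucket-versioning --bucket bucket1 --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket bucket2 --versioning-configuration Status=Enabled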
S3 : static website hosting ..
Note : The Default Snapshots are stored in S3 Bucket
Scenario :
Developers raise a request to the IAAS Admin team ( ours ):
can you please test whether this application ( sbi -->> HL ) is statically hosted or not in the dev environment.
DEV / QA / UAT / PROD... environments.
S3 : an important key point is that APIs play a key role in s3.
APIs : 2 types
SOAP
RESTful
default snapshots are stored in s3 buckets.
S3 buckets are used in VPC Endpoints , VPC flowlogs , ELB logs.
S3 : shows how much data transfer speed there is in every region.
S3 ==>>> object level ===>> locking.
S3 ==>>> object level ===>> encryption.
KMS : Key Management Service; used for managing the keys for object-level encryption.
CloudTrail : used for tracking user history.
by using S3 we are able to check the data transfer speed in 27 regions.
also, we are able to check data transfer in Edge locations.
============================
EFS : Elastic File System.
In Linux we configured SSH and created 2 instances to transfer files ( through the SCP command ) and also installed applications by logging in as a remote user.
But the same thing with EFS:
Here there is no need to configure SSH to transfer files between two instances.
Without SSH configuration and SCP commands, we are able to transfer files between 2 instances.
Note : by using EFS we are able to transfer data but not able to install applications.
EFS : in Linux it is called NFS ( Network File System ); here we create two Linux servers and we create a network-level mount point on both instances.
EFS : Elastic File System ===>>> Network File System (NFS); here we create a network-level mount point.
Whenever we create a common mount point on 2 instances ( a network-level mount point ), we need to create a common directory on the two instances.
first ec2 ( create a ramakrishna directory in the 1st instance ) --->> second ec2 ( create a ramakrishna directory in the 2nd instance )
a file xyz --->> automatically goes to the second ec2 instance's xyz.
EFS : security for EFS is provided by Security Groups.
We are able to encrypt the data by using EFS.
Note : here there is no need to configure SSH and no need to use the SCP command because we are using a network-level mount point.
EFS : thumb rule
1. EC2 instances must be in different availability Zones.
2. After logging in to the EC2 instances we follow the common steps below:
1. install the EFS package.
2. create one directory ===>> mkdir ramakrishna; it is used as the mount point ===>> create the elastic file system.
3. create the mount point ( network-level mount point ); it is attached to both Instances.
4. cd ramakrishna -->> touch {1..9}
second instance ===>> cd ramakrishna
ls
the 9 files created in the 1st instance's directory are visible in the 2nd instance's directory, and vice versa.
==================
EFS history :
[root@ip-172-31-2-241 ramakrishna]# history
1 yum install -y amazon-efs-utils
2 mkdir /ramakrishna
3 sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-7ae54e42.efs.ap-southeast-2.amazonaws.com:/ /ramakrishna
4 df -h
5 cd /ramakrishna/
6 touch {a..z}
7 mkdir one sachin yuvi two
8 ls
9 history
[root@ip-172-31-2-241 ramakrishna]#
[root@ip-172-31-2-241 ramakrishna]#
===========================
===>> FAQ : Difference between EBS vs S3 vs EFS *******
EBS : block level storage ( the ec2 instance and volume should be in the same AZ )
1. mount points.
2. SNAPSHOTS
mount points ==>>> application install
3. data encryption is available.
4. no lifecycle rules ===>>> backup ==>> Data Lifecycle Manager.
5. EBS ==>> volume types.
6. no storage classes available here.
=========================
S3 : object level storage
1. Buckets ( no need of EC2 instances ).
2. CRR ( Cross Region Replication ).
3. S3 objects are stored in BUCKETS as Key-Value pairs, in any kind of format: image, file, text, pdf, .dat, etc.
4. data encryption is available.
5. lifecycle rules are available.
6. default snapshots are stored in S3 buckets.
7. we can see the data transfer speed in all regions.
8. S3 has storage classes.
9. we do not install applications here.
10. static websites are hosted here.
========================
EFS : network level storage ( EC2 instances must be in different availability zones )
1. network-level mount points.
2. backup also here ===>> if one instance is terminated ===>> the backup is available on the other instance,
because of the common mount point.
3. storing files only.
4. data encryption is available.
5. lifecycle rules are available.
6. we do not install applications here.
=========================
Autoscaling group :
The main purpose of an Autoscaling group is to provide high availability for the application.
Autoscaling group ==>>> scales our application.
Autoscaling group : adding a number of instances to our infrastructure -->> group.
There are two scaling types:
1. Vertical scaling : increasing the instance size, like CPU , RAM , hard disk ..etc.
2. Horizontal scaling : increasing / adding the number of instances / servers in our infrastructure.
The Autoscaling group follows horizontal scaling.
Autoscaling group ===>> based on the scale-out and scale-in policies ==>> adds / terminates instances in our infrastructure.
scale out : adding instances to the autoscaling group.
scale in : terminating instances in the autoscaling group.
default metrics : CPU utilization , disk utilization , network utilization.
Based on the default metrics we apply the conditions.
conditions are sum , average , count , min , max , < , > , <= , >=
===============================================
example : Flipkart mega sale ==>> lots of users hit the website ==>> incoming traffic increases ==>> cpu , disk , network utilization ===> increases ==>> instances go into a hung state ==> App ==>> not reaching the EU ==>>> BUSINESS impact.
To overcome the above scenario ==>> the Autoscaling group comes into the picture.
Flipkart mega sale ==>> many users hit -->> Elastic Load Balancer ==>> cpu utilization > 70 % ==>> an instance is added.
Flipkart mega sale ==>> fewer users hit -->> Elastic Load Balancer ==>> cpu utilization < 70 % ==>> an instance is terminated.
Flipkart website ==>> incoming traffic ==>> increases ==>> instances are added automatically.
incoming traffic ==>> decreases ==>> instances are terminated automatically.
===================================================
Autoscaling group ==>>> minimum requirements :
1. First we take one ec2 instance and install one application.
2. we take an AMI / image of the created ec2 instance.
3. we create one load balancer ( CLB ).
4. alerts ==>>> email ==> SNS ( simple notification service ) ==>> create a topic and subscribe to it.
5. we create a Launch Configuration ==>> using our created AMI.
Launch configuration ==>> normal ec2 instance creation steps.
6. we create the Autoscaling group.
Autoscaling group ===>>> 3 fields :
1. minimum no. of instances == ?? ==>> 2
2. maximum no. of instances == ?? ===>> 5
3. desired capacity == the instances always available in the ASG ==>>> this value must be between the minimum and maximum ==>> 2
sbi app ==>> sbi.com ==>> ELB ===>> cpu/disk/network > 90 % ==>> 1 instance added.
sbi app ==>> sbi.com ==>> ELB ===>> cpu/disk/network < 90 % ==>> 1 instance terminated.
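A rough AWS CLI sketch of the minimum setup ( the names and IDs are hypothetical placeholders ):
# launch configuration from our custom AMI
aws autoscaling create-launch-configuration --launch-configuration-name sbi-lc --image-id ami-0aaa1111 --instance-type t2.micro
# ASG with min 2 , max 5 , desired 2 , attached to the CLB
aws autoscaling create-auto-scaling-group --auto-scaling-group-name sbi-asg --launch-configuration-name sbi-lc --min-size 2 --max-size 5 --desired-capacity 2 --availability-zones ap-south-1a ap-south-1b --load-balancer-names sbi-clb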
===============
IAM : Identity Access Management
The purpose of IAM is to provide security to the AWS resources / services.
1. Create the users
2. Create Groups
3. Provide the policies ( permissions )
4. Create the Roles
5. Identity providers ( social websites like Twitter , LinkedIn , Facebook ..etc )
6. MFA : Multi Factor Authentication.
1. users :
How to create users in AWS ??
there are two types of users in AWS.
A. Admins : these users have AWS console access ( aws dashboard ) : username ( Email ) and password / MFA ( Multi Factor Authentication ).
B. Developers : these users have AWS CLI access ( programmatic access ) : username ( access key ) and password ( secret key ).
password ==>> LDAP / AD ==>>> free tier ==>>> custom password ==>>> we create the password.
create the user ==>>> a link is generated and an excel sheet is also generated ==>> emailed to us.
2. groups : adding a number of users into groups.
3. policies ( permissions )
read , write , read-only , full , administrator etc..
AWS by default gives some policies.
By using policies we provide security to the AWS resources / services.
we can also create our own policies; those are called custom policies.
To create custom policies we write a JSON script, which is developed by developers.
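A minimal sketch of such a custom policy ( the JSON, policy name and account ID are assumed for illustration ):
# JSON script for an S3 read-only custom policy
cat > s3readonly.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:Get*", "s3:List*"],
    "Resource": "*"
  }]
}
EOF
aws iam create-policy --policy-name MyS3ReadOnly --policy-document file://s3readonly.json
aws iam attach-user-policy --user-name ramakrishna --policy-arn arn:aws:iam::111122223333:policy/MyS3ReadOnly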
4. Roles :
In AWS, when one service has to communicate with another service, we need to create a role.
ec2 instance ( service1 ) and S3 ( service2 ) : if one service has to communicate with the other, then we need to create a role.
5. Identity providers :
we integrate the AWS a/c ==>> with social media ==>> Twitter , fb , LinkedIn ..etc.
AWS ==>>> application ===>> business ==>>> online ==>> run ==>>> digital marketing.
6. MFA : multi factor authentication
1. mobile number ==>>> integrated with the AWS a/c ==>> every time we log in to the AWS a/c we get an OTP ==>>> enter it ==>> now you are in the aws a/c.
2. Google Authenticator ==>> integrated with the AWS a/c ==>> every time we log in ==>> 6-digit code ==>>> enter it ==>> now you are in the aws a/c.
==================================
[root@ip-172-31-7-199 ec2-user]# history
1 aws s3 ls
2 aws s3 mb s3://demos31
3 aws s3 mb s3://demos33
4 aws s3 mb s3://s3demo123
5 aws s3 mb s3://s3bhargavi001
6 aws s3 mb s3://s3bhargavi002
7 aws s3 ls
8 aws s3 rb s3://s3bhargavi002
9 aws s3 rb s3://s3bhargavi001
10 aws s3 rb s3://demos30
11 history
[root@ip-172-31-7-199 ec2-user]#
[root@ip-172-31-7-199 ec2-user]#
=========================
RDS : Relational Database Service.
A database is a collection of information / data.
data is stored in table format.
Table data is stored row- and column-wise.
RDS is used to store application logs in a database.
In AWS, databases are provided by RDS and it has 6 engine types.
( from an interview point of view, we say we are currently working with MySQL. )
The MySQL port number is 3306.
RDS is a Client / Server architecture.
RDS thumb rule: the RDS client and server must be in the same availability zone.
RDS server creation steps :
Go to the RDS screen -> click on Create Database -> two options are shown; select one of them:
Easy Create or Standard Create ( normally we go with Standard Create ) -> select MySQL -> then select a Template: Production, Dev/Test or Free tier ( here select the free one ).
The MySQL default version is 8.
Default Username : admin
Password : LDAP / AD; in the free tier this is a custom password.
default MySQL storage size : 20 GB.
default VPC , subnet , security group ( all traffic )
database ==>> not accessible to the public.
For RDS we can set up autoscaling, take backups and maintenance, attach IAM roles, upgrade, and take snapshots.
create rds server.
endpoint link.
==========
we will create the RDS client.
1. we create one normal ec2 instance and log in to it.
2. we install the mysql package.
3. mysql -h endpointlink -P 3306 -u admin -p
=============
RDS client history :
[root@ip-172-31-44-228 ec2-user]# history
1 yum install -y mysql
2 mysql -h bhargavidb123.cbxxszt3ceny.ap-southeast-2.rds.amazonaws.com -P 3306 -u admin -p
3 history
[root@ip-172-31-44-228 ec2-user]#
[root@ip-172-31-44-228 ec2-user]#
==========================
Monitoring tools : CloudWatch, New Relic, Datadog, etc...
1. In AWS, CloudWatch monitors the infrastructure.
2. CloudWatch can monitor the default metrics, i.e. CPU, Network and Disk utilization ( we get alerts based on the threshold ).
3. CloudWatch does not monitor Memory and volumes; for that we need to install an agent on the server using scripts ( Perl, shell, python scripts prepared by developers ).
The volume and memory metrics we prepare are called custom metrics.
4. CloudWatch has two types of Monitoring:
1. Basic monitoring ( monitors AWS services every 5 minutes; it is the default monitoring type and it is free of cost ). In the free tier we use this one.
2. Detailed Monitoring ( monitors AWS services every 1 minute; it is not the default monitoring type and it is a purchasable one ). In real time we use this one.
CloudWatch monitors not only the default metrics but also all AWS services like EC2, ELB, ASG, EFS, S3, RDS, etc...
in CloudWatch we create Alarms by using SNS ( Topic, Subscribe ).
these Alarms are used to get an alert if any instance add / terminate / stop / launch fails, based on the threshold value.
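A rough AWS CLI sketch of such an alarm ( the instance ID, SNS topic ARN and threshold are hypothetical placeholders ):
# alert when average CPU > 70 % for 2 periods of 5 minutes
aws cloudwatch put-metric-alarm --alarm-name high-cpu --namespace AWS/EC2 --metric-name CPUUtilization --statistic Average --period 300 --evaluation-periods 2 --threshold 70 --comparison-operator GreaterThanThreshold --dimensions Name=InstanceId,Value=i-0aaa1111 --alarm-actions arn:aws:sns:ap-south-1:111122223333:mytopic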
==============
Application Monitoring: (the tools are NewRelic, Datadog )
1. It collects the application logs.
2. Availability of application.
3. It finds the traffic / request count of the application.
4. Heap and JVM size. (Memory Utilization metrics)
5. 503 and 403 errors.
6. These tools are third party tools, and we need to integrate with application server (EC2 Instance.)
=============================================
GoDaddy --->>> IP Address -->> domain provide.
xyz.com --- >> GoDaddy -->>> IP add -->>> xyz.com
DNS: Domain naming service.
for Domain Registration we will use Route53.
Route53 -->> AWS manager -->> domain register -->>> cyber security team -->> IP and domain name.
Dns -->> ip -->> host
host -->> ip.
===========================
CloudTrail : we use it for Account Auditing purposes, and we can also call it an Issue Tracking tool.
eg : user1 creates an EC2, user2 creates an ELB, user3 creates an RDS, and user N creates ...... Now we can check all users' total history by using CloudTrail, with the help of the IAAS Admin team.
Eg : if one user deletes a subnet from one EC2 Instance, by using CloudTrail we are able to find out this specific user's actions.
it is managed by the AWS control Manager.
we are able to store these CloudTrail logs in an S3 Bucket.
============================
Terraform :
How do we create our infrastructure?
We use Infrastructure as Code (IaC) tools for creating infrastructure.
Using IaC we automate the creation of infrastructure.
Popular IaC tools are Terraform and AWS CloudFormation.
Automation provides lots of benefits:
Easy to create identical environments, like dev, qa, uat, prod.
Before creation we can review the code and follow best practices.
We can reuse templates in other projects as well.
We can easily troubleshoot infrastructure-related issues.
Using Terraform we describe the desired state of our infra in a configuration and then we execute it.
Terraform has its own language to write Terraform scripts, but it's easy to learn.
Terraform supports lots of providers, like aws, azure, gcp, digital ocean, and many more..
=============================
echo $"export PATH=\$PATH:$(pwd)" >> ~/.bash_profile
source ~/.bash_profile
variable "vpc_cidr" {
default = "10.0.0.0/16"
}
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
instance_tenancy = "default"
tags = {
Name = "main-terraform"
CostCenter = "KHPAP-09876"
Banglore = "Banglore"
}
}
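To execute the configuration above, the usual Terraform workflow is:
terraform init       # download the aws provider plugins
terraform plan       # review what will be created
terraform apply      # create the VPC described above
terraform destroy    # tear the infrastructure down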
https://www.terraform.io/downloads
==================
How to create an AWS account ???
email address , mobile number , debit card ( master / visa )
Google ===>> aws console login ==>> AWS management console ===>> sign in to console ==>>> create new account ==>> email : xyz , password : 1234 , confirm password : 1234 ==>> AWS account name ==>> continue.
AWS free tier ( select ) ===>> personal ( select ) or professional / business account ===>>> address details ==> d.no , street , pincode , state , nearby landmark.
==>>> debit card / credit card details ==>> 16-digit number ==>> CVV ===>>> OTP ( Rs 2/- ) ===>>> do you have a pancard : no ===>>
AWS ==> verification ==>> country ===>> India ==>> mobile number ===>> voice message / text message ( select ) ==>> 4 digits ( 5896 )
my role is : student
you are interested in : other.
AWS console login ==>> sign in ==>> email id with password ===>>
AWS account ==>> active immediately / within 24 hours.
================================================
Linux servers : we will create linux servers in AWS account
To connect to Linux servers ===>> we need to install two softwares / applications on our laptop:
1. gitbash
2. putty
========================================
Linux : it is an operating system like windows.
Linux is a process-oriented operating system.
Datacenter : physical linux servers ===>> hardware ===>> install the o.s ( linux ) ===>> install the application and database ==>> APP ===>> EU.
AWS ==>> Cloud ===>>> AMI ( amazon machine image ) ==>> EC2 instance ==>>> install the application and database ==>> APP ===>> EU.
Unix : operating system.
Unix has 4 types of flavours :
1. Sun Solaris 2. Redhat Linux ( open source and free of cost ) 3. IBM-AIX 4. HP-UNIX
Apart from Redhat Linux, the other three are enterprise versions ===>> license purchase is mandatory.
windows :
C:/ : operating system install ===>> Admin user.
GUI mode operations ( Graphical user interface) ===>> clicks.
files and folders
NTFS filesystem ( new technology file system)
Linux :
/ ==>>> root ===>>> operating system install ===>>> rootuser / parent user / super user / Admin user.
CLI mode operations : ( Command Line interface) ===>>> commands to type.
files and directories
ext2 , ext3 , ext4 ( latest ) ==>> file systems.
ext2 ===>> second extended file system.
ext3 ===>> third extended file system.
ext4 ===>> fourth extended file system.
========================================
ec2 : Elastic Compute Cloud : ec2 ==>>> virtual machine ==>>> ec2 instance.
AMI ==>> Amazon machine image
Every AMI has its own identification number ===>> AMI ID.
Every operating system has its own AMI ==>> for o.s install.
security group :
It is a virtual firewall at the ec2 instance level.
it contains a set of rules.
every application has its own port number.
A Security Group defines which traffic is allowed To or From EC2 Instances.
all ports are in the range ==>>> 0 to 65535
ec2-user ( default user ) ===>> to be able to log in we need to add a rule ==>> ssh ( mandatory )
ssh ==>> port number ==>>> 22.
http ==>> port number ==>> 80
mysql ==>>> database ==>> 3306
=========================================
whenever you create an ec2 instance ==>> two IP addresses come automatically.
1. public IP address : used to log in to the ec2 instance and for the end-user to access the application.
this is visible only in the AWS console dashboard.
2. private IP address : it is used for internal communication.
this is visible in both the AWS console dashboard and the ec2 instance.
=========================================
keypair ===>>> ramakrishna ===>> download ===>>> extension ====>>> ramakrishna.pem ==>> pemfile.
the pemfile contains the private key.
after launching the ec2 instance ===>> a public key is created by default.
private key and public key -->>> match --->>> the default user is able to log in to the ec2 instance ( ec2-user ).
==========================
Linux basic commands :
Files and directory operations :
Files operations :
cat > filename
I am new to linux ..
ctrl + d ==> save.
ex: cat > ramakrishna
I am new to linux , devops , aws
ctrl + d ==>> save..
==>>>> list of files ==>> ls
file identification ==>> ls ==> file ==>>> white color.
ls -l ==>> first field ==>> - ( hyphen)
2. How to append data to a file
cat >> filename
S3 and RDS ..
ctrl + d ==>> save
3. How to view inside data in a file.
cat filename
cat ramakrishna
4. How to copy a file from one location to another location.
cp source destination
note : the destination must be a directory.
mkdir directoryname
mkdir sachin
ex: cp ramakrishna sachin
5. How to move a file from one location to another location.
mv source destination
note : the destination must be a directory.
mkdir directoryname
mkdir yuvi
ex: mv ramakrishna yuvi
6. How to rename a file.
mv oldname newname
mv ramakrishna srinivas
7. How to create empty files.
touch filename
touch abc
touch {a..m}
8. file1 ==>> has data and file2 is an empty file ==>> copy file1's data to file2 ==>> redirect.
cat file1 > file2
cat ramakrishna > abc
9. How to delete a file.
rm -rf filename
rm -rf ramakrishna
=======================================
directory operations :
how to create a directory ??
mkdir directoryname
mkdir sachin
ls ==>> directory color ==>> blue.
ls -l ==>> first field ( d)
pwd ==>> present working directory..
cd ==>> change directory..
cd sachin
pwd
/home/ec2-user/sachin
mkdir rahul
cd rahul
pwd
/home/ec2-user/sachin/rahul
mkdir hardik
cd hardik
pwd
/home/ec2-user/sachin/rahul/hardik
mkdir lara
cd lara
pwd
/home/ec2-user/sachin/rahul/hardik/lara
cd ..
/home/ec2-user/sachin/rahul/hardik
cd ..
/home/ec2-user/sachin/rahul
cd ..
/home/ec2-user/sachin/
cd ..
/home/ec2-user/
mkdir -p /home/ec2-user/sachin/rahul/hardik/lara/ponting
cd /home/ec2-user/sachin/rahul/hardik/lara
pwd
/home/ec2-user/sachin/rahul/hardik/lara
cd ../../../../
pwd
/home/ec2-user/
================================
how to rename a directory
mv oldname newname
mv sachin dhoni
how to delete a directory
rm -rf directoryname
rm -rf sachin
===================================================
filter commands :
files / directories / users / groups
useradd ramakrishna
useradd bhargavi
user related information ==>>> /etc/passwd
head : top 10 users to display ==>> head /etc/passwd
tail : below 10 users to display ==>> tail /etc/passwd
more : page by page ===>> more /etc/passwd ==>>> press the space button ==>> at the last page ==>> it exits automatically.
less : page by page ==>> less /etc/passwd ==>>> press the space button ==>> at the last page ==>> it does not exit ==>>> press the q button ==> quit
========================================
vi editor :
files ===>>> create; within the files ==>> data ==>>> modify and delete by using the vi editor.
vi editor has 3 types of modes..
1. CLI mode.
2. Insert mode.
3. Extended mode.
vi ramakrishna ====>> CLI mode
press " i " key ==>> insert mode.
I am new to Linux..
escape shift:wq! ===>>> save ===>>> extended mode.
cat ramakrishna
========================================
grep and find :
10 files
ramakrishna
ls -l | grep ramakrishna
ls -l | grep 123
ls -l | grep abc
ls -l | grep a
ls -l | grep A
-i ==>> ignore case
ls -l | grep -i A
find :
find / -options keyword
options :
1. files
2. directories
3. users
4. groups
5. inum ==>>> inode number ==>> 4 digit number.
find / -name ramakrishna
find / -name sachin
find / -user pavan
find / -group aws
find / -inum 1234
=================================================================
files and directory permissions : ===>> security
security ===>> userlevel , grouplevel , otherlevel..
ls -l
- ==>> file
d ==>> directory
c ==>> character file
b ==>> block file
l ==>> link file.
rw- ( userlevel) r-- (grouplevel ) r-- ( otherslevel)
r ==>>> read ===>> 4
w ==>> write ===>> 2
x ==>> execute ==>> 1
By using the chmod command ==>> change mode.
There are 2 types of methods for giving file and directory permissions:
1. symbolic method.
2. Absolute method.
======================
1. symbolic method.
file ==>>> bhargavi
userlevel 6 , grouplevel 3 , otherslevel ==>> 5
chmod u=rw,g=wx,o=rx bhargavi
sachin ==>>> 7 ( userlevel ) 6 ( group level ) 4 ( otherlevel )
chmod u=rwx,g=rw,o=r sachin
=========================================================
2. Absolute method.
yuvi ==>> 655
chmod 655 yuvi
chiru ==>> 666
chmod 666 chiru
abc ==>> only userlevel full permissions..
chmod 700 abc
xyz ==>> group level full permissions..
chmod 070 xyz
chmod 007 ponting..
=============================================
file full permissions : 666
directory full permissions : 777
default file permissions : 644
default directory permissions : 755
umask ==>> 022 / 0022
666 - 022 ==>> 644
777 - 022 ==>> 755
================================================
Booting process :
ex: windows ==>> press the power-on button ==>> password screen ==>> the process in between the power-on button and the password screen is the booting process..
Linux ==> press the power-on button ==>> then the booting process starts.
Booting process has 6 stages :
1. BIOS : Basic input output system.
2. MBR : Master boot record.
3. GRUB : Grand unified bootloader.
4. KERNEL :
5. INIT : initialization.
6. RUNLEVELS :
1. BIOS : Basic input output system.
It performs the system integrity check.
system integrity check ==>> checks the system's hardware ==>> motherboard , cpu , ram , harddisk ==>> working properly or not ??
2. MBR : Master boot record:
It contains the boot loader and the partition table information..
MBR has 3 components
1. Primary bootloader. ==>> 446 bytes.
2. Partition table information. ==>> 64 bytes.
3. MBR validation check. ==>> 2 bytes.
MBR size ==>> 512 bytes.
3. GRUB : Grand unified bootloader.
GRUB contains the following information:
Root device information ===>> /dev/xvda
multiple kernel images ==>> 5 , 6 , 7 , 8 , 9
default ( boot entry ) ===>> ???
timeout ===>> ???
grub contains one configuration file ===>> /boot/grub/grub.conf
vi /boot/grub/grub.conf
/boot/grub/grub.conf ==>> this configuration file is linked to /etc/grub.conf.
4. KERNEL :
It is the mediator between o.s and hardware.
it is the heart of the operating system.
It manages device information , multitasking , and filesystem information.
5. INIT :
It is the parent of all processes.
each process has its own unique identification number.
process ==>> unique id ==>> process id ==>> PID
init ==>> pid ==>> 1
pid 0 ==>> the kernel scheduler ( swapper ) , not a normal user process.
init 0 ===>> shutdown / halt. ( danger command.)
init 1 ===>> single user mode ( trouble shoot )
init 2 ===>> multiuser mode without network ( networking related commands do not work )
init 3 ===>> multiuser mode with network ( networking related commands work here ) ==>> default init level
init 4 ===>> unused.
init 5 ==>>> X11 ( GUI mode )
init 6 ===>>> reboot ==>> danger command ===>> take approval from the respective people first.
vi /etc/inittab
/etc/init.d ==>> scripts..
6. RUNLEVELS :
shell scripts ( application install or backup ) ==>> the scripts are placed inside the runlevel directories.
/etc/rc.d/rc0.d ==>> runlevel 0
/etc/rc.d/rc1.d ==>> runlevel 1
/etc/rc.d/rc2.d ==>> runlevel 2
/etc/rc.d/rc3.d ==>> runlevel 3 ==>>> default runlevel..
/etc/rc.d/rc4.d ==>> runlevel 4
/etc/rc.d/rc5.d ==>> runlevel 5
/etc/rc.d/rc6.d ==>> runlevel 6
vi /etc/rc.d/rc3.d/.backup.sh ==>> place a backup script here ==>> on every reboot it runs and you get a complete backup of the linux server.
/etc/init.d ==>> scripts.. ==>>> app ==>> service ==>> manage.
=========================================
AWS ==>> the alternative to runlevel scripts is the user data script ==>> it runs when the instance launches.
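A minimal hedged user data sketch ( the package and page content are examples, not from these notes ):
#!/bin/bash
yum install -y httpd # install the web server at first boot
service httpd start # start it for this session
chkconfig httpd on # start automatically on every reboot
echo "hello from user data" > /var/www/html/index.html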
=========================================
Partitioning / filesystem creation :
dividing the hard disk into a number of partitions..
500gb harddisk ===>> 10 partitions ==>> each partition has a size of ==>> 50 gb..
Physical servers point of view :
device naming conventions :
/dev ==>> devices information.
/dev/sda ==>> SCSI
/dev/hda ==>> IDE
/dev/vda ==>> virtual disk..
4 , 8 , 12 , 16.
Each physical linux server ==>> up to 16 hard disks can be attached to one linux server..
/dev/sda to /dev/sdp
/dev/sda to /dev/sde ==>> used internally by the o.s..
externally we attach disks to the physical linux server ==>> /dev/sdf to /dev/sdp..
Linux ==>> file system types ==>> ext2 , ext3 , ext4 ( latest )
scenario :
Application team ==>> raises a request to the linux admin team ==>> 500 gb ==>> disk space ( hard disk ) ==>> file system ==>> app5 ==>> mount point ==>> application install.
Linux admin team ==>> raises a request to the SAN ( storage area network ) team ==>> please attach a 500 gb hard disk to lx123 ( linux server name ).
SAN team raises a request to the data center people ( field engineers ) ==>> attach a 500gb hard disk to lx123 ==>> they attach the 500gb hard disk to the linux server.
Linux admin team follows below steps..
1. fdisk -l ( o.s control )
2. partprobe /dev/sdf ==>>> kernel identification.
3. mkfs.ext4 /dev/sdf ==>> creating the file system.
4. mkdir app5
5. mounting : attaching a directory to the file system ; that directory is called the mount point.
mount -t ext4 /dev/sdf app5
6. cat /etc/mtab ==>> temporary mount points.
7. How to make a permanent mount ??
vi /etc/fstab
devicename mountpoint typeoffilesystem defaults 0 (dump) 0 ( check sequence)
/dev/sdf /home/ec2-user/app5 ext4 defaults 0 0
esc shift:wq!
8. cd app5
ls
lost+found ==>> directory..
touch {a..e}
reboot
=========================================================
AWS cloud : EBS ==>> Elastic Block Store.
disk space ===>> volume
Application team ==>> raises a request to the linux admin team ==>> 500 gb ==>> volume ==>> filesystem ==>> app5 ==>> mount point ==>> application install.
EBS thumb rule :
Ec2 instance and volume should be in same availability zone.
Ec2 instance ==>> 1a ==>> AZ
volume ==>> same AZ ( 1a ) ==> 500 gb
we will attach this volume to ec2 instance
volumes ==>> up to 16 volumes can be attached to one ec2 instance.
/dev/sda to /dev/sdp.
/dev/sda to /dev/sde ==>> used internally by the o.s.
volumes attach to the ec2 instance as ==>> /dev/sdf to /dev/sdp. (11)
After login into the ec2 instance ==>> the device naming convention displays differently ==>> /dev/xvdf to /dev/xvdp.
Linux admin team follows below steps..
1. fdisk -l ( o.s control )
2. lsblk ==>>> kernel identification.
3. mkfs.ext4 /dev/xvdf ==>> creating the file system.
4. mkdir app5
5. mounting : attaching a directory to the file system ; that directory is called the mount point.
mount -t ext4 /dev/xvdf app5
6. cat /etc/mtab ==>> temporary mount points.
7. How to make a permanent mount ??
vi /etc/fstab
devicename mountpoint typeoffilesystem defaults 0 (dump) 0 ( check sequence)
/dev/xvdf /home/ec2-user/app5 ext4 defaults 0 0
esc shift:wq!
8. cd app5
ls
lost+found ==>> directory..
touch {a..e}
reboot
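A hedged tip: device names like /dev/xvdf can change, so mounting by UUID in /etc/fstab is safer. blkid prints the UUID ( the UUID below is yours to substitute ):
blkid /dev/xvdf ==>> prints something like UUID="..." TYPE="ext4"
/etc/fstab line: UUID=<your-uuid> /home/ec2-user/app5 ext4 defaults 0 0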
======================================
ebs history :
[root@ip-172-31-32-34 ec2-user]# history
1 fdisk -l
2 lsblk
3 mkfs.ext4 /dev/xvdf
4 mkdir app5
5 mount -t ext4 /dev/xvdf app5
6 cat /etc/mtab
7 vi /etc/fstab
8 df -h
9 cd app5/
10 ls
11 touch {a..z}
12 ls
13 cd ..
14 fdisk -l
15 lsblk
16 mkfs.ext4 /dev/xvdg
17 mkdir app6
18 mount -t ext4 /dev/xvdg app6
19 cat /etc/mtab
20 vi /etc/fstab
21 ls
22 cd app6
23 ls
24 touch {1..20}
25 ls
26 cd ..
27 history
[root@ip-172-31-32-34 ec2-user]#
[root@ip-172-31-32-34 ec2-user]# cat /etc/fstab
#
UUID=26620198-186a-404b-b9a1-12d957d7c826 / xfs defaults,noatime 1 1
/dev/xvdf /home/ec2-user/app5 ext4 defaults 0 0
/dev/xvdg /home/ec2-user/app6 ext4 defaults 0 0
[root@ip-172-31-32-34 ec2-user]#
[root@ip-172-31-32-34 ec2-user]#
=====================================================
Networking:
Networking means two or more systems connected to each other, with the systems in the same Network.
systems are nothing but servers.
From the physical server point of view, the place where we manage a group of servers from a single place is called a Data Centre or On-Premise infrastructure.
If two servers are to be in the same network we need some minimum requirements:
1. The two servers must be cabled to each other ( the cable names are RJ45 , CAT5 ).
2. Each server has at least one NIC card ( Network Interface Card / Controller ).
3. Each NIC card has one IP Address and one Subnet mask.
4. After login into the physical server ==>> eth0 ===>> logical NIC name. Here one IP Address 192.168.0.1 along with one Subnet mask 255.255.255.0 is assigned to the NIC.
Note :- Each Linux server has at least one IP Address along with one Subnet mask.
NIC Slots Ex:-
NIC1 ==>> eth0 ( it is nothing but the logical NIC name )
NIC2 ==>> eth1
NIC3 ==>> eth2
These logical names are assigned based on the hardware slot the NIC is attached to..
5. Then these two systems are in the same network and can communicate with each other. How do we know the two systems are in the same network? Follow two steps:
a. Login to server1 then ping server2's IP address. Here we should see the ping sequence..
b. Login to server2 then ping server1's IP address. Here we should see the ping sequence..
now we can conclude these two servers are in the same network.
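A hedged example ( the IP is a placeholder ); -c limits the number of pings:
ping -c 4 192.168.0.2 ==>> sends 4 echo requests and stops.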
MTU : Maximum Transmission Unit.
==============================================================
Networking advantages:
1. We can transfer files from one server to another server.
2. We can install applications on one server from another server as a remote user.
We need to configure SSH to do the above.
SSH : secure shell here the PORT number is 22
SSH : secure shell advantages
1. We can transfer files from server1 to server2 in encrypted format, and
2. we can transfer files from server2 to server1 in encrypted format too ( the traffic is encrypted in both directions ).
3. No one can easily hack the files / systems if we use SSH.
4. SSH supports passwordless ( key-based ) authentication.
Eg:-
By using SSH keys we can connect from server1 to server2 without being asked for a password.
By using SSH keys we can connect from server2 to server1 without being asked for a password.
How to configure SSH configuration ??
central.pem ==>> privatekey.
server1 : central.pem ==>> privatekey ==>> copy.
1. vi /tmp/central.pem
paste the privatekey ==>> save
2. chmod 700 /tmp/central.pem
server2 : central.pem ==>> privatekey ==>> copy.
1. vi /tmp/central.pem
paste the privatekey ==>> save
2. chmod 700 /tmp/central.pem
===================================================
1. How to transfer files from one server to another server.
server1 to server2 ==>> files transfer
scp : secure copy
touch bhargavi
scp -i /tmp/central.pem filename ec2-user@server2IPaddress(public / private Ip):/home/ec2-user
scp -i /tmp/central.pem bhargavi ec2-user@50.20.10.5:/home/ec2-user
server2 to server1 ==>> files transfer
scp : secure copy
touch ramakrishna
scp -i /tmp/central.pem filename ec2-user@server1IPaddress(public / private Ip):/home/ec2-user
scp -i /tmp/central.pem ramakrishna ec2-user@60.20.10.5:/home/ec2-user
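A hedged extra ( the directory name is a placeholder ): -r copies a whole directory tree.
scp -r -i /tmp/central.pem mydir ec2-user@50.20.10.5:/home/ec2-user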
===============================================
2. How to log in remotely from one server to another server.
server1 to server2 ==>>> remote login.
ssh : secure shell
ssh -i /tmp/central.pem ec2-user@server2IPaddress(public / private Ip)
ssh -i /tmp/central.pem ec2-user@50.20.10.5 ==>> enter ==>> now you are in server2.
server2 to server1 ==>>> remote login.
ssh : secure shell
ssh -i /tmp/central.pem ec2-user@server1IPaddress(public / private Ip)
ssh -i /tmp/central.pem ec2-user@60.20.10.5 ==>> enter ==>> now you are in server1.
============================================================
ifconfig -a ==>> command
displays the NIC card logical name , up , running , MTU ( maximum transmission unit ) ==>> 9001
NIC ==>> mac address , IP address and subnet mask..
lo : loopback address ==>> self ping ===>> 127 ( series )
IP address ==>> private IP.
==>> How to change / assign the IPaddress of linux server ??
cd /etc/sysconfig/network-scripts
ls
ifcfg-eth0 ifcfg-eth1
vi ifcfg-eth0
IPADDR=192.168.20.5
save
service network restart
==>> How to change / assign the hostname of the linux server ??
vi /etc/sysconfig/network
HOSTNAME=xyz.com
save
service network restart
============================================
hostname
xyz.com
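A hedged note: on newer systemd-based systems the same change can be made with hostnamectl ( assumes systemd is present ):
hostnamectl set-hostname xyz.com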
=====================================================
[root@ip-172-31-46-139 network-scripts]# history
1 ping 54.250.156.121
2 ifconfig -a
3 ping 172.31.46.139
4 vi /tmp/kalpana123.pem
5 chmod 700 /tmp/kalpana123.pem
6 touch jyothsna
7 scp -i /tmp/kalpana123.pem jyothsna ec2-user@54.250.156.121:/home/ec2-user
8 ls
9 ifconfig -a
10 ssh -i /tmp/kalpana123.pem ec2-user@54.250.156.121
11 ifconfig -a
12 git --version
13 cd /etc/sysconfig/network-scripts/
14 ls
15 vi ifcfg-eth0
16 hostname
17 cat /etc/sysconfig/network
18 vi /etc/sysconfig/network
19 hostname
20 history
[root@ip-172-31-46-139 network-scripts]#
=====================================================
IP Address :
Each Linux server ( physical server ) has one IP Address along with one subnet mask..
A subnet mask always starts with 255.
In AWS we have two IP Addresses for each server: one Public and the other Private.
An IPv4 Address is made of 4 octets ( an octet is 8 bits ).
example IP Address : 192.168.5.10 ( here the address has four octets ).
Each octet is calculated as 8 bits.
4 * 8 = 32 bits.. so each IPv4 Address has 32 bits.
each octet holds a power-of-2 range of values ( 2^8 = 256 values , i.e. 0 to 255 ).
Each octet is stored in binary format like 01011010 ( computer readable format ).
We decide the IP Address class type based on the first octet.
Subnet : a large network can be divided into smaller networks ; each smaller network is called a Subnet.
Presently we are using IPv4.
IPv6 also exists, but these notes use IPv4.
IPAddress class types:
CLASS A : 0 to 127 ===>> 255.0.0.0 ==>> subnet mask ====>>> CIDR block ==>> /8
CLASS B : 128 to 191 ===>> 255.255.0.0 ==>> subnet mask ==>> CIDR block ==>> /16 ===>> VPC
CLASS C : 192 to 223 ===>> 255.255.255.0 ==>> subnet mask ==>> CIDR block ==>> /24 ==>> subnet.
CLASS D : 224 to 239 ==>> multicast.
CLASS E : 240 to 255 ==>> experimental / reserved.
127 + 64 ===>> 191
191 + 32 ===>> 223
CIDR block / Notation : we will decide the CIDR Block/ Notation based on the subnetmask..
CIDR : Classless Inter-Domain Routing.
An IP Address can be divided into two portions.
based on these portions, the IP Addresses are released.
1. Network portion ( static / constant , cannot be changed )
2. Host portion ( dynamic , changes every time )
1. Network portion ( static / constant ) ==>> the first 2 or 3 octets.
2. Host portion ( dynamic ) ==>> the last 2 octets or 1 octet.
In our own network ==>> how many IP Addresses are released, and how many ec2 instances can we create in this network ??
ex: 30.50.10.40 ==>> SBI network
1. Network portion ( static / constant ) ===>> first 2 octets ( 30.50 )
2. Host portion ( dynamic ) ==>> last 2 octets ==>> 2^16 ===>> 65,536 addresses.
30.50.10.40 ==>> SBI network ==>> in this network up to 65,536 IP addresses are released ==>> up to 65,536 ec2 instances can be created in this SBI network.. sample addresses ( each octet can only go up to 255 ):
30.50.11.40
30.50.12.40
30.50.13.40
30.50.14.40
30.50.200.40
30.50.200.41
30.50.200.42
30.50.200.43
=============================================================
ex: 90.50.40.25 ==>> HDFC network
1. Network portion ( static / constant ) ===>> first 3 octets ( 90.50.40 ) ==>> 2^24
2. Host portion ( dynamic ) ==>> last 1 octet ==>> 2^8 ===>> 256 addresses.
90.50.40.25 ==>> HDFC network ==>> in this network up to 256 IP addresses are released ==>> up to 256 ec2 instances can be created in this HDFC network.. sample addresses:
90.50.40.26
90.50.40.27
90.50.40.28
90.50.40.29
90.50.40.30
90.50.40.254
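A hedged quick check with plain bash arithmetic ( no extra tools assumed ):
echo $(( 2 ** 16 )) ==>> 65536 ==>> host addresses in a /16
echo $(( 2 ** 8 )) ==>> 256 ==>> host addresses in a /24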
=================================================
Package Administration / software management / package management.
windows ==>>> software such as ==>> vlc media player , pdf reader , ms office..
Linux ==>>> packages..
Package Administration ==>> LINUX ==>> two types of utilities..
1. RPM : Redhat package manager
2. YUM : Yellowdog Updater, Modified.
LINUX : RPM and YUM ==>> packages ==>> install , uninstall , verify , information , update , upgrade.
update ==>>> linux version 5.2 ===>>> linux version 5.5 ===>> patching.
upgrade ==>>> Linux version 5 ===>> linux version 6 ==>> upgrade.
Physical servers point of view :
1. RPM : Redhat package manager
step 1 :Physical linux server ===>> cd / dvd disk ===>>> group of packages copied into cd / dvd disk.
Physical linux server ===>> cd / dvd disk ===>>> insert ==>>> all packages ==>> copy to any location of the physical server.
location ==>> /var/ftp/pub/packages.
step 2 : go to the exact path of the available packages.
cd /var/ftp/pub/packages ===>> mandatory.
rpm -ivh packagename
i ==>> install , v ==>> verbose , h ==>> hash marks ( progress ).
rpm -ivh httpd
rpm -Uvh packagename ==>> upgrade ( capital U ).
rpm -Uvh httpd
rpm -qa packagename ==>> query installed packages.
rpm -qa httpd
rpm -qi packagename ==>> package information.
rpm -qi httpd
update ==>> rpm -Uvh ( installs or upgrades ).
upgrade ( freshen ) ==>> rpm -Fvh ( upgrades only if already installed ).
Key point : rpm will check the dependencies but will NOT install them for you..
httpd install ==>> dependent ==>> java ==>> first you need to install java and only after that can you install httpd.
RPM : drawbacks ==>> 1. you must be in the exact path of the packages. 2. no automatic dependency resolution.
To overcome the above drawbacks of RPM, YUM came into the picture.
2. YUM : Yellowdog Updater, Modified
step 1 :Physical linux server ===>> cd / dvd disk ===>>> group of packages copied into cd / dvd disk.
Physical linux server ===>> cd / dvd disk ===>>> insert ==>>> all packages ==>> copy to any location of the physical server.
location ==>> /var/ftp/pub/packages.
Repositories ===>> the place where a group of packages is managed.
we will create our own repositories.
/etc/yum.repos.d ===>> we create repositories here.
the repository file name must end with .repo
vi /etc/yum.repos.d/bhargavi.repo
[bhargavi]
name=bhargavi local repo
baseurl=file:///var/ftp/pub/packages
gpgcheck=0
enabled=1
esc shift:wq!
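A hedged check after saving the repo file:
yum repolist ==>> the new [bhargavi] repository should appear in the list.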
==>> yum install packagename
yum install httpd ==>> y/d/n ===>> type y.
yum install -y httpd
yum remove packagename
yum remove httpd
yum list
yum info packagename
yum info httpd
yum update -y
yum upgrade -y
===============================================================
AWS ==>> cloud.
1. cd / dvd disk ==>> no need to insert one ==>> these instances are virtual instances.
2. No need to create repositories ==>> AWS images ship with repositories pre-configured.
yum install -y httpd ==>> the package is downloaded online from the configured repository.
yum install -y git
yum install -y maven
yum install -y docker
yum install -y tomcat
=======================================================
Managing installed packages..
service packagename status
service packagename start
service packagename stop
service packagename restart
service packagename reload
====================================
service httpd status
service httpd start
service httpd stop
service httpd restart
service httpd reload.
restart ===>> service ==>> stop and start ( brief downtime ).
reload ===>> re-reads the service configuration without stopping the service ( e.g. after changing the httpd config , reload applies it with no downtime ).
========================================================================
The above service commands apply only to the current session ( they do not survive a reboot ).
chkconfig httpd on ==>> the service starts automatically at boot ==>> the application is always available to the end user.
chkconfig httpd off ==>> disable automatic start at boot.
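A hedged check:
chkconfig --list httpd ==>> shows on / off for each runlevel.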
========================================================================
[root@ip-172-31-4-161 ec2-user]# history
1 yum install httpd
2 service httpd status
3 service httpd start
4 service httpd status
5 cd /var/www/html/
6 ls
7 vi index.html
8 cd /home/ec2-user/
9 yum install -y docker
10 yum install -y git
11 yum list | grep jdk
12 yum install -y java-1.8.0-openjdk-devel.x86_64
13 yum install -y ansible
14 sudo amazon-linux-extras install ansible2 -y
15 history
16 yum install -y httpd
17 history
[root@ip-172-31-4-161 ec2-user]#
[root@ip-172-31-4-161 ec2-user]#
=====================================================
Job automation / job scheduling ..
Job ==>> task ==> scheduled at a particular interval of time ==>> job scheduling or job automation.
job scheduling ==>> two types of methods or jobs..
1. at job.
2. cron job.
1. at job : it is used to run a task only once at a specified time.
at <time>
<steps of the task>
ctrl + d ==>> save.
at now
mkdir sachin
ctrl + d ==>>> save.
at 10:30 am
ifconfig -a
ctrl + d ==>> save.
==>> whenever you create a job, the linux operating system automatically gives it one unique id ==>> the job id.
list of jobs ==>> atq
atrm jobid ==>> delete the at job
atrm 1234
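A hedged one-liner form ( the command and time are placeholders ):
echo "df -h > /tmp/disk.txt" | at 10:30 am
atq ==>> confirm the job is queued.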
===============
/etc/at.deny ==>>> bhargavi , pavan
/etc/at.allow ===>> ramakrishna , pavan
at and cron jobs are executed in first-in, first-out ( FIFO ) order.
========================================================
cron jobs : used for repetitive tasks.. ====>> poll scm , build periodically ==>> jenkins.
crontab -e ==>> here we create cron jobs; a cron job has 5 time fields plus the command:
min hours dayofmonth month dayofweek command / task / script.
* * 2 3 0 ./backup.sh
* ==>> every value.
*/2 in the minutes field ==>> every 2 minutes
*/5 in the hours field ==>> every 5 hours
*/4 in the day-of-month field ==>> every 4 days
*/3 in the month field ==>> every 3 months
0 in the day-of-week field ==>> Sunday
2-4 ==>> a range of values
2,4,6 ==>> a list of values
crontab -l ==>> list the cron jobs
crontab -r ==>> remove all cron jobs
crontab -u username ==>> manage another user's crontab
/etc/cron.deny ==>> vamsi , shekar
/etc/cron.allow ==>> rajendra , shekar
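A hedged example entry ( the script path is a placeholder ):
crontab -e
0 2 * * 0 /home/ec2-user/backup.sh ==>> runs at 2:00 am every Sunday.
crontab -l ==>> verify.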
==================================================
Troubleshooting commands / performance tuning / health checkup commands.
1. ps ==>> shows the processes running in the current shell session.
2. ps -elf ==>> displays all processes ( long format ).
3. bg ==>> resume a stopped job in the background.
4. fg ==>> bring a background job to the foreground.
5. ps -ef | grep smon ==>> is the application currently running on the linux server ?
6. ps -ef | grep pmon ==>> is the database currently running on the linux server ?
7. top ==>> process running , stop , uptime , load average , cpu , memory , swap ...etc..==>> exit ==>> press q button.
8. iostat ==>>> disk related information.
9. vmstat ==>> virtual memory statistics information ; free -m ==>> memory usage in MB.
10. uptime ==>> load average ===>> 3 fields ==>> 1m 5m 15m
11. netstat ==>> networking statistics information ; netstat -nr ==>>> routing table information.
12. sar ==>> system activity report.
=======================================================
[root@ip-172-31-14-39 ec2-user]#
[root@ip-172-31-14-39 ec2-user]# history
1 ps
2 ps -elf
3 bg
4 fg
5 ps -ef | grep smon
6 ps -ef | grep pmon
7 top
8 top
9 iostat
10 vmstat
11 free -m
12 netstat
13 netstat -nr
14 sar
15 uptime
16 history
[root@ip-172-31-14-39 ec2-user]#
[root@ip-172-31-14-39 ec2-user]#
=====================================
Create VPC :
1. Click on Create VPC.
2. Enter VPC Name
Enter CIDR block : 40.20.0.0/16 then click on Create VPC
Create Internet Gateway (IGW):-
Enter a valid name and click on Create IGW.
Change the state from Detached to Attached status ( attach it to the VPC ).
Create a Subnet
When we create a Subnet, we automatically get one Routing Table for that VPC.
we need to specify the Name for this Routing Table, then
associate the Subnets:
select our specific Subnet and click on Save Association.
After that we need to add the Routes for this Routing Table:
click on Add Route, enter Destination as 0.0.0.0/0, and select the Internet gateway as the Target.
Click on the Save Changes button.
The infrastructure is now complete.
Then create an Instance.
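The same steps can be scripted; a hedged AWS CLI sketch ( the IDs in angle brackets are placeholders returned by the earlier commands ):
aws ec2 create-vpc --cidr-block 40.20.0.0/16
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --vpc-id <vpc-id> --internet-gateway-id <igw-id>
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 40.20.1.0/24
aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>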