US EAST region failure – AZURE outage – Sleepless Night

Sleepless Night –

Our application, hosted on Azure VMs, suddenly stopped working; we could not access the VMs or even reach support. Our customer's customers are having a very tough time, and there is a huge business impact. Our team has been up the whole night waiting for a resolution from Azure.

Current status of the region failure in Azure –

There is no resolution yet. When our customer asked about BCP on a cloud platform other than Azure, we were surprised.

But now it’s the reality.

We are also not getting proper support from the Microsoft team.

Sorry, Azure. You still have a long way to go.


How to access S3 Bucket from application on Amazon EC2 without access credentials


  1. You know the use of AWS S3 and how to access an S3 bucket from an application with the help of a Secret Key/Access Key.
  2. In this blog, we will use the S3 bucket "parthicloud-test" as the bucket where static images such as photos are stored for the application.
  3. Developers usually use the Access Key/Secret Key to access the S3 bucket from the application through SDKs or the AWS API.
  4. Managing the Access Key/Secret Key and keeping it secure becomes a pain for developers and administrators.

Use case

Developers want to read/write/list files in the "parthicloud-test" S3 bucket programmatically from an EC2 instance, without managing or configuring an AWS Secret Key/Access Key.


We can use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When we use a role, we don't have to distribute long-term credentials to the instance. The role supplies temporary permissions that applications can use when they make API calls to S3.


  • Role credentials are temporary and rotated automatically.
  • Developers don't have to manage credentials.
  • We don't have to worry about long-term security risks.
  • A single role can be assigned to multiple EC2 instances whose applications require access to S3.
  • We can change the role's policy at any time, and the change is propagated automatically to all the instances.


  • An IAM role cannot be assigned to an instance that is already running.
  • If we need to add a role to a running instance, the only option is to create an image of the instance and then launch a new instance from that image with the desired role assigned.

How does it work?

A developer runs an application on an EC2 instance that requires access to the S3 bucket named "parthicloud-test". The AWS administrator creates the "ParthiCloud-S3" role. The role contains the policies that grant read/write/list permissions for the bucket.

When the application runs on the instance, it can use the role's temporary credentials to access the parthicloud-test S3 bucket. The AWS administrator doesn't have to grant the developer permission to access the bucket, and the developer never has to share or manage credentials, which is very risky from a security compliance standpoint.

There is another application running on an EC2 instance that doesn't have an IAM role attached. When that application tries to access the parthicloud-test bucket, access is denied because no Secret Key/Access Key is available. Refer to the illustration below.


Let's discuss the steps in detail: creating the VPC, subnet, S3 bucket, IAM role and policy, launching an instance with the IAM role, and accessing the S3 bucket from the instance.

Step 1 – Create VPC

Let’s create a VPC with a single subnet for the illustration purpose.


Step 2 – Create Key Pair

Create a key pair by providing a friendly name. It will be used to access the instances using PuTTY.


A <key pair name>.pem file will be downloaded; in our case it is ParthiCloud.pem. We need PuTTYgen to convert the .PEM file to a .PPK file.


Click File -> Load Private Key, then click Save private key and save the .PPK file in the desired location. We also have the option to protect the .PPK file with a password, which can be assigned in the Key passphrase box as shown in the image above.

 Step 3 – Create S3 Bucket

Create an S3 bucket named "parthicloud-test" in the US Standard region.


Upload a test file – “Test.txt” in the S3 Bucket



Step 4 – Create IAM Policy and Role

Create a policy to access the S3 bucket. Select "Create Your Own Policy".


Enter the policy name, description, and policy document as given below.

 {
   "Version": "2012-10-17",
   "Statement": [
     {
       "Effect": "Allow",
       "Action": ["s3:ListBucket"],
       "Resource": ["arn:aws:s3:::parthicloud-test"]
     },
     {
       "Effect": "Allow",
       "Action": ["s3:GetObject", "s3:PutObject"],
       "Resource": ["arn:aws:s3:::parthicloud-test/*"]
     }
   ]
 }
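The policy document can be drafted and sanity-checked locally before pasting it into the console. A sketch, assuming actions that match the read/write/list use case (the filename is arbitrary):

```shell
# Write the policy to a local file. The action list below is an assumption
# based on the stated read/write/list requirement for parthicloud-test.
cat > ParthiCloud-S3-Policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::parthicloud-test"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::parthicloud-test/*"]
    }
  ]
}
EOF

# Catch JSON typos before the console rejects the document.
python3 -m json.tool ParthiCloud-S3-Policy.json > /dev/null && echo "policy JSON is valid"
```

Note that bucket-level actions (s3:ListBucket) go against the bucket ARN, while object-level actions (s3:GetObject/s3:PutObject) go against `arn:aws:s3:::parthicloud-test/*`.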




Create a role by giving it a name.


Select Role Type as Amazon EC2
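Selecting the Amazon EC2 role type is what sets the role's trust policy, allowing the EC2 service to assume the role on our behalf. This is the standard trust document AWS generates; a sketch that writes and validates it locally (the filename is arbitrary):

```shell
# Standard EC2 trust relationship; the console creates this automatically
# when the Amazon EC2 role type is selected.
cat > ParthiCloud-S3-Trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

python3 -m json.tool ParthiCloud-S3-Trust.json > /dev/null && echo "trust policy JSON is valid"
```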


Then attach the policy "ParthiCloud-S3-Policy".



Now the IAM role and policy are ready. Let's launch the instance with the IAM role.

Step 5 – Launch Instance

Launch an Ubuntu instance; a micro instance is used for illustration.


Select the VPC, subnet, and IAM role that were created earlier.

Add Storage-


Tag the instance. Tags are very helpful during billing analysis.


Create a security group. Don't allow access from anywhere (; it is not recommended.


Click Review & Launch. You will be asked to select a key pair; select the previously created key pair and click Launch Instances.


The ParthiCloud instance is launched.

Access Instance

Enter ubuntu@<Elastic IP> in the Host Name field and attach the private key in the "Auth" section to connect to the instance.


Access S3 bucket from Instance

We have already uploaded a file named Test.txt to the parthicloud-test S3 bucket. Type the command below to verify access and list the files in the bucket. Note that we have not specified any Access Key/Secret Key on the instance.

$ aws s3 ls s3://parthicloud-test


Let’s try to upload a file to S3 Bucket

$ aws s3 cp newfile.txt s3://parthicloud-test/ --region us-east-1


$ aws s3 ls s3://parthicloud-test


The new file was successfully uploaded to the S3 bucket.

We have discussed in detail how to use an IAM role with an EC2 instance so that the application running on it can access an S3 bucket.


Amazon EC2 Dedicated Hosts – Competition to IBM softlayer Bare Metal servers

What is Amazon EC2 Dedicated Hosts?

Amazon announced a new variant of EC2 called Dedicated Hosts yesterday (6 Oct 2015). An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to our use. It helps us reduce costs by allowing us to use our existing server-bound software licenses.

We can allocate a Dedicated Host in a specific region and Availability Zone, for a particular EC2 instance type.

Each Dedicated Host has room for a predefined number of instances of a particular type. For example, if a host has room for ten m4.xlarge instances, we can launch up to ten m4.xlarge instances on it. It is very similar to the virtualization we used to do on blade servers.

A Dedicated Host comes with a predefined combination of CPU cores and memory (the instance type), unlike pure-play virtualization, where we can select CPU and memory independently.

But let's not forget: AWS offers a state-of-the-art management console, API, and CLI support to manage Dedicated Hosts.

IBM SoftLayer already has bare-metal servers, and EC2 Dedicated Hosts will compete with them directly. We will have to wait for pricing information to make a comparison.

Licensing Benefit – Cut costs

It provides great value for corporates who want to migrate from on-premises to the cloud. Dedicated Hosts allow us to use existing per-socket, per-core, or per-VM software licenses, including Microsoft Windows Server, Microsoft SQL Server, SUSE Linux Enterprise Server, and other software licenses bound to VMs, sockets, or physical cores.

Automatic Instance Placement

We have the option to launch instances onto a specific Dedicated Host, or we can let Amazon EC2 place the instances automatically. This helps us address licensing and corporate compliance requirements.


Affinity is one of the important features of Dedicated Hosts: it allows us to specify which Dedicated Host an instance will run on after it has been stopped and restarted. This ensures that the instance always runs on the same physical server, even through planned interruptions. It helps reduce costs for licenses that require affinity to a host for a period of time (some number of days), and it can be maintained using the instance placement settings.

Greater Visibility

It gives us greater visibility into the number of sockets and physical cores in a Dedicated Host, which helps us manage licensing of our own server-bound software that is licensed per socket or per core.


We can use AWS Config, which records when instances are launched, stopped, or terminated on a Dedicated Host, and pairs this information with host-level information relevant to software licensing. AWS Config can therefore be used as a data source for license reporting.

Pricing Options

Amazon EC2 Dedicated Hosts will be available in Reserved and On-Demand forms. We pay for the host regardless of whether we run instances on it.

So it's important to do our homework assessing requirements and workload before ordering Dedicated Hosts. Remember, we are paying for a giant server: AWS doesn't care how many instances we run on the Dedicated Host. It's up to us to utilize its resources properly.


We can easily bring our own machine images to AWS using VM Import and the vCenter portal.


It is recommended for those who run their infrastructure on-premises, have licensing partnerships with companies like Microsoft, and wish to migrate to the cloud as part of their business strategy without diluting the software licenses they have already procured.



Step by Step guide to install SSL in AWS ELB

SSL Installation in AWS ELB

Generate Private Key:

Generate a CSR in Microsoft IIS

1. Click Start, then Administrative Tools, then Internet Information Services (IIS) Manager.
2. Click on the server name.
3. From the center menu, double-click the "Server Certificates" button in the "Security" section (it is near the bottom of the menu).


4.  Next, from the “Actions” menu (on the right), click on “Create Certificate Request.” This will open the Request Certificate wizard.








  5. In the "Distinguished Name Properties" window, enter the information as follows:
     1. Common Name – the name through which the certificate will be accessed (usually the fully-qualified domain name).
     2. Organization – the legally registered name of your organization/company.
     3. Organizational Unit – the name of your department within the organization (frequently "IT" or "Web Security", or simply left blank).
     4. City/Locality – the city in which your organization is located.
     5. State/Province – the state in which your organization is located.








6. Click Next.
7. In the "Cryptographic Service Provider Properties" window, leave both settings at their defaults (Microsoft RSA SChannel and 2048) and then click Next.








8. Enter a filename for your CSR file. 

9. Remember the filename that you choose and the location to which you save it. You will need to open this file as a text file and copy the entire body of it (including the Begin and End Certificate Request tags) into the online order process when prompted.


Back Up Private Key

To back up a private key on Microsoft IIS 7.0 follow these instructions:

1. From your server, go to Start > Run and enter mmc in the text box. Click on the OK button.
2. From the Microsoft Management Console (MMC) menu bar, select Console > Add/Remove Snap-in.
3. Click on the Add button. Select Certificates from the list of snap-ins and then click on the Add button.


4. Select the Computer account option. Click on the Next button.

5. Select the Local computer (the computer this console is running on) option. Click on the Finish button.
6. Click on the Close button on the snap-in list window. Click on the OK button on the Add/Remove Snap-in window.
7. Click on Certificates in the left pane. Look for a folder called REQUEST or "Certificate Enrollment Requests > Certificates".


8. Select the private key that you wish to back up. Right-click the file and choose All Tasks > Export.


9. The Certificate Export Wizard will start; click Next to continue. In the next window, select "Yes, export the private key" and click Next.

10. Leave the default settings selected and click Next.


11. Set a password on the private key backup file and click Next.
12. Click Browse, select a location to save the private key backup file, and click Next to continue. By default the file will be saved with a .pfx extension.
13. Click Finish to complete the export process.

Convert to RSA Private Key Format

The private key is backed up as a ‘.pfx’ file, which stands for Personal Information Exchange.

To convert it to the RSA private key format required for the ELB upload:

  1. Download and install the latest version of OpenSSL for Windows.
  2. Open the command prompt and run the following commands:
openssl pkcs12 -in filename.pfx -nocerts -out key.pem
openssl rsa -in key.pem -out myserver.key

3. The private key will be saved as 'myserver.key'.

4. Protect the private key carefully and be sure to back it up; there is no way to recover it if it is lost.
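The conversion can be rehearsed end to end without a real certificate. The sketch below creates a throwaway self-signed certificate, packages it as a .pfx the way IIS would export it, and then extracts the RSA key using the same two commands; all filenames and passwords here are made up:

```shell
# 1. Create a throwaway key and self-signed certificate (demo only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo.key -out demo.crt -subj "/CN=demo.example.com"

# 2. Package them as a .pfx, as the IIS export wizard would.
openssl pkcs12 -export -inkey demo.key -in demo.crt \
  -out filename.pfx -passout pass:pfxpass

# 3. Pull the key out of the .pfx, then strip the passphrase so the
#    resulting myserver.key can be pasted into the ELB console.
openssl pkcs12 -in filename.pfx -nocerts -out key.pem \
  -passin pass:pfxpass -passout pass:temp1234
openssl rsa -in key.pem -out myserver.key -passin pass:temp1234
```

With a real IIS export, steps 1 and 2 are unnecessary; only the last two commands are run against the exported .pfx, entering the backup password when prompted.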

Configure SSL in ELB

Select the desired load balancer from the list of available load balancers in the load balancer dashboard.

Click the "Listeners" tab on the load balancer details page.

Click the Edit button in the Listeners section to add an HTTPS listener.









Click the Add button to add a new listener (HTTPS).

Select the protocol and port as shown in the screenshot above.

Click the Change link to modify the ciphers.







Select “Predefined Security Policy”. Make sure TLSv1 is disabled.

Click on Save.

Click on Change in “SSL Certificate”.







Select "Upload a new SSL Certificate" as the Certificate Type.

Fill in the following details:

  1. Certificate Name: a name for the certificate.
  2. Private Key: the RSA key generated in the steps above.
  3. Public Key Certificate: the public key certificate received from the SSL provider.
  4. Certificate Chain: the intermediate and chain certificates provided by the SSL provider.

SSL installation in AWS ELB is complete.


OSSEC Agent installation in Linux Step by Step

OSSEC Agent Installation on Linux 

Step 1

Download the OSSEC agent and issue the command below:

tar xf ossec-hids-2.8.1.tar.gz

 Step 2

It will be unpacked into a directory called ossec-hids-2.8.1. Go to that directory.

cd ossec-hids-2.8.1/

 Step 3

Then start the installation.

Select agent mode when installing OSSEC on the server machines and end hosts that are to be monitored.





Step 4

Set the configuration path (/var/ossec by default).




Step 5

Enter the IP address of the OSSEC server/manager.




Step 6

Enable the integrity check feature of OSSEC on the agent.



Step 7

Enable the rootkit detection and active response features.







Step 8

Press Enter to start the installation process.





Step 9

The following screen shows the start/stop scripts and configuration path for OSSEC. Press Enter to complete the installation process.







Step 10

Add Agent to Server and Extract Its Key

On the OSSEC server, start the process of adding the agent by running /var/ossec/bin/manage_agents.


You will then be presented with the options shown below. Choose "A" to add an agent.

(A)dd an agent (A).
(E)xtract key for an agent (E).
(L)ist already added agents (L).
(R)emove an agent (R).
Choose your action: A,E,L,R or Q: A

Then you'll be prompted to specify a name for the agent, its IP address, and an ID. Make the name unique, as it will help you filter alerts received from the server.

For the ID, you may accept the default by pressing ENTER.

When you have entered all three fields, enter y to confirm.

- Adding a new agent (use '\q' to return to the main menu).
  Please provide the following:
   * A name for the new agent: agentUbuntu
   * The IP Address of the new agent: your_agent_ip
   * An ID for the new agent[001]:

Agent information:
   ID: 001
   IP Address:
Confirm adding it? (y/n): y

Agent added.

Step 11

After that, you'll be returned to the main menu. Now you have to extract the agent's key. Make sure you copy it, because you'll have to enter it on the agent.

... Choose your action: A,E,L,R or Q: e

Available agents:
   ID: 001, Name: agentUbuntu


Provide the ID of the agent to extract the key (or '\q' to quit): 001
Agent key information for '001' is:MDAxIGFnZW50VWJ1bnyEwNjI5MjI4ODBhMDkzMzA4MR1IXXwNC4yMzYuMjIyLjI1MSBiMTI2U3MTI4YWYzYzg4M2YyNTRlYzM5M2FmNGVhNDYTIwNDE3NDI1NWVkYmQw **
Press ENTER to return to the main menu.

Step 12

After pressing ENTER, you’ll be returned to the main menu again. Type q to quit.

... Choose your action: A,E,L,R or Q: q 
** You must restart OSSEC for your changes to take effect. manage_agents: Exiting ..

Step 13

Import The Key From Server to Agent

This section is completed on the agent. It involves importing (copying) the agent's key extracted on the server and pasting it into the agent's terminal. To start, change to root by typing:

sudo su

Then run /var/ossec/bin/manage_agents.


You’ll be presented with these options:

   (I)mport key from the server (I).
   (Q)uit.
Choose your action: I or Q: i

After typing the correct option, follow the directions to copy and paste the key generated from the server.

Agent information:
   ID: 001
   Name: test
   IP Address:
Confirm adding it? (y/n): y

** Press ENTER to return to the main menu.

Back to the main menu, type q to quit:

Choose your action: I or Q: q

This completes the agent installation in Linux.


OSSEC Agent Installation in windows Step-by-Step

Installing OSSEC agent in a Windows server

Step 1

Create a new OSSEC key for the agent from the Server

Step 2

Run manage_agents on the OSSEC server.

The server version of manage_agents provides an interface to:

  • add an OSSEC agent to the OSSEC server
  • extract the key for an agent already added to the OSSEC server
  • remove an agent from the OSSEC server
  • list all agents already added to the OSSEC server.

Step 3:

To add an agent, run the command /var/ossec/bin/manage_agents.


The manage_agents menu:


* OSSEC HIDS v2.5-SNP-100809 Agent manager. *
* The following options are available: *

(A)dd an agent (A).
(E)xtract key for an agent (E).
(L)ist already added agents (L).
(R)emove an agent (R).
(Q)uit.
Choose your action: A,E,L,R or Q:

Typing a letter and hitting Enter will initiate that function.

Step 4:

Adding an agent

To add an agent, type a at the start screen:

Choose your action: A,E,L,R or Q: A 

You are then prompted to provide a name for the new agent. This can be the hostname or another string that identifies the system. In this example the agent name will be agent1.

Adding a new agent (use '\q' to return to the main menu).  
Please provide the following:   * A name for the new agent: agent1

After that, you have to specify the IP address for the agent:

The IP Address of the new agent:

The last piece of information you will be asked for is the ID you want to assign to the agent.

An ID for the new agent[001]:

As the final step in creating an agent, you have to confirm adding it:

Agent information:
   ID: 002
   Name: agent1
   IP Address:
Confirm adding it? (y/n): y

Agent added.

After that, manage_agents appends the agent information to /var/ossec/etc/client.keys and goes back to the start screen.

Step 5:

Extracting the key for an agent

After adding an agent, a key is created. This key must be copied to the agent. To extract the key, use the e option on the manage_agents start screen. You will be given a list of all agents on the server. To extract the key for an agent, simply type in the agent ID. Note that you have to enter all digits of the ID.

Choose your action: A,E,L,R or Q: E
Available agents:   ID: 001, Name: agent1, IP:
Provide the ID of the agent to extract the key (or '\q' to quit): 001
Agent key information for '001' is:MDAyIGFnZW50MSAxOTIuMTY4LjIuMC8yNCBlNmY3N2RiMTdmMTJjZGRmZjg5YzA4ZDk5m

** Press ENTER to return to the main menu.

The key is encoded in the string (shortened for this example) MDAyIGFnZW50MSAxOTIuMTY4LjIuMC8yNCBlNmY3N2RiMTdmMTJjZGRmZjg5YzA4ZDk5Mm and includes information about the agent. This string can be added to the agent through the agent version of manage_agents.
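The encoding is plain base64 of the agent's client.keys line, i.e. "<id> <name> <ip> <secret>". A quick round trip with made-up values shows the format (a real secret is a much longer hex string):

```shell
# Made-up agent entry for illustration; "any" is a valid OSSEC IP value.
entry='002 agent1 any e6f77db17f12cddff89c08d992'

# Encode it the way the exported key string is produced:
key=$(printf '%s' "$entry" | base64 -w0)
echo "$key"

# Decoding recovers the original client.keys line:
printf '%s' "$key" | base64 -d
echo
```

This is why the extracted string must be copied exactly: a single dropped character corrupts the embedded ID, name, IP, and secret all at once.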

Step 6:

Download the OSSEC agent for Windows and keep it in the location where we need to install it.







































Step 7









In the OSSEC Server IP field, give the IP address of the OSSEC server.

In the Authentication key field, paste the key that we extracted earlier.

Step 8








Click Save, then click Manage and restart OSSEC.












PCI-DSS v3.1 Compliance with Zero cost

What is OSSEC?

OSSEC is an open-source host-based intrusion detection system that performs log analysis, file integrity checking, policy monitoring, rootkit detection, and real-time alerting.

Can I address PCI-DSS v3.1 requirements using OSSEC?

Yes. By installing and configuring OSSEC in the VPC, we can easily address the PCI-DSS v3.1 requirements below at zero cost.

PCI-DSS v3.1 requirements related to OSSEC

Requirement 10.2: Implement automated audit trails for all system components to reconstruct the following events.

Testing procedure 10.2: Through interviews of responsible personnel, observation of audit logs, and examination of audit log settings, perform the following.

Guidance: Generating audit trails of suspect activities alerts the system administrator, sends data to other monitoring mechanisms (like intrusion detection systems), and provides a history trail for post-incident follow-up. Logging of the following events enables an organization to identify and trace potentially malicious activities.


Requirement 10.5.5: Use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert).

Testing procedure 10.5.5: Examine system settings, monitored files, and results from monitoring activities to verify the use of file-integrity monitoring or change-detection software on logs.

Guidance: File-integrity monitoring or change-detection systems check for changes to critical files, and notify when such changes are noted. For file-integrity monitoring purposes, an entity usually monitors files that don't regularly change, but when changed indicate a possible compromise.


Requirement 10.6.1: Review the following at least daily:

· All security events

· Logs of all system components that store, process, or transmit CHD and/or SAD, or that could impact the security of CHD and/or SAD

· Logs of all critical system components

· Logs of all servers and system components that perform security functions (for example, firewalls, intrusion-detection systems/intrusion-prevention systems (IDS/IPS), authentication servers, e-commerce redirection servers, etc.)

Testing procedure 10.6.1.a: Examine security policies and procedures to verify that procedures are defined for reviewing the items above at least daily, either manually or via log tools.

Testing procedure 10.6.1.b: Observe processes and interview personnel to verify that the items above are reviewed at least daily.

Guidance: Many breaches occur over days or months before being detected. Checking logs daily minimizes the amount of time and exposure of a potential breach. Daily review of security events (for example, notifications or alerts that identify suspicious or anomalous activities), as well as logs from critical system components and logs from systems that perform security functions, such as firewalls, IDS/IPS, file-integrity monitoring (FIM) systems, etc., is necessary to identify potential issues. Note that the determination of "security event" will vary for each organization and may include consideration for the type of technology, location, and function of the device. Organizations may also wish to maintain a baseline of "normal" traffic to help identify anomalous behavior.

Requirement 11.4: Use intrusion-detection and/or intrusion-prevention techniques to detect and/or prevent intrusions into the network. Monitor all traffic at the perimeter of the cardholder data environment as well as at critical points in the cardholder data environment, and alert personnel to suspected compromises. Keep all intrusion-detection and prevention engines, baselines, and signatures up to date.

Testing procedure 11.4.a: Examine system configurations and network diagrams to verify that techniques (such as intrusion-detection systems and/or intrusion-prevention systems) are in place to monitor all traffic:

· At the perimeter of the cardholder data environment

· At critical points in the cardholder data environment.

Testing procedure 11.4.b: Examine system configurations and interview responsible personnel to confirm intrusion-detection and/or intrusion-prevention techniques alert personnel of suspected compromises.

Testing procedure 11.4.c: Examine IDS/IPS configurations and vendor documentation to verify intrusion-detection and/or intrusion-prevention techniques are configured, maintained, and updated per vendor instructions to ensure optimal protection.

Guidance: Intrusion detection and/or intrusion prevention techniques (such as IDS/IPS) compare the traffic coming into the network with known "signatures" and/or behaviors of thousands of compromise types (hacker tools, Trojans, and other malware), and send alerts and/or stop the attempt as it happens. Without a proactive approach to unauthorized activity detection, attacks on (or misuse of) computer resources could go unnoticed in real time. Security alerts generated by these techniques should be monitored so that the attempted intrusions can be stopped.
Requirement 11.5: Deploy a change-detection mechanism (for example, file-integrity monitoring tools) to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.

Note: For change-detection purposes, critical files are usually those that do not regularly change, but the modification of which could indicate a system compromise or risk of compromise. Change-detection mechanisms such as file-integrity monitoring products usually come pre-configured with critical files for the related operating system. Other critical files, such as those for custom applications, must be evaluated and defined by the entity (that is, the merchant or service provider).

Testing procedure 11.5.a: Verify the use of a change-detection mechanism within the cardholder data environment by observing system settings and monitored files, as well as reviewing results from monitoring activities.

Examples of files that should be monitored:

· System executables

· Application executables

· Configuration and parameter files

· Centrally stored, historical or archived, log and audit files

· Additional critical files determined by entity (for example, through risk assessment or other means).

Testing procedure 11.5.b: Verify the mechanism is configured to alert personnel to unauthorized modification of critical files, and to perform critical file comparisons at least weekly.

Guidance: Change-detection solutions such as file-integrity monitoring (FIM) tools check for changes to critical files, and notify when such changes are detected. If the change-detection solution is not implemented properly and its output monitored, a malicious individual could alter configuration file contents, operating system programs, or application executables. Unauthorized changes, if undetected, could render existing security controls ineffective and/or result in cardholder data being stolen with no perceptible impact to normal processing.
Requirement 12.10.5: Include alerts from security monitoring systems, including but not limited to intrusion-detection, intrusion-prevention, firewalls, and file-integrity monitoring systems.

Testing procedure 12.10.5: Verify through observation and review of processes that monitoring and responding to alerts from security monitoring systems, including detection of unauthorized wireless access points, are covered in the incident response plan.

Guidance: These monitoring systems are designed to focus on potential risk to data, are critical in taking quick action to prevent a breach, and must be included in the incident-response processes.





Step 1 — Install Necessary Packages

apt-get update

apt-get install build-essential inotify-tools

Step 2 — Download and Verify OSSEC

wget -U ossec

Step 3 — Install OSSEC

OSSEC can be installed in server, agent, local, or hybrid mode. The installation steps below are meant for monitoring the instances where the OSSEC agent is installed.

Before installation can start, we have to expand the file

tar -zxf ossec-hids-2.8.1.tar.gz
cd ossec-hids-2.8.1

To see the contents of the directory that you’re now in, use the ls command by typing:

ls -l

You should see these files and directories:

drwxrwxr-x  4  4096 Sep  8 21:03 active-response
-rw-rw-r--  1   542 Sep  8 21:03 BUGS
-rw-rw-r--  1   289 Sep  8 21:03 CONFIG
drwxrwxr-x  6  4096 Sep  8 21:03 contrib
-rw-rw-r--  1  3196 Sep  8 21:03 CONTRIBUTORS
drwxrwxr-x  4  4096 Sep  8 21:03 doc
drwxrwxr-x  4  4096 Sep  8 21:03 etc
-rw-rw-r--  1  1848 Sep  8 21:03 INSTALL
-rwxrwxr-x  1 32019 Sep  8 21:03
-rw-rw-r--  1 24710 Sep  8 21:03 LICENSE
-rw-rw-r--  1  1664 Sep  8 21:03
drwxrwxr-x 30  4096 Sep  8 21:03 src


To install OSSEC, type ./ If your language is English, press ENTER at the language prompt; otherwise, type the two letters for your language and press ENTER.

(en/br/cn/de/el/es/fr/hu/it/jp/nl/pl/ru/sr/tr) [en]:

After selecting the language, you should see this:

OSSEC HIDS v2.8 Installation Script

You are about to start the installation process of the OSSEC HIDS.
You must have a C compiler pre-installed in your system.
If you have any questions or comments, please send an e-mail to (or

 - System: Linux kuruji 3.13.0-36-generic
 - User: root
 - Host: kuruji

-- Press ENTER to continue or Ctrl-C to abort. --

After pressing ENTER, you should get:

1- What kind of installation do you want (server, agent, local, hybrid or help)? local

Type local and press ENTER. You should get:

  - Local installation chosen.

2- Setting up the installation environment.

  - Choose where to install the OSSEC HIDS [/var/ossec]:

Accept the default and press ENTER. After that, you’ll get:

  - Installation will be made at /var/ossec.

3- Configuring the OSSEC HIDS.

  - Do you want e-mail notification? (y/n) [y]:

Press ENTER.

  - What's your e-mail address?

Type the email address where you want to receive notifications from OSSEC.

  - We found your SMTP server as:  - Do you want to use it? (y/n) [y]: --- Using SMTP server:

Press ENTER unless you have specific SMTP server settings you want to use.

Now it's time to let OSSEC know what checks it should be running. In response to any prompt from the script, accept the default by pressing ENTER.

Press ENTER for the integrity check daemon.

- Do you want to run the integrity check daemon? (y/n) [y]: - Running syscheck (integrity check daemon).

ENTER for rootkit detection.

  - Do you want to run the rootkit detection engine? (y/n) [y]:

  - Running rootcheck (rootkit detection).

ENTER for active response.

  - Active response allows you to execute a specific command based on the events received.

    Do you want to enable active response? (y/n) [y]:

  - Active response enabled.

Accept the defaults for firewall-drop response. Your output may show some IPv6 options – that’s fine.

  - Do you want to enable the firewall-drop response? (y/n) [y]:

  - firewall-drop enabled (local) for levels >= 6

  - Default white list for the active response:
     -
     -

  - Do you want to add more IPs to the white list? (y/n)? [n]:

You may add your IP address here, but that’s not necessary.

OSSEC will now present a default list of files that it will monitor. Additional files can be added after installation, so press ENTER.
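If you later want to monitor additional files or directories, you can edit OSSEC’s main configuration file. As a sketch (the paths and the 7200-second frequency are illustrative examples, not values from this walkthrough), a syscheck entry inside the `<ossec_config>` block of /var/ossec/etc/ossec.conf looks like this:

```xml
<syscheck>
  <!-- How often (in seconds) to run the integrity check -->
  <frequency>7200</frequency>

  <!-- System directories OSSEC should watch for changes -->
  <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>

  <!-- Example: also watch a custom application directory -->
  <directories check_all="yes">/var/www</directories>
</syscheck>
```

After editing the file, restart OSSEC with /var/ossec/bin/ossec-control restart so the new entries take effect.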

Step 4 — Start OSSEC

By default OSSEC is configured to start at boot, but the first time, you’ll have to start it manually.

If you want to check its current status, type:

/var/ossec/bin/ossec-control status

That tells you that none of OSSEC’s processes are running.

To start OSSEC, type:

/var/ossec/bin/ossec-control start

You should see it starting up:

Starting OSSEC HIDS v2.8 (by Trend Micro Inc.)...
Started ossec-maild...
Started ossec-execd...
Started ossec-analysisd...
Started ossec-logcollector...
Started ossec-syscheckd...
Started ossec-monitord...
Completed.

If you check the status again, you should get confirmation that OSSEC is now running.

/var/ossec/bin/ossec-control status

This output shows that OSSEC is running:

ossec-monitord is running...
ossec-logcollector is running...
ossec-syscheckd is running...
ossec-analysisd is running...
ossec-maild is running...
ossec-execd is running...

Right after starting OSSEC, you should get an email that reads like this:

OSSEC HIDS Notification.
2014 Nov 30 11:15:38

Received From: ossec2->ossec-monitord
Rule: 502 fired (level 3) -> "Ossec server started."
Portion of the log(s):

ossec: Ossec started.

OSSEC is now successfully installed. We will cover how to install and configure agents in the next post.








What’s new in PCI-DSS v3.1 update

The PCI SSC has announced the PCI DSS 3.1 update.

This update makes the following summary clarifications about the use of SSLv3 and TLS 1.0 in PCI relevant environments:

  • New implementations must use alternatives to SSL and early TLS.
  • Organizations with existing implementations of SSL and early TLS must have a risk mitigation and migration plan in place.
  • Prior to June 30, 2016, Approved Scanning Vendors (ASVs) may document receipt of an organization’s risk mitigation and migration plan as an exception in the ASV Scan Report (in accordance with the ASV Program Guide).
  • Point of Sale (POS) or Point of Interaction (POI) devices that can be verified as not being susceptible to all known exploits of SSL and early TLS may continue to use these protocols as a security control after June 30, 2016.

April 15, 2015 brought us the much-anticipated release of the PCI DSS 3.1 standard from the PCI Council.  As SSL and early TLS are no longer considered strong cryptography, this release describes how the industry is to move forward with regard to the use of SSL and early TLS versions and how current PCI DSS status is impacted.

The PCI DSS requirements that are directly affected by this update are:

  • Requirement 2.2.3: Implement additional security features for any required services, protocols, or daemons that are considered to be insecure;
  • Requirement 2.3: Encrypt all non-console administrative access using strong cryptography; and
  • Requirement 4.1: Use strong cryptography and security protocols to safeguard sensitive cardholder data during transmission over open, public networks.

Does this mean that if one has used SSL to address the above requirements, that one now fails these requirements?  No, it does not.  The updated standard allows for a timeframe in which SSL and early TLS can be phased out.  The updated standard specifies a deadline of June 30, 2016, after which SSL and early TLS must no longer be used.  However, a few caveats apply:

  • Prior to June 30, 2016, existing implementations that use SSL and/or early TLS must have a formal Risk Mitigation and Migration Plan in place.
  • Effective immediately, new implementations must not use SSL and/or early TLS.
  • POS POI terminals (and the SSL/TLS termination points to which they connect) verified as not susceptible to any known exploits for SSL and/or early TLS may continue using these weaker protocols after June 30, 2016.

What are SSL and early TLS?

SSL v3.0 has existed for at least 15 years.  It was superseded by TLS v1.0, which was in turn replaced by TLS v1.1 and TLS v1.2.  The problem is that applications that supported SSL v3.0 and TLS v1.0 did not remove support for these protocol versions; they simply added support for TLS v1.1 and later.  This was done to maintain backward compatibility with consumer web browsers, POS terminals, and legacy applications that may not have been upgraded to support TLS v1.1+.  This was fine until exploits that could not be fixed were discovered in SSL and early TLS.  Therefore, for the purposes of the updated PCI DSS 3.1 standard, “SSL” is defined as SSL v3.0 or earlier, and “early TLS” is defined as TLS v1.0.

What is “new” versus “existing” implementation?

Understanding the difference is critical because an “existing” implementation may continue to use the insecure protocols up until June 30, 2016, while a “new” implementation may not.  According to supplemental guidance from the PCI SSC, an “existing” implementation is one where there is a pre-existing dependency or use of the vulnerable protocols (SSLv3.0/TLSv1.0), and a “new” implementation is one where there is no existing dependency on the use of the vulnerable protocols.

Practically speaking, if an organization currently uses SSLv3.0/TLSv1.0 and these weaker protocols are required to continue operations, then the organization may continue using these protocols. However, it is essential to consider each case individually.  Consider the following examples:

Example 1:  A payment gateway supports an API that accepts transactions from terminals or POS software communicating over SSLv3.0/TLSv1.0.  This payment gateway can continue to use these weaker protocols until June 30, 2016.  Even if the payment gateway provider builds a new API that supports a stronger version of TLS, weaker protocols can continue to be used until the June deadline as long as continued operations depend on supporting legacy software and terminals that rely on the weaker protocols.

Example 2:  A payment gateway provides access to a virtual terminal interface or to a management portal interface.  Since an end-user’s web browser is not considered a “pre-existing” dependency, this payment gateway cannot continue to support SSLv3.0/TLSv1.0 and must migrate to stronger protocols at once.

In fact, for those who operate an e-Commerce site or portal, it is mandatory to update immediately to more secure protocols unless sufficient evidence shows that the application or server software must continue to support weaker protocols.

If weaker protocols must continue to be used, an organization must develop a Risk Mitigation and Migration Plan.  Only by developing this plan can an organization that continues to use the weaker SSLv3.0/TLSv1.0 meet the current PCI DSS 3.1 requirements.  The plan must:

  • detail each scenario where insecure protocols are used;
  • define the existing risk reduction controls to prevent and detect attacks;
  • describe methods in place to monitor for new vulnerabilities associated with the insecure protocols;
  • describe change control methods to ensure that the insecure protocols are not allowed to be implemented into new environments; and
  • outline how the environment will be migrated to meet the June 30, 2016 deadline.

Why are POI terminal deployments exempt?

Point of Interaction (POI) devices that support the weaker SSLv3.0/TLSv1.0 are not subject to the June 30, 2016 deadline.  Weaker protocols can continue to be used because POI devices are not as prone to known vulnerability exploits.  Exploits generally require that a device support multiple client side connections, JavaScript, cookies, or that the end-user software be a web browser.  Since POI devices do not operate in this manner, they are not as susceptible to known attack vectors.  In addition, the device communications adhere to specified message types that limit exposure to replay attacks.  However, should new vulnerabilities be discovered, this exemption for POI devices can readily disappear.

What action should I take going forward?

If possible, upgrade or configure systems to use only TLS v1.1 or greater and disable fallback to lower protocols.  It is important to note that proper configuration of TLS v1.1 or greater is critical and is fully outlined in the NIST 800-52 rev1 publication.  A key requirement is ensuring that proper cipher suites are used; strong cipher suites are critical for mitigating attacks that exploit weaker ones.
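In practice, disabling SSLv3 and TLS v1.0 on a web server often comes down to a single configuration directive. The fragments below are illustrative sketches for Apache (mod_ssl) and nginx; verify the exact syntax against the documentation for your server version before deploying:

```
# Apache (mod_ssl): allow only TLS v1.1 and TLS v1.2
SSLProtocol all -SSLv2 -SSLv3 -TLSv1

# nginx: list only the protocols you are willing to accept
ssl_protocols TLSv1.1 TLSv1.2;
```

Remember to restart or reload the server after changing these settings, and to re-test the endpoint afterwards.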

If one must continue to use weaker protocols, then monitoring the use of these weaker protocols is critical.  IPS/IDS or other alerting technologies (reverse-proxies) can detect multiple requests for protocol downgrades and flag an attack attempt.  In addition, firewalls and other network access control devices can limit access to services that support weak protocols.
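One lightweight way to spot-check your own endpoints is to probe them with `openssl s_client` and flag handshakes that settle on SSLv3 or TLS v1.0. The helper below is a hypothetical sketch (the function name and host are illustrative, not from this article), built around the "Protocol : ..." line that openssl prints in its handshake summary:

```shell
#!/bin/sh
# Flag SSLv3/TLSv1.0 in an `openssl s_client` handshake summary.
# Reads the summary on stdin; succeeds if a weak protocol was negotiated.
weak_proto_detected() {
  grep -Eq '^ *Protocol *: *(SSLv3|TLSv1)$'
}

# Hypothetical usage against one of your own endpoints:
#   openssl s_client -connect www.example.com:443 -ssl3 </dev/null 2>/dev/null \
#     | weak_proto_detected && echo "WARNING: weak protocol accepted"
```

A server that has properly disabled the weak protocols will refuse the `-ssl3` handshake, so the check simply fails, which is the outcome you want.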

Migrating away from SSLv3/TLS v1.0 is not an easy task; if it were, new releases of the protocols would have promptly replaced the older versions.  Instead, migration is complex and difficult because of the ubiquitous nature of SSL: it is found in firewalls, routers, POS terminals, and applications that communicate across networks.  SSL dependency is far-reaching, as the protocol is also found in devices and applications that extend outside the payment industry.  Although updating to stronger protocols may be daunting and time-consuming, this migration is necessary to ensure compliance with the updated PCI standards and, ultimately, to build a stronger defense against unauthorized network intrusions.

What is the risk?

SSL/TLS encrypts a channel between two endpoints (for example, between a web browser and web server) to provide privacy and reliability of data transmitted over the communications channel. Since the release of SSL v3.0, several vulnerabilities have been identified, most recently in late 2014 when researchers published details on a security vulnerability (CVE-2014-3566) that may allow attackers to extract data from secure connections. More commonly referred to as POODLE (Padding Oracle On Downgraded Legacy Encryption), this vulnerability is a man-in-the-middle attack where it’s possible to decrypt an encrypted message secured by SSL v3.0.

The SSL protocol (all versions) cannot be fixed; there are no known methods to remediate vulnerabilities such as POODLE. SSL and early TLS no longer meet the security needs of entities implementing strong cryptography to protect payment data over public or untrusted communications channels. Additionally, modern web browsers will begin prohibiting SSL connections in the very near future, preventing users of these browsers from accessing web servers that have not migrated to a more modern protocol.

How does the presence of early TLS impact ASV scan results?

SSL v3.0 and early TLS contain a number of vulnerabilities, some of which currently result in a score of 4.3 on the CVSS (Common Vulnerability Scoring System). The CVSS is defined by NVD (National Vulnerability Database) and is the scoring system ASVs are required to use. Any Medium or High risk vulnerabilities (i.e. vulnerabilities with a CVSS of 4.0 or higher) must be corrected and the affected systems re-scanned after the corrections to show the issue has been addressed.

However, as there is no known way to remediate some of these vulnerabilities, the recommended mitigation is to migrate to a secure alternative as soon as possible. Entities that are unable to immediately migrate to a secure alternative should work with their ASV to document their particular scenario as follows:

  • Prior to June 30, 2016: Entities that have not completed their migration should provide the ASV with documented confirmation that they have implemented a Risk Mitigation and Migration Plan and are working to complete their migration by the required date. Receipt of this confirmation should be documented by the ASV as an exception under “Exceptions, False Positives, or Compensating Controls” in the ASV Scan Report Executive Summary.
  • After June 30, 2016: Entities that have not completely migrated away from SSL/early TLS will need to follow the Addressing Vulnerabilities with Compensating Controls process to verify the affected system is not susceptible to the particular vulnerabilities. For example, where SSL/early TLS is present but is not being used as a security control (e.g. is not being used to protect confidentiality of the communication).


