Wednesday, November 14, 2018

HashiCorp Vault KV Secrets Engine integration with a Spring Boot Application

Securing the secrets inside an application is not an easy task. Applications are typically deployed to multiple environments, and developers have to maintain separate credentials for each environment in configuration files. If there is no encryption mechanism (most of the time :( there isn't), usernames, passwords, secrets for token generation (API keys), and database connection credentials are stored as plain text. If there is a security breach, that sensitive data can be compromised and cost your business millions, simply because encryption was not in place.

To address this, there are various solutions available in the market. The most popular ones are AWS Secrets Manager, HashiCorp Vault, Google Cloud KMS, etc. Most of these services provide authorization to secret vaults, verification of key usage, encryption of data at rest, automated key rotation, and so on. Selecting a suitable service depends on the requirements of your organization and on the features of the service. If you are using AWS and your application is deployed in the cloud, AWS Secrets Manager is one of the best options, since the management overhead is minimal. But some companies with serious security concerns tend to use an on-premise solution, and HashiCorp Vault can be a suitable choice.

The scope of this post is to show how to configure and use the HashiCorp KV secrets engine and consume those secrets inside a Spring Boot application.

Configuring the HashiCorp Vault
1. Download the community version from [1]. https://www.vaultproject.io/downloads.html
2. Extract and set the path to the vault bin

export PATH=$PATH:/home/aruna/vault/bin

3. Start the vault with dev configuration

vault server --dev --dev-root-token-id="12345678"   # use a secure token in production

4. Now open another terminal and put some secrets into the vault. In KV secrets engine version 2, the write operation has changed to put.

export PATH=$PATH:/home/aruna/vault/bin
export VAULT_ADDR='http://127.0.0.1:8200'   # point the CLI at the dev server
vault kv put secret/my-secret username=spring-user password=se3ret

5. You can verify that the values were saved to the vault using the following curl command.

curl --header "X-Vault-Token: 12345678" http://127.0.0.1:8200/v1/secret/data/my-secret

If the request is successful, you should get the response below. Note that in KV version 2 the API path includes /data/ (v1/secret/data/my-secret).

{  
   "request_id":"b0a0f055-3eed-b3c1-353f-427de8f61bcd",
   "lease_id":"",
   "renewable":false,
   "lease_duration":0,
   "data":{  
      "data":{  
         "password":"se3ret",
         "username":"spring-user"
      },
      "metadata":{  
         "created_time":"2018-11-14T09:21:46.812937558Z",
         "deletion_time":"",
         "destroyed":false,
         "version":2
      }
   },
   "wrap_info":null,
   "warnings":null,
   "auth":null
}

More about the REST API can be found here.
[2]. https://www.vaultproject.io/api/secret/kv/kv-v2.html

Setting up the Spring Boot project to consume the secret stored above.

Add the following properties to your bootstrap.properties file. These values must be available to Spring Cloud Vault before the application context starts, which is why they go in bootstrap.properties rather than application.properties.

# application name; used as the default secret path (secret/my-secret)
spring.application.name=my-secret
# token value set for the dev server
spring.cloud.vault.token=12345678
spring.cloud.vault.scheme=http
spring.cloud.vault.kv.enabled=true
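
For these properties to take effect, the project also needs Spring Cloud Vault on the classpath. A typical Maven dependency would look like the snippet below; the version is usually managed by your Spring Cloud release train, so it is omitted here.

        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-vault-config</artifactId>
        </dependency>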

Then load the properties as follows.

@ConfigurationProperties
public class SecretConfiguration {

    private String username;
    private String password;

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }
}
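
To wire this class into the application, it has to be registered as a configuration properties bean. Below is a minimal sketch of how that could look; the application class name is an assumption, not taken from the sample project.

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;

// Minimal sketch (class name assumed): register the properties class
// and verify at startup that the secrets were injected from Vault.
@SpringBootApplication
@EnableConfigurationProperties(SecretConfiguration.class)
public class VaultSampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(VaultSampleApplication.class, args);
    }

    @Bean
    public CommandLineRunner printSecret(SecretConfiguration secrets) {
        // Print only the username; never log real passwords outside a demo.
        return args -> System.out.println("username from Vault: " + secrets.getUsername());
    }
}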



The full sample can be found here. [3]. https://github.com/arunasujith/hashi-corp-vault-sample
That's it for this article; hope to see you in another exciting post.

Tuesday, October 23, 2018

My path to AWS Solutions Architect - Associate

As a part of the Pearson Internal Employees’ Learning and Certification Program, I was given the opportunity to take the exam in 2018 Q2. But due to release schedules and other work, I was unable to complete it within Q2, so I was determined to complete it in Q3 2018.
So in this post, I’m going to explain my experience with the exam and the steps I followed.

Things I followed to get certified:
  1. Created a personal account at https://console.aws.amazon.com; you need a credit/debit card to use the free-tier resources.
  2. Purchased the https://www.udemy.com/aws-certified-solutions-architect-associate/ course from Udemy. It cost around $10 at that time.
  3. Read the official book from Amazon: https://www.safaribooksonline.com/library/view/aws-certified-solutions/9781119138556/
  4. Booked the exam (costs around $150) at https://www.aws.training/
At the beginning I had no idea about the scope of the exam other than [5]. The older exam consisted of 130 questions, and the February 2018 update brought it down to 65 questions. I had to take the latest one, but there were fewer resources on the new version, even in the official exam guide [6].

The Udemy course taught by Ryan Kroonenburg was recommended to me. I purchased the course and followed every lesson of it, and I did the practical sessions using the AWS console, so I got a better feel for the practical exercises. Then I did the exercises again alone to verify that I could do them by myself. For sections like VPC, I tried out several complex scenarios with security groups and NACLs, so I became confident.

The course was good and covered a lot, but I don't think it is enough for the exam by itself. At the end of each section I read the FAQ in the official Amazon documentation. When doing the mock exercises in the course, I was worried that they contained questions asking for exact numbers for certain AWS services, but in the real exam I did not encounter such questions.

But you do need a clear understanding of how the services compare in terms of scalability, availability, and cost.

For example, when talking about AWS EBS storage classes, you need to understand the different use cases for the EBS volume types: General Purpose SSD, Provisioned IOPS SSD, Throughput Optimized HDD, and Cold HDD. See the comparison graph at [7].

[7]. https://aws.amazon.com/ebs/features/

Almost all the questions are scenario based, and you have to select the correct answer. Maybe they are looking for the cost aspect in the answer, or maybe the performance aspect. My personal opinion is that you need to visualize those different classes with rough numbers in mind.

And for some questions you have to compare across services, e.g. S3, EBS, and EFS.



After I felt confident enough, I booked the exam for 12th October 2018. Out of the 65 questions, I flagged around 10, which means I was confident about 55 questions, which put me above the minimum score. But trust me, it was not as easy as I thought it would be. However, I was able to score well and got certified.

My final word for exam takers: don't take it easy; practice, and learn to compare the services so you can propose the best solution in terms of cost and performance.


Wish you guys all the best for the exam :)

Friday, August 10, 2018

Drools - How we overcame drastic condition evaluation


One year ago, we started a project called keystone, a rules evaluation engine based on Spring Boot. The high-level architecture is as follows [1]. The engine exposes several REST endpoints to evaluate some business rules. When a request hits the engine, several parallel calls hit the described endpoints based on the input parameters (we use RxJava to handle the async calls and zip the results, as in the sketch below). We then have various IF-ELSE blocks to evaluate the rules, and the results are sent back to the client after the rule evaluation.
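
As an illustration only, here is a rough sketch of that parallel fan-out with RxJava 2; fetchUserProfile(), fetchAccountStatus(), RuleInput, and evaluateRules() are hypothetical names, not our actual code.

// Illustration only: fan out two REST calls in parallel and zip the results.
// fetchUserProfile(), fetchAccountStatus(), RuleInput and evaluateRules() are hypothetical.
Single<RuleInput> ruleInput = Single.zip(
        fetchUserProfile().subscribeOn(Schedulers.io()),
        fetchAccountStatus().subscribeOn(Schedulers.io()),
        (profile, status) -> new RuleInput(profile, status));

ruleInput.subscribe(input -> evaluateRules(input));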


At the beginning, the rules were quite simple and everyone was happy with the architecture and the evaluation of the rules. There was a manageable number of rules with simple if-else blocks, and changes to existing rules were quite rare at that time.

But as time passed, many requests came in from partner teams, and the rules engine team was asked to implement more logical evaluations, so new REST endpoints were introduced. The problem became more complex, and it was hard to manage the rules in our code as well as to present them. When business users asked what happens if they use REST endpoint X, we had no way to easily explain all the conditions and evaluation paths in a simple manner.

Then Drools came into the picture to address this problem. We evaluated Drools and did a POC for both the DRL file and the decision table approaches. The code became much simpler and leaner, since the whole evaluation tree was derived from the decision table. We then presented both the DRL file and the decision table to the business people, and they really admired the decision table approach, since it was much easier to present to other partner teams.
See below for an example of a decision table that is being used. It contains 10 decision points before the evaluation.



Let’s look at a sample that uses a decision table to evaluate some rules.


Sample use case.
We are going to evaluate the loan rate given by ABC bank, depending on whether the customer is a GOVERNMENT or a PRIVATE worker and whether the customer is currently retired or not. The decision table for the above scenario is as follows.



Decision table for the above use case.
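
In case the decision table image does not render here, the sketch below shows the shape such a table could take; the rate values are hypothetical placeholders, not the actual figures from the original table.

isGovermentWorker | isRetired | result (hypothetical rates)
------------------+-----------+----------------------------
true              | false     | "loan rate 8%"
true              | true      | "loan rate 6%"
false             | false     | "loan rate 12%"
false             | true      | "loan rate 10%"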


Maven dependencies.

        <dependency>
            <groupId>org.drools</groupId>
            <artifactId>drools-core</artifactId>
            <version>7.0.0.Final</version>
        </dependency>
        <dependency>
            <groupId>org.kie</groupId>
            <artifactId>kie-spring</artifactId>
            <version>7.0.0.Final</version>
        </dependency>
        <dependency>
            <groupId>org.drools</groupId>
            <artifactId>drools-decisiontables</artifactId>
            <version>7.0.0.Final</version>
        </dependency>
Load the Configurations
public KieContainer getKieContainer() {

    KieServices kieServices = KieServices.Factory.get();
    KieFileSystem kieFileSystem = kieServices.newKieFileSystem();
    // drlFile points to the rule resource (the .drl file or the decision table spreadsheet)
    kieFileSystem.write(ResourceFactory.newFileResource(drlFile));
    KieBuilder kieBuilder = kieServices.newKieBuilder(kieFileSystem);
    kieBuilder.buildAll();
    KieModule kieModule = kieBuilder.getKieModule();

    KieContainer kieContainer = kieServices.newKieContainer(kieModule.getReleaseId());

    return kieContainer;
}


We use the ExecutionBase class to hold the facts and the conditions. The fact is, of course, the Customer object, and isGovermentWorker() and isRetired() are the conditions.
public class ExecutionBase {

    private Customer customer;

    public Customer getCustomer() {
        return customer;
    }

    public void setCustomer(Customer customer) {
        this.customer = customer;
    }

    public boolean isGovermentWorker() {
        return this.customer.getWorkType().equals(WorkType.GOVERNEMNT);
    }

    public boolean isRetired() {
        return this.customer.getAge() > 60;
    }

    public void execute(String result) {
        System.out.println(result);
    }
}
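
For completeness, here is a minimal sketch of how the rules could be fired against the container built earlier; the Customer setters are assumed from the sample model above.

// Minimal sketch: insert the fact holder and fire the decision table rules.
// Customer#setWorkType and Customer#setAge are assumed from the sample model.
Customer customer = new Customer();
customer.setWorkType(WorkType.GOVERNEMNT);
customer.setAge(65);   // makes isRetired() evaluate to true

ExecutionBase executionBase = new ExecutionBase();
executionBase.setCustomer(customer);

KieSession kieSession = getKieContainer().newKieSession();
kieSession.insert(executionBase);
kieSession.fireAllRules();   // matching rows call executionBase.execute(result)
kieSession.dispose();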


After the execution, we get the entitled loan rate for the bank customer. Try out the sample code from the link.



To summarize the post: we discussed how to leverage Drools decision tables when there is a drastic amount of condition evaluation in your program and you want to change those conditions without touching the code. Another advantage is that a decision table can be used as a tool to describe your execution flow to non-technical people. That’s it for this post, and hope to see you in another exciting post.