Monday, September 2, 2019

How to use Git to connect to GitHub and GitLab on the same machine

Well, easy problem to solve.

Just edit (or create) the SSH config file ~/.ssh/config and add the two providers,

Host github
HostName github.com 
IdentityFile ~/.ssh/github

Host gitlab
HostName gitlab.com
IdentityFile ~/.ssh/gitlab
adjusting the IdentityFile entries to point at the SSH key you use for each provider.

Go to the GitHub settings area and the GitLab settings area and add your PUBLIC key (normally something like id_rsa.pub; with the names above it would be github.pub and gitlab.pub) to the list of keys.
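
With this setup you use the Host aliases instead of the real domains. For example, cloning a hypothetical repository called myrepo would look like:

$ git clone git@github:yourusername/myrepo.git
$ git clone git@gitlab:yourusername/myrepo.git

You can also test each connection with ssh -T git@github and ssh -T git@gitlab.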

That's all, now you can work with both platforms.


Saturday, August 31, 2019

Setting up PostgreSQL DB with Java Spring Boot and Gradle

Let's start by checking the dependencies; for this example I am using Spring Boot 2.1.7.

Go to your build.gradle file and, in the dependencies section, be sure that you have the following:

compile 'org.springframework.boot:spring-boot-starter-jdbc'
compile 'org.postgresql:postgresql:42.2.2'
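
If you want to double-check that the driver ends up on the classpath, you can list the resolved dependencies (assuming the Gradle wrapper is present):

$ ./gradlew dependencies --configuration compileClasspath | grep postgresql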

If you still don't have an application.yml file (src/main/resources/config/application.yml), create it or edit it, adding the following lines:

spring:
  application:
    name: appname
  profiles:
    active: dev
  datasource:
    driver-class-name: org.postgresql.Driver
    url: ${ENV_DATASOURCE_URL}
    username: ${ENV_DB_APP_USER}
    password: ${ENV_DB_APP_PASSWORD}

server:
  port: 8080
  servlet.context-path: /appname

logging.file: logs/${spring.application.name}.log

and create an environment variables file that stays out of version control, for example a .env file in your project root folder. Don't forget to add the file to .gitignore. Note that application.yml also reads ENV_DATASOURCE_URL, so define the full JDBC URL here as well (a plain export does not expand the other variables). For example:

ENV_DB_HOST=development
ENV_DB_PORT=5432
ENV_DB_SCHEMA=schema
ENV_DB_APP_USER=username
ENV_DB_APP_PASSWORD=password
ENV_DATASOURCE_URL=jdbc:postgresql://development:5432/schema

and run

$ export $(cat .env | xargs)
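
You can quickly verify that the variables were loaded before starting the application:

$ echo $ENV_DB_APP_USER
username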

Now create a schema.sql file in src/main/resources with the schema that you want to create. For example:

CREATE TABLE clients
(
  id                    bigserial primary key,
  uuid                  varchar(255)             not null unique,
  name                  text                     not null,
  version               varchar(64)              null,
  template              json                     null,
  create_time           timestamp with time zone not null,
  update_time           timestamp with time zone not null
);

And... that's all! You are ready to start working with your PostgreSQL database.
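
If you want to inspect the table from outside the application, you can use psql with the example values defined above:

$ psql -h development -p 5432 -U username -d schema -c '\d clients'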

You can include in your application.properties file (src/main/resources/application.properties) the following lines

spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
#spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.hibernate.ddl-auto=none

The first line removes a possible error when launching your project. The second (commented) line drops and re-creates the database schema on every run; you should use it only once and never in production. The third line deactivates this functionality, leaving the initialization to the schema.sql file that you defined previously. I recommend using update if you want Hibernate to evolve the schema for you, or jumping directly into Flyway to version your migrations.






Wednesday, March 8, 2017

Setting up Strict Transport Security (HSTS) in NGINX under a Vagrant Box

Today I am going to show how to set up HTTP Strict Transport Security (HSTS) in NGINX to slightly improve the security of your webapp.

HTTPS (HTTP encrypted with SSL or TLS) makes it very difficult for an attacker to intercept, modify, or fake traffic between a user and the website. HSTS seeks to close the remaining gap by instructing the browser that a domain can only be accessed using HTTPS. Even if the user enters a plain HTTP link, the browser will strictly upgrade the connection to HTTPS.

Let's learn by doing: we will set up HSTS in a standard Vagrant box running NGINX.

First, let's create a self-signed SSL certificate for the HTTPS connection to the Vagrant box (by default only an HTTP connection is available).

1) Create the certificates directory

$ sudo mkdir YOUR_CONFIG_FOLDER/certs

2) Create the private key and the CSR:

$ cd YOUR_CONFIG_FOLDER/certs/

# Generate a new private key
$ sudo openssl genrsa -out "devcert.key" 2048

# Generate a CSR using the private key. Make sure to enter the server name (Common Name) in this step.
$ sudo openssl req -new -key "devcert.key" -out "devcert.csr"

3) Sign and generate the certificate devcert.crt

$ sudo openssl x509 -req -days 365 -in "devcert.csr" -signkey "devcert.key" -out "devcert.crt"
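
You can inspect the resulting certificate to confirm the subject and the validity dates:

$ sudo openssl x509 -in devcert.crt -noout -subject -dates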


Now set up the HSTS:

1) Edit the site you want to configure in the sites-available folder

$ sudo vi /etc/nginx/sites-available/YOUR_SITE

2) Add the lines

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

ssl_certificate     YOUR_CONFIG_FOLDER/certs/devcert.crt;
ssl_certificate_key YOUR_CONFIG_FOLDER/certs/devcert.key;

after the listen or server_name directives. It will look like this:

server {
    listen 443 ssl;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

    ssl_certificate     YOUR_CONFIG_FOLDER/certs/devcert.crt;
    ssl_certificate_key YOUR_CONFIG_FOLDER/certs/devcert.key;

    [...]
}

3) Save the changes and restart the nginx service

$ sudo service nginx restart

That's all. Now you should also see the HSTS header in the server response:

Strict-Transport-Security:max-age=31536000; includeSubDomains
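
You can also check it from the command line with curl (the -k flag skips validation of our self-signed certificate):

$ curl -sk -D - https://YOUR_SERVER/ -o /dev/null | grep -i strict-transport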




Thursday, September 29, 2016

Setting up PHP-Kafka under Ubuntu using the librdkafka wrapper phpkafka from EVODelavega


We will install Kafka support for PHP based on these libraries:

https://github.com/EVODelavega/phpkafka
https://github.com/edenhill/librdkafka

Setting up
==========
Fast way (Only Ubuntu)
--------

$ sudo apt-get install librdkafka1

Dependencies
------------
First let's install some dependencies (needed when installing manually):

$ sudo apt-get install libsasl2-dev liblz4-dev

Installing librdkafka from source (other OSes)
----------------------------------------------

$ git clone https://github.com/edenhill/librdkafka/
$ cd librdkafka
$ ./configure
$ make
$ make test
$ sudo make install

Installing phpkafka Extension
-----------------------------

$ git clone https://github.com/EVODelavega/phpkafka.git
$ cd phpkafka
$ phpize
$ ./configure --enable-kafka
$ sudo make install

If you want to check that the kafka extension is in place:

$ php -m | grep kafka
kafka


Final PHP configuration
-----------------------
1) Create a PHP file containing only the call to phpinfo()

2) Run the script from your browser and check the path of the config ini files that your system is using. For example, my additional INI files are in /etc/php5/fpm/conf.d.

3) Tell the dynamic linker where your libraries live (/usr/local/lib) by creating the file /etc/ld.so.conf.d/librd.conf. As superuser:

$ echo "/usr/local/lib" >> /etc/ld.so.conf.d/librd.conf

4) Create the file 20-kafka.ini in your additional INI directory:

$ sudo vi /etc/php5/fpm/conf.d/20-kafka.ini

and insert the line:

extension=kafka.so

5) Run the command

$ sudo ldconfig

6) Check that everything is fine and the library can be found:

$ ldconfig -p | grep kafka

7) Restart the services

$ sudo /etc/init.d/php5-fpm restart


Setting up Kafka with Docker for testing
========================================

1) Download and install the containers (Check https://hub.docker.com/r/confluent/platform/)

$ docker-machine start
$ eval $(docker-machine env)
$ docker pull confluent/platform

2) Run the hub

# Start Zookeeper and expose port 2181 for use by the host machine
docker run -d --name zookeeper -p 2181:2181 confluent/zookeeper

# Start Kafka and expose port 9092 for use by the host machine
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper confluent/kafka

# Start Schema Registry and expose port 8081 for use by the host machine
docker run -d --name schema-registry -p 8081:8081 --link zookeeper:zookeeper \
--link kafka:kafka confluent/schema-registry

# Start REST Proxy and expose port 8082 for use by the host machine
docker run -d --name rest-proxy -p 8082:8082 --link zookeeper:zookeeper \
--link kafka:kafka --link schema-registry:schema-registry confluent/rest-proxy

3) Everything now is ready to continue
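
The PHP scripts in the next section connect to 192.168.99.100:9092, the typical IP of the Docker machine; you can confirm yours (assuming your machine is called default) with:

$ docker-machine ip default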


Working with kafka
==================

Create the files producer.php and consumer.php as shown below. For testing, open two terminals and run the consumer.php script first. In the other terminal, run the producer.php script as many times as you want. You should see how the messages from the producer are picked up by the consumer.

PHP Producer (producer.php)
------------
<?php

$avro_schema = [
    "namespace" => "yournamespace",
    "type" => "record",
    "name" => "machineLog",
    "doc" => "That is the documentation",
    "fields" => [
        ["name" => "host", "type" => "string"],
        ["name" => "log", "type" => "string"],
    ]
];

$records = [
    ["host" => "hostname1", "log" => "cpu 67590 0 28263 13941723 602 7 1161 0 0 0"],
    ["host" => "hostname2", "log" => "cpu0 67591 0 28266 13944700 602 7 1161 0 0 0"]
];

$msg = json_encode([
    "value_schema" => $avro_schema,
    "records" => $records
]);

//var_dump(json_encode($msg));
$kafka = new Kafka("192.168.99.100:9092");
try {
    $kafka->produce("jsontest", $msg);
} catch (Exception $e) {
    echo $e->getMessage() . PHP_EOL;
}

$kafka->disconnect(Kafka::MODE_PRODUCER);



PHP Consumer (consumer.php)
------------
<?php

$kafka = new Kafka("192.168.99.100:9092");
$partitions = $kafka->getPartitionsForTopic('jsontest');
$kafka->setPartition($partitions[0]);
$offset = 1;
$size = 1;

while (1) {
    try {
        $messages = $kafka->consume("jsontest", $offset, $size);
        if (count($messages) > 0) {
            foreach ($messages as $message) {
                echo $message . PHP_EOL;
                $offset += 1;
            }
        }
    } catch (Exception $e) {
        echo $e->getMessage() . PHP_EOL;
        break;
    }
}

$kafka->disconnect();


That's all! Enjoy Kafka!

Sunday, September 4, 2016

How to run Laravel 5.3 in a 1and1 shared hosting


1.- Go to your Control Panel and set the default PHP version to 5.6 for your current domain.

2.- Artisan script

Open the artisan file and replace the first line with:

#!/usr/local/bin/php5.5

which corresponds to the 1and1 php5.5 path. Now you can run artisan using ./artisan.
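
You can verify that it runs by checking the version:

./artisan --version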

3.- composer.json

Open the composer.json file and replace all the php references with php5.5. For example:

[...]

"scripts": {
        "post-root-package-install": [
            "/usr/local/bin/php5.5 -r \"copy('.env.example', '.env');\""
        ],
        "post-create-project-cmd": [
            "/usr/local/bin/php5.5 artisan key:generate"
        ],
        "post-install-cmd": [
            "Illuminate\\Foundation\\ComposerScripts::postInstall",
            "/usr/local/bin/php5.5 artisan optimize"
        ],
        "post-update-cmd": [
            "Illuminate\\Foundation\\ComposerScripts::postUpdate",
            "/usr/local/bin/php5.5 artisan optimize"
        ]
    },

[...]


Now you can update your dependencies. Run the command:

curl -sS https://getcomposer.org/installer | php5.5

to get your composer.phar script. Now you can update as follows:

php5.5 composer.phar update


4.- .htaccess

Go to your public folder and replace the default setup with the following:




5.- .env

Of course, you have to configure your database credentials and the rest of the settings by editing the .env file.

Happy Laravel

Wednesday, August 3, 2016

How to download a file from S3 with Laravel

Today let's explain how to download a file from S3 using Laravel. Next time I will explain how to upload the file.

1) Set up the bucket name. Ex. YOUR_BUCKET_NAME

2) Set up the bucket policy

{
  "Id": "PolicyXXXXXX",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "XXXXXX",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::testfcce/*",
      "Principal": "*"
    }
  ]
}

You can use the AWS policy generator for that:
http://awspolicygen.s3.amazonaws.com/policygen.html

3) Generate credentials with access to the bucket (if you don't have them already)

4) Change the config/filesystems.php to

        's3' => [
            'driver' => 's3',
            'key'    => env('S3_KEY'),
            'secret' => env('S3_SECRET'),
            'region' => env('S3_REGION'),
            'bucket' => env('S3_BUCKET'),
        ],

5) Add the keys to the .env file

S3_USER=YOUR_USER_NAME
S3_KEY=YOUR_KEY
S3_SECRET=YOUR_SECRET
S3_REGION=eu-central-1
S3_BUCKET=YOUR_BUCKET_NAME

6) Example of how to check whether a file called image001.jpg is in the bucket and, if so, get its URL:

$s3 = Storage::disk('s3');

if ($s3->exists('image001.jpg'))
{
    $bucket = 'YOUR_BUCKET_NAME';

    return $s3->getDriver()->getAdapter()->getClient()->getObjectUrl($bucket, 'image001.jpg');
}
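
Since the bucket policy above allows public s3:GetObject, you can also sanity-check the returned URL with curl (assuming the eu-central-1 region from this example):

$ curl -I https://YOUR_BUCKET_NAME.s3.eu-central-1.amazonaws.com/image001.jpg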


Reference URLs:

https://return-true.com/uploading-directly-to-amazon-s3-from-your-laravel-5-application/
http://www.server180.com/2016/01/upload-files-to-s3-using-laravel-52.html
https://chrisblackwell.me/upload-files-to-aws-s3-using-laravel/

Best

Tuesday, March 1, 2016

Setting up Capistrano for Laravel 5 in Ubuntu

Install Capistrano in Ubuntu


gem install capistrano

SSH access

Capistrano deploys using SSH. Thus, you must be able to SSH from the deployment system to the destination system for Capistrano to work. You can test the SSH connection using an SSH client, e.g.

ssh your_username@destinationserver

If you cannot connect, you need two things:
1) An account on the target machine
2) Your public key added to the ~/.ssh/authorized_keys file on that machine.

So, SSH into the staging machine and add your key to username’s ~/.ssh/authorized_keys. You should now be able to pull from git on the server without it asking you for identification.

Server privileges

In order to deploy, the user needs some special privileges.

First add a group named deploy to the server:

sudo groupadd deploy

We now need to add ourselves, any other developers who need access, and the user that runs our web server to the deploy group. Open the /etc/group file and look for the line that begins with deploy, then append the users, comma delimited. In this example I am using an nginx server.

deploy:x:1003:your_username,nginx

Now we have a deploy group, whose members include us, any other developers who will be deploying and the web server user.

Add the deploy group to sudoers. For that, create the file /etc/sudoers.d/01deploy to enable members of the deploy group to access root via sudo without passwords:

%deploy ALL = (ALL) NOPASSWD: ALL

and set 440 permissions on the file

chmod 440 /etc/sudoers.d/01deploy

Project structure

Now define the new structure for your project

new-project
 |-- components
 |-- deploy

All of them will have 775 permissions and the setgid flag (g+s), which makes all the subdirectories keep the same group as the project directory:

sudo mkdir -p new-project/components
sudo mkdir -p new-project/deploy
sudo chmod -R 775 new-project
sudo chmod -R g+s new-project
sudo chown -R your_username new-project
sudo chgrp -R deploy new-project
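
You can verify the result with ls; the directories should show mode drwxrwsr-x (the s is the setgid flag) and group deploy:

$ ls -ld new-project new-project/components new-project/deploy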

On the local machine

Go to the Laravel root project's folder and run

cap install

The command creates the following structure:

mkdir -p config/deploy 
create config/deploy.rb 
create config/deploy/staging.rb 
create config/deploy/production.rb 
mkdir -p lib/capistrano/tasks 
create Capfile
Capified

where staging.rb and deploy.rb are the files that we are going to work with.
We have to create an additional file to store our credentials:

vi config/myconfig.rb

set :ssh_options, { user: 'your_username' }
set :tmp_dir, '/home/your_username/tmp'



Adding Task to Capistrano

Vendor files and config files

Edit the config file deploy.rb

vi config/deploy.rb

and add the next lines after the lock line.

set :application, 'new-project'
set :repo_url, 'git@github.com:githubusername/project.git'

set :deploy_to, '/var/www/new-project/deploy'

components_dir = '/var/www/new-project/components'
set :components_dir, components_dir

# Devops commands
namespace :ops do

  desc 'Copy non-git files to servers.'
  task :put_components do
    on roles(:app), in: :sequence, wait: 1 do
      system("tar czf .build/vendor.tar.gz ./vendor")
      upload! '.build/vendor.tar.gz', "#{components_dir}", :recursive => true
      execute "cd #{components_dir}
      tar -zxf /var/www/new-project/components/vendor.tar.gz"
    end
  end

end


Now configure the staging.rb config file

vi config/deploy/staging.rb

adding the next lines:

role :app, %w{your_username@destinationserver}

require './config/myconfig.rb'

namespace :deploy do

  desc 'Get stuff ready prior to symlinking'
  task :compile_assets do
    on roles(:app), in: :sequence, wait: 1 do
      execute "cp #{deploy_to}/../components/.env.staging.php #{release_path}"
      execute "cp -r #{deploy_to}/../components/vendor #{release_path}"
    end
  end

  after :updated, :compile_assets

end


# Devops commands
namespace :ops do

  desc 'Copy non-git ENV specific files to servers.'
  task :put_env_components do
    on roles(:app), in: :sequence, wait: 1 do
      upload! './.env.staging.php', "#{deploy_to}/../components/.env.staging.php"
    end
  end

end


Deploy

Let's check that everything works. You can list the available Capistrano tasks by running cap -T.

Now let's transfer the config and vendor files to the server:

cap staging ops:put_components 
cap staging ops:put_env_components
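
Before the first deploy, it is also worth running Capistrano's built-in check task, which verifies directories, permissions and repository access:

cap staging deploy:check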

Once the files are on the server, we can deploy

cap staging deploy


Possible errors

If you receive an error like this:

the deploy has failed with an error: Exception while executing as your_username@destinationserver: git exit status: 128 git 
stdout: Nothing written git stderr: Error reading response length from authentication socket. 
Permission denied (publickey). 
fatal: Could not read from remote repository.

that means you have to follow an additional step in your local terminal. Run

eval $(ssh-agent) 
ssh-add ~/.ssh/id_rsa

And try again, e.g.

cap staging deploy

That's all

Hope it helps