Setup ELK Stack in Linux

Posted in elk

Jenkins behind an Nginx Reverse Proxy

Setup Nginx:

  • yum install nginx
  • vim /etc/nginx/nginx.conf
    location / {
        sendfile off;
        proxy_pass http://localhost:8080;
        proxy_redirect default;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_max_temp_file_size 0;
        # This is the maximum upload size
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffering off;
        proxy_request_buffering off; # Required for HTTP CLI commands in Jenkins > 2.54
        proxy_set_header Connection ""; # Clear for keepalive
    }
  • semanage port -mt http_port_t -p tcp 8080
  • systemctl start nginx
  • Reference: https://wiki.jenkins.io/display/JENKINS/Running+Jenkins+behind+Nginx
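For context, the location block above lives inside an nginx server block; a minimal sketch of the full vhost (the server_name is an assumption for your environment):

```nginx
server {
    listen 80;
    server_name jenkins.example.com;   # assumed hostname

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```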

Setup Jenkins:

  1. Install:
  2. Setup jenkins user:
    • usermod -s /bin/bash jenkins
    • passwd jenkins
    • echo "jenkins ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/jenkins
    • ssh-keygen
    • ssh-copy-id
  3. Add firewall rule:
    • firewall-cmd --add-port=8080/tcp --permanent
    • firewall-cmd --reload
    • firewall-cmd --list-all
  4. Configure:
    • http://localhost                             (Jenkins running on: http://localhost:8080)
    • cat /var/lib/jenkins/secrets/initialAdminPassword
    • Install suggested plugins
    • admin/admin; Admin/admin@localhost
  5.  Files:
    • Install dir: /var/lib/jenkins/
    • Config: /etc/sysconfig/jenkins
    • Log: /var/log/jenkins/jenkins.log
  6. Jenkins CLI:
  7. Running Jenkins on Docker:
    • docker run --rm -u root -p 8080:8080 \
      -v jenkins-data:/var/jenkins_home \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v "$HOME":/home \
      jenkinsci/blueocean
    • docker run -p 8080:8080 jenkinsci/blueocean
    • docker run jenkins/jenkins:lts --version


Posted in Jenkins, LFCE, LFCS, Linux, Nginx

Install Sonarqube on Linux

  • Create sonar user and group:
    • groupadd sonar
    • useradd -d /opt/sonarqube -g sonar -s /bin/bash sonar
    • chown -R sonar:sonar /opt/sonarqube
  • Configure Java 8:
    1. wget http://download.oracle.com/otn-pub/java/jdk/8u191-b12/2787e4a523244c269598db4e85c51e0c/jdk-8u191-linux-x64.tar.gz -P /tmp
    2. tar -zxvf /tmp/jdk-8u191-linux-x64.tar.gz -C /opt/
    3. ln -s /opt/jdk1.8.0_191 /opt/jdk8        (symlink so JAVA_HOME below matches the extracted directory)
    4. vi /etc/profile.d/java.sh
      • export JAVA_HOME="/opt/jdk8"
      • chmod +x /etc/profile.d/java.sh
    5. vi ~/.bash_profile
      • PATH=$PATH:$HOME/.local/bin:$HOME/bin:/opt/jdk8/bin
      • source ~/.bash_profile
    6. java -version
  • Configure PostgreSQL 10 database:
    1. Install PostgreSQL:
    2. Setup sonar database:
      • su - postgres -c psql
      • CREATE USER sonar WITH ENCRYPTED PASSWORD 'sonar';
      • CREATE DATABASE sonar WITH ENCODING 'UTF8' OWNER sonar TEMPLATE=template0;
      • \q
  • Configure Sonarqube:
    1. Download sonarqube:
      • wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-7.4.zip -P /tmp/
      • unzip /tmp/sonarqube-7.4.zip -d /opt/ && mv /opt/sonarqube-7.4 /opt/sonarqube        (so the paths below match)
    2. vi /opt/sonarqube/conf/sonar.properties
      • sonar.jdbc.username=sonar        (matches the sonar database user created above)
      • sonar.jdbc.password=sonar
      • sonar.jdbc.url=jdbc:postgresql://localhost/sonar
    3. vi /opt/sonarqube/conf/wrapper.properties
      • wrapper.java.command=/opt/jdk8/bin/java
    4. Sonarqube systemd service:
      1. vi /etc/systemd/system/sonar.service
        [Unit]
        Description=Sonarqube service
        After=syslog.target network.target
        [Service]
        Type=forking
        ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
        ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
        User=sonar
        Group=sonar
        [Install]
        WantedBy=multi-user.target
      2. systemctl daemon-reload
      3. systemctl start sonar
      4. systemctl status sonar
      5. http://localhost:9000
      6. Login with admin/admin
  • Run Sonarqube and PostgreSQL in Docker:
    • #!/bin/bash
      # Create docker network mynet
      docker network create mynet

      # Run PostgreSQL container
      docker run --name postgres -e POSTGRES_USER=sonar -e POSTGRES_PASSWORD=sonar -d -p 5432:5432 --net mynet postgres

      # Run SonarQube container
      docker run --name sonarqube -p 9000:9000 -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar -e SONARQUBE_JDBC_URL=jdbc:postgresql://postgres:5432/sonar -d --net mynet sonarqube
    • https://gist.github.com/ceduliocezar/b3bf93125024482b5f2f479696842046
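The same two containers can also be expressed as a Compose file; a sketch assuming the stock postgres and sonarqube images (the service names and the mynet network mirror the script above):

```yaml
version: "3"
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
    ports:
      - "5432:5432"
    networks:
      - mynet
  sonarqube:
    image: sonarqube
    environment:
      SONARQUBE_JDBC_USERNAME: sonar
      SONARQUBE_JDBC_PASSWORD: sonar
      SONARQUBE_JDBC_URL: jdbc:postgresql://postgres:5432/sonar
    ports:
      - "9000:9000"
    networks:
      - mynet
networks:
  mynet:
```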
  • SonarScanner / SonarRunner:
    • ${SONAR_RUNNER_HOME}/bin/sonar-runner \
      -Dsonar.projectKey=com.mycompany.app:my-app \
      -Dsonar.sources=. \
      -Dsonar.java.binaries=target \
      -Dsonar.host.url=http://localhost:9000 \
      -Dsonar.login=eed2bd6caff9888996212edae388e1f387f82c32
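Instead of passing every -D flag on the command line, the scanner can read a sonar-project.properties file from the project root; a sketch mirroring the values in the invocation above (the login token is usually still passed on the command line rather than committed to the file):

```properties
# sonar-project.properties (read by sonar-runner/sonar-scanner from the project root)
sonar.projectKey=com.mycompany.app:my-app
sonar.sources=.
sonar.java.binaries=target
sonar.host.url=http://localhost:9000
```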
  • sudo -H -u sonar sh -c "bin/linux-x86-64/sonar.sh '$1'"        (run as the sonar user created above)
  • References:
Posted in sonarqube

Basic git commands

Install and setup:

  • Required:
    • yum install git && git --version
    • git config [--global] user.name "user"
    • git config [--global] user.email "user@localhost"
  • General:
    • git config --global credential.helper "cache --timeout=28800"
    • git config --global core.excludesfile /etc/gitignore
    • git config --global http.postBuffer 524288000
    • git config --global credential.helper store
    • git config --system core.editor "/usr/bin/vim"
    • git config --list [--global|--system]
  • Windows:
    • git config --system core.longpaths true
    • git config --global core.autocrlf true

Clone Github repository to local directory:
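A minimal clone sketch; a throwaway local repository stands in for the GitHub URL, and the /tmp paths are assumptions for illustration:

```shell
# Create a throwaway "remote" repository to clone from
rm -rf /tmp/demo-remote /tmp/demo-clone
git init -q /tmp/demo-remote
echo "hello" > /tmp/demo-remote/README.md
git -C /tmp/demo-remote add README.md
git -C /tmp/demo-remote -c user.name=demo -c user.email=demo@example.com commit -q -m "initial commit"

# Clone into a local directory; with a real remote this would be:
#   git clone https://github.com/username/repo.git [target-dir]
git clone -q /tmp/demo-remote /tmp/demo-clone
git -C /tmp/demo-clone log --oneline
```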

Add local repository to Github:

  • cd repo      (assuming the repo directory already exists with src files)
  • git init
  • git add . [-A]
  • git commit -m "My first commit"
  • git remote add origin http://github.com/username/repo.git
  • git push -u origin master
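The steps above can be exercised end-to-end without a GitHub account; a sketch where a local bare repository plays the role of origin (all /tmp paths are assumptions):

```shell
# A local bare repo stands in for the GitHub remote
rm -rf /tmp/demo-origin.git /tmp/demo-repo
git init -q --bare /tmp/demo-origin.git

# An existing project directory with source files
mkdir -p /tmp/demo-repo
echo 'int main(void){return 0;}' > /tmp/demo-repo/main.c

cd /tmp/demo-repo
git init -q
git add .
git -c user.name=demo -c user.email=demo@example.com commit -q -m "My first commit"
git branch -M master                          # ensure the branch is named master
git remote add origin /tmp/demo-origin.git    # a GitHub URL would go here
git push -q -u origin master
```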

Add/Remove/Commit:

  • git add . && git commit -m "Initial commit of files into Repo"
  • git rm test2.txt && git commit -m "Removed test2.txt"

Checkout/Push/Merge:

  • git checkout -b dev && git branch
  • git checkout -- test2.txt
  • git log [--oneline | --grep="pattern" | --author="username" | --graph --decorate | -p]
  • git checkout master && git merge qa
  • git push -u origin master
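The checkout/merge flow above, condensed into a runnable sketch (the /tmp path is an assumption, and a dev branch stands in for qa):

```shell
rm -rf /tmp/demo-merge
git init -q /tmp/demo-merge
cd /tmp/demo-merge
git config user.name demo
git config user.email demo@example.com

echo "v1" > app.txt
git add . && git commit -q -m "base"
git branch -M master

git checkout -q -b dev        # create and switch to a dev branch
echo "v2" > app.txt
git commit -q -am "dev change"

git checkout -q master        # back to master...
git merge -q dev              # ...and merge dev in (fast-forward)
cat app.txt                   # → v2
```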
Posted in git, LFCE, LFCS, Linux

Setup Cntlm proxy in CentOS

  • Download and Install:
    1. curl -o /tmp/cntlm.rpm https://sourceforge.net/projects/cntlm/files/cntlm/cntlm%200.92.3/cntlm-0.92.3-1.x86_64.rpm
    2. sudo rpm -ivh /tmp/cntlm.rpm
  • Configure:
    1. cntlm -H -d domain1 -u user1
    2. sudo vi /etc/cntlm.conf
      Username user1
      Domain domain1
      PassNTLMv2   11112345325gsdg4535435    (use the hash printed by step 1)
      Proxy   www.myproxy.com:8080
      Listen 127.0.0.1:3128
      Listen 192.168.1.6:3128
    3. sudo cntlm -M http://google.com                          (Test it!)
    4. CentOS:
      1. vi  /etc/profile.d/proxy.sh              (~/.bash_profile)
        export http_proxy=http://localhost:3128
        export https_proxy=${http_proxy}
      2. source /etc/profile.d/proxy.sh
      3. vi /etc/yum.conf
        proxy=http://localhost:3128
    5. Ubuntu:
      1. vi /etc/profile.d/proxy.sh     (or ~/.bashrc)
        export http_proxy=http://localhost:3128
        export https_proxy=${http_proxy}
      2. source /etc/profile.d/proxy.sh
      3. vi /etc/apt/apt.conf
        Acquire::http::Proxy "http://localhost:3128";
        Acquire::https::Proxy "http://localhost:3128";
    6. Common Issues:
      1. CentOS7 /var/log/messages: cntlm[8976]: Error creating a new PID file:
        • sudo vi /usr/lib/tmpfiles.d/cntlm.conf
          d /run/cntlm 0755 cntlm cntlm
        • Reboot
      2. Windows 10: Couldn’t start Cntlm service:
        1. Open regedit.exe and go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\cntlm\Parameters.
        2. Then change the AppArgs key to -f -c "C:\Program Files (x86)\Cntlm\cntlm.ini"
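The proxy.sh profile script from the CentOS/Ubuntu steps can be sanity-checked by sourcing it; a sketch using /tmp instead of /etc/profile.d (the no_proxy line is an extra assumption, handy for keeping local traffic off the proxy):

```shell
# Recreate the profile script (normally /etc/profile.d/proxy.sh) in /tmp
cat > /tmp/proxy.sh <<'EOF'
export http_proxy=http://localhost:3128
export https_proxy=${http_proxy}
export no_proxy=localhost,127.0.0.1   # assumption: bypass the proxy for local traffic
EOF

. /tmp/proxy.sh
echo "$https_proxy"   # → http://localhost:3128
```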

References:

Posted in cntlm, LFCE, LFCS, Linux

Azure Databases

  • SQL Server:
    • SQL elastic pool: Elastic pools provide a simple and cost-effective solution for managing the performance of multiple databases within a fixed budget.
      • An elastic pool provides compute (eDTUs) and storage resources that are shared between all the databases it contains.
      • Databases within a pool use only the resources they need, when they need them, within configurable limits.
      • The price of a pool is based only on the amount of resources configured and is independent of the number of databases it contains.
    • Advanced Threat Protection: A unified security package for discovering and classifying sensitive data, surfacing and mitigating potential database vulnerabilities, and detecting anomalous activities that could indicate a threat to your database.
    • Basic: For less demanding workloads.
    • Standard: For workloads with typical performance requirements.
    • Premium: For IO-intensive workloads.
  • MySQL Server:
    • Basic: Up to 2 vCores with variable IO performance (1-2 vCores). Supports Backup Redundancy Option only Locally Redundant.
    • General Purpose: Up to 32 vCores with predictable IO performance (2-32 vCores). Supports Backup Redundancy Option both Locally Redundant and Geo-Redundant.
    • Memory Optimized: Up to 16 memory optimized vCores with predictable IO performance (2-16 vCores). Supports Backup Redundancy Option both Locally Redundant and Geo-Redundant.
    • Please note that changing to and from the Basic pricing tier or changing the backup redundancy options after server creation is not supported.
Posted in azure

Azure App Services

  • App Services:
    • Azure Web Apps enables you to build and host web applications in the programming language (.NET, .NET Core, Java, Ruby, Node.js, PHP, or Python) of your choice without managing infrastructure.
    • It offers auto-scaling and high availability, supports both Windows and Linux as well as Docker, and enables automated deployments from FTP, GitHub, Git repos, Visual Studio Team Services, and Bitbucket.
    • Web App name must be unique across all of Azure because the web app is given a URL that ends in .azurewebsites.net.
    • Always on: Indicates that your web app needs to be loaded at all times. By default, web apps are unloaded after they have been idle. It's recommended that you enable this option when you have continuous web jobs running on the web app.
    • ARR Affinity: You can improve the performance of your stateless apps by turning off the Affinity Cookie; stateful apps should keep the Affinity Cookie turned on for increased compatibility.
    • App Services Plan:
      • App Service plans represent the collection of physical resources used to host your apps, like location, scale, size and SKU.
      • The cost of an App Service plan is per instance, so if you increase the instance count in a selected plan your cost multiplies accordingly.
      • Premium:
        • V1( P1, P2, P3): 1/2/4 cores; 1.7/3.5/7 GB RAM; 250 GB storage; 20 instances; 20 slots; Traffic Manager
        • V2(P1, P2, P3): 1/2/4 cores (faster Dv2 series workers); 3.5/7/14 GB RAM; 250 GB SSD storage; 20 instances; 20 slots; Traffic Manager
      • Standard (S1, S2, S3):
        • 1/2/4 cores; 1.7/3.5/7 GB RAM; 50 GB storage; 10 instances;  5 slots; Traffic Manager
      • Basic (B1, B2, B3):
        • 1/2/4 cores; 1.7/3.5/7 GB RAM; 10 GB storage; 3 instances
      • Shared (D1): Shared infrastructure, 1 GB storage
      • Free: Shared infrastructure, 1 GB storage
      • Scale Up: Allows to scale up or down App Service Plan.
      • Scale Out: Allows to increase or decrease instance count.
        • Auto scaling: Can be enabled based upon CPU, Memory, Disk Queue, Http Queue, Data In/Out.
        • It will multiply the cost based upon number of instances increased.
    • Deployment Slots: Deployment slots let you deploy different versions of your web app to different URLs. You can test a certain version and then swap content and configuration between slots.
      • Allows you to test if the deployment works before all users are switched to that new version of the code. This is good last-minute testing to make sure nothing is broken.
      • Auto swap destinations can’t be configured from production slot.
    • Continuous Delivery: Continuous Delivery in Visual Studio Team Services simplifies setting up a robust deployment pipeline for your application. The pipeline builds, runs load tests and deploys to staging slot and then to production.
      • A post deployment action hook is a script/executable that runs after the deployment has completed successfully as part of the default deployment script.
    • Application Insights: helps you to detect and diagnose quality issues in your web apps and web services, and helps you understand what your users actually do with it.
    • Diagnostics logs: Azure Monitor diagnostic logs are logs emitted by an Azure service that provide rich, frequent data about the operation of that service. Azure Monitor makes available two types of diagnostic logs:
      • Application logging (Filesystem): Enable application logging to collect diagnostic traces from your web app code. You need to turn this on to enable the streaming log feature. This setting turns itself off after 12 hours.
      • Application logging (Blob): Logs are collected in the Blob container that’s specified under Storage settings.
      • Web server logging: Gather diagnostic information for your web server.
      • Detailed error messages: Gather detailed error messages from your web app.
      • Failed request tracing:
      • Tenant logs: these logs come from tenant-level services that exist outside of an Azure subscription, e.g. Azure Active Directory logs.
      • Resource logs: these logs come from Azure services that deploy resources within an Azure subscription, e.g. Network Security Groups or Storage Accounts.
      • You can export diagnostic logs into:
        • OMS Log Analytics: analyze them with Log analytics.
        • Event Hub: for ingestion by a third-party service or custom analytics solution such as PowerBI.
        • Storage account: for auditing or manual inspection.
      • Set-AzureRmDiagnosticSetting -ResourceId [Resource Id] -Enabled $true
        • -StorageAccountId [storage account id]
        • -ServiceBusRuleId [Service Bus rule id]
        • -WorkspaceId [resource id of the log analytics workspace]
          • (Get-AzureRmOperationalInsightsWorkspace).ResourceId
      • Activity log: provides insight into the operations that were performed on resources in your subscription using Resource Manager, for example, creating a virtual machine or deleting a logic app. The Activity Log is a subscription-level log.
    • SSL certificates:
      • Configure the custom domain
      • Scale up to Basic tier or higher
      • Get an SSL certificate
        • It's signed by a trusted CA (no private CA servers)
        • It contains a private key
        • It's created for key exchange and exported as a .PFX file
        • It uses a minimum of 2048-bit encryption
        • Its subject name matches the custom domain it needs to secure
        • It's merged with all the intermediate certificates used by your CA
      • SSL bindings:
        • Certificates must be associated with your app before you can use them to create a binding.
        • You can upload a certificate you purchased externally or import an App Service Certificate.
        • You may also select whether to use Server Name Indication (SNI) or IP-based SSL.
    • PowerShell cmdlets:
      • New-AzureRmResourceGroup -Name "rg1" -Location "East US"
      • New-AzureRmAppServicePlan -ResourceGroupName "rg1" -Location "East US" -Name "plan1" -Tier "Standard"
      • New-AzureRmWebApp -ResourceGroupName "rg1" -Location "East US" -Name "webapp1" -AppServicePlan "plan1"
      • New-AzureRmWebAppSlot -ResourceGroupName "rg1" -Name "webapp1" -Slot "Staging"
    • Azure CLI:
      • az group create --name rg1 --location "East US"
        • az group list -o table
      • az appservice plan create --resource-group rg1 --location "East US" --name "plan1" --sku FREE
        • az appservice plan list -o table
      • az webapp create --resource-group rg1 --plan plan1 --name webapp1
        • az webapp list -o table
Posted in azure