Connect MySQL Workbench to an AWS Instance

  • Download and install MySQL Workbench.
  • Open MySQL Workbench.
  • Select MySQL New Connection and enter a connection name.
  • Choose the Connection Method, and select Standard TCP/IP over SSH.
  • For SSH Hostname, enter the public IP address of your EC2 instance.
  • For SSH Username, enter the default SSH user name to connect to your EC2 instance.
  • Choose SSH Key File, and select the .pem file that you use to connect to the instance.
  • For MySQL Hostname, enter the database endpoint name.
  • To find the endpoint name:
    Log in to the AWS console.
    Click Services, and search for RDS (Relational Database Service).
    Click Databases.
    Under DB identifier, click the database in the cluster that you want to connect to with MySQL Workbench.
    The endpoint for that database is displayed under Connectivity & security.

  • For MySQL Server Port, enter the port number that you use to connect to your database.
  • The port is shown in the same Connectivity & security section as the endpoint.

  • For Username, enter the user name that you use to connect to your database.
  • For Password, enter the MySQL user password.
  • Choose Test Connection. After the test is successful, choose OK to save the connection.
    How To Extend an XFS Partition in AWS

    Log in to the EC2 console.
    Go down to Elastic Block Store.
    Select Volumes.
    Select the volume.
    Click Actions, and then Modify Volume.
    Enter the new size, and click Modify.

    At this point, you will need to extend your volumes and partitions.

    In this case, I was not using LVM, so it was a pretty simple process.

    Check to see which partition you want to extend:

    $ df -hT
    Filesystem     Type  Size  Used Avail Use% Mounted on
    /dev/xvda1     xfs   8.0G  7.7G  359M  96% /

    Determine the block device and the partition number:

    $ lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda 202:0 0 8G 0 disk
    └─xvda1 202:1 0 8G 0 part /

    Expand the partition to use all of the available unallocated storage on the disk:

    $ sudo growpart /dev/xvda 1

    Verify that the partition has been extended:

    $ lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda 202:0 0 30G 0 disk
    └─xvda1 202:1 0 30G 0 part /

    Extend the filesystem on the partition:

    $ sudo xfs_growfs -d /

    Verify:

    $ df -h
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/xvda1 xfs 30G 9.1G 21G 31% /
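
    The check that prompted this resize can be scripted. A minimal sketch, assuming a plain sh environment; the 90% threshold and the root mount point are my own choices, not part of the procedure above:

```shell
# Print the usage percentage of the filesystem holding a given path.
# df -P forces the portable single-line format, so the awk parsing is safe.
usage_pct() {
  df -P "$1" | awk 'NR == 2 { gsub(/%/, "", $5); print $5 }'
}

pct=$(usage_pct /)
if [ "$pct" -ge 90 ]; then
  echo "WARNING: / is ${pct}% full; consider growing the volume"
fi
```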

    MySQL/MariaDB logging to a table

    OS: CentOS 7
    There are a few ways to enable query logging in MariaDB. I did it by editing the server.cnf file.

    # cd /etc/my.cnf.d
    # vi server.cnf

    [mariadb]
    log_output=TABLE
    general_log

    Restart mariadb:

    # systemctl restart mariadb

    The output is written to the mysql.general_log table. Fortunately, I had used a unique user ID for each of my applications, which allowed me to query the table for the statements belonging to a particular application. For example, for an application using dbuser for database access:

    MariaDB [mysql]> SELECT * FROM general_log WHERE user_host LIKE '%dbuser%localhost%';

    You are going to want to truncate this log once in a while, because it will get very large depending on the activity.

    MariaDB [mysql]> TRUNCATE TABLE mysql.general_log;
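
    To keep the log from growing unbounded, the truncation can be scheduled. A sketch of a root crontab entry; the 3 a.m. schedule is arbitrary, and it assumes credentials in /root/.my.cnf so mysql can run non-interactively:

```shell
# Root crontab entry (crontab -e): truncate the general log daily at 3 a.m.
# m h dom mon dow  command
0 3 * * * mysql -e 'TRUNCATE TABLE mysql.general_log;'
```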

    Awk To Get The Last Element After a Delimiter

    I have to write these things, so I can find them when I inevitably need them again.

    This will give you the last element in a delimited string:

    awk -F ":" '{print $NF}'

    Here is an example:

    $ echo "element1:element2:element3:element4" | awk -F ":" '{print $NF}'
    element4
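
    The same $NF trick works with any delimiter. For example, with "/" as the delimiter it pulls the file name off the end of a path (the path here is just an illustration):

```shell
# $NF is the last field, so splitting on "/" yields the final path component
echo "/var/log/mysql/error.log" | awk -F "/" '{print $NF}'
# error.log
```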

    Ubuntu: Disable Automatic Updates

    UPDATE: What is outlined below did not work, so here is what I ended up doing:

    Change directory to:

    $ cd /etc/apt/apt.conf.d

    Edit 20auto-upgrades, and change the Update-Package-Lists setting from 1 to 0:

    $ sudo vi 20auto-upgrades
    APT::Periodic::Update-Package-Lists "0";
    APT::Periodic::Unattended-Upgrade "1";

    Now, with the list updates disabled, the service should never find anything to upgrade.

    Another option is to just get rid of the service:

    $ sudo apt remove unattended-upgrades

    Note: The following did not work in Ubuntu 18.x:

    I don't know if this is an Azure deployment issue or an Ubuntu decision, but I have found automatic updates turned on by default in a couple of VMs. I am not sure why you would ever want your production server updating itself automatically, so:

    Check the status of the unattended-upgrades service:

    $ sudo systemctl status unattended-upgrades

    Stop the service:

    $ sudo systemctl stop unattended-upgrades

    Disable the service. NOTE: Services that have this as a dependency will start it despite it being disabled. I am not aware of any for this service, but just putting it out there.

    $ sudo systemctl disable unattended-upgrades

    To prevent the unintended situation above, mask creates a symlink to /dev/null so the service cannot be started, whether there are dependencies or not.

    $ sudo systemctl mask unattended-upgrades

    SQLite3 and OpenVPN

    I needed to clean up an issue with an OpenVPN user. I was using a multi-factor solution with OpenVPN; the multi-factor solution was case sensitive, while the VPN was not. To clean up the mess, I needed to remove the user's certificates from the certs SQLite database.

    Database location:

    # cd /usr/local/openvpn_as/etc/db

    Connect to the database:

    # sqlite3 certs.db

    List the tables in the database:

    sqlite> .tables

    List column names in the table:
    sqlite> pragma table_info(certificates);

    List all the common_name records in the database:
    sqlite> select common_name from certificates;

    Delete records with the common name somename:
    sqlite> delete from certificates where common_name='somename';

    This worked well to clean up the database and get the user working with a correct ovpn file.

    Note: You could also edit the ovpn file with data from the database.

    Here is how you display a table's layout (the log table, in this instance):

    sqlite> .schema log

    I used the following to query the login activity. You have to convert the timestamp in the start_time column. Also, the duration is stored in seconds, so I converted it to minutes.

    sqlite> select username,strftime('%m-%d-%Y', datetime(start_time, 'unixepoch')),duration/60 from log;

    To run it from a script, save your query in a file (zreport) and run the following:

    sqlite3 log.db < zreport
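
    The zreport approach can be tried end to end against a throwaway database. This sketch fabricates a one-row log table with just the columns the query uses; the schema is an assumption based on the query above, not OpenVPN's actual layout:

```shell
# Build a scratch database with a minimal log table and one known row
rm -f /tmp/zlog.db
sqlite3 /tmp/zlog.db "CREATE TABLE log (username TEXT, start_time INTEGER, duration INTEGER);
INSERT INTO log VALUES ('alice', 1577880000, 600);"

# Save the report query to a file, then feed it to sqlite3 non-interactively
cat > /tmp/zreport <<'EOF'
SELECT username, strftime('%m-%d-%Y', datetime(start_time, 'unixepoch')), duration/60 FROM log;
EOF

sqlite3 /tmp/zlog.db < /tmp/zreport
# alice|01-01-2020|10
```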

    Docker Basics

    OS: CentOS 7
    Install Docker (this assumes the Docker CE yum repository has already been added):

    # yum install docker-ce-cli docker-ce

    Start Docker:

    # systemctl start docker

    Help:

    # docker --help

    You can tack --help onto the end of any command to get help for that specific command:

    # docker pull --help

    Pull in an image from hub.docker.com:

    # docker pull ubuntu

    To list your installed images and obtain the IMAGE ID:

    # docker images

    To run an image:

    # docker run -it -d ubuntu

    To view running containers:

    # docker ps

    To stop a running container:

    # docker stop 'CONTAINER ID'

    To execute a command in a container (in this case, bash in the ubuntu container):

    # docker exec -it 'CONTAINER ID' bash

    To view all containers, including those that are not running:

    # docker ps -a

    To remove a container:

    # docker rm 'CONTAINER ID'

    To remove an image:

    # docker rmi 'IMAGE ID'

    To stop and remove a running container:

    # docker rm -f 'CONTAINER ID'

    To save changes to a container:

    # docker commit 'CONTAINER ID' 'NEW IMAGE NAME'

    To remove all containers, running or not:

    # docker rm -f $(docker ps -a -q)

    NOTE: INSTANCE ID can refer to either an image ID or a container ID. docker inspect returns its information in JSON format.

    To view a container's history:

    # docker history 'INSTANCE ID'

    To view information about a container:

    # docker inspect 'INSTANCE ID'

    If you want to upload your image to hub.docker.com, it must have a name in the following format:

    # docker commit 'CONTAINER ID' 'dockerhubuserid/newimagename'

    To run a container and map a host port to the container port:

    # docker run -it -d -p 82:80 jgz/apache2

    Then, in this case, I would access my container's web server from my LAN using my Docker host's name and port 82. So, if my hostname is jims.domain.local, then I would browse to http://jims.domain.local:82.

    A Dockerfile is how you can build an image and run some commands as part of the build.
    Building a Dockerfile:

    # mkdir /usr/local/docker
    # cd /usr/local/docker/
    # vi Dockerfile

    FROM ubuntu
    RUN apt-get update
    RUN apt-get -y install apache2
    ADD . /var/www/html
    ENTRYPOINT apachectl -D FOREGROUND
    ENV name jgz

    This will build the image, executing the RUN commands; the ENTRYPOINT command runs when a container is started from the image:

    # docker build . -t new_dockerfile
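
    One note on the Dockerfile above: ENTRYPOINT is written in shell form, so apachectl runs as a child of /bin/sh and does not receive signals (for example, from docker stop) directly. The exec form avoids that; a possible alternative for that line:

```dockerfile
# Exec form: apachectl becomes PID 1 and receives stop signals directly
ENTRYPOINT ["apachectl", "-D", "FOREGROUND"]
```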

    Ubuntu/Mint Allow Non-Privileged User To Connect To New WiFi SSID

    # cd /usr/share/polkit-1/actions/

    Make a backup copy of the org.freedesktop.NetworkManager.policy file:

    # cp -p org.freedesktop.NetworkManager.policy org.freedesktop.NetworkManager.policy.orig

    Edit the org.freedesktop.NetworkManager.policy file:

    # vi org.freedesktop.NetworkManager.policy

    In the <action id="org.freedesktop.NetworkManager.settings.modify.system"> section, look for
    <allow_active>auth_admin_keep</allow_active> toward the end of the section.

    Change that line to:
    <allow_active>yes</allow_active>

    # shutdown -r now

    Windows 10 (post October 2018 Update) RSAT

    As usual with Microsoft, it seems they have come up with a “better” idea for how to install the Remote Server Administration Tools for Windows 10 versions later than the October 2018 Update.

    In the search, enter “Manage optional features” and click “Add a feature.”

    And then the brilliant engineers at Microsoft thought it would be so much better if we had to install each component one at a time. So, pick and choose at will. Click an RSAT option, and then click Install. At least they will queue. To get a status, just go back to the previous screen.

    You need to reboot once you have all the features installed.

    To install (or reinstall) the ActiveDirectory module, I think the only feature required is “RSAT: Active Directory Domain Services and Lightweight Directory Services Tools”. It was not in “RSAT: Server Manager.”

    Then, from an administrator PowerShell window, install the module:
    PS> Install-Module ActiveDirectory

    I had to re-install RSAT and the ActiveDirectory module after updating to version 1909.

    I have yet to come across the PowerShell equivalent to just install the tools. I will update this if I find it.

    This is the best way to install the PowerShell ActiveDirectory module, because it actually works:

    $JGZSession = New-PSSession -ComputerName domaincontroller
    Export-PSsession -Session $JGZSession -Module ActiveDirectory -OutputModule RSATADModule
    Remove-PSSession -Session $JGZSession
    Import-Module RSATADModule

    Query Microsoft SQL from CentOS7

    Install the Microsoft repository into your yum configuration:

    # curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/mssql-release.repo

    Disable the repository:

    # vi /etc/yum.repos.d/mssql-release.repo

    enabled=0

    Remove the unixODBC packages if applicable:

    # yum remove unixODBC-utf16 unixODBC-utf16-devel

    Install the driver and command line tools (if wanted):

    # yum --enablerepo=packages-microsoft-com-prod install msodbcsql17
    # yum --enablerepo=packages-microsoft-com-prod install mssql-tools

    Add the tools directory to your PATH variable as required:

    $ vi ~/.bash_profile ~/.bashrc

    export PATH="$PATH:/opt/mssql-tools/bin"

    Add the tools to your current session:

    export PATH="$PATH:/opt/mssql-tools/bin"
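
    If both ~/.bash_profile and ~/.bashrc carry the export, PATH can accumulate duplicate entries in nested shells. A guarded append avoids that; the helper function name is my own, not part of the tools install:

```shell
# Append a directory to PATH only if it is not already present
add_to_path() {
  case ":$PATH:" in
    *":$1:"*) ;;            # already there: do nothing
    *) PATH="$PATH:$1" ;;   # not found: append
  esac
}

add_to_path /opt/mssql-tools/bin
add_to_path /opt/mssql-tools/bin   # the second call changes nothing
```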

    Test with sqlcmd:

    $ sqlcmd -U username -P password -S server -d database
