Installing Ansible Automation Platform on AWS Part 4 – EC2 Instances

Part 4 of our series on installing AAPv2 in AWS focuses on EC2 instances and the OS configurations that kept causing me issues. If you’re lucky and can use raw RHEL instances, you may not hit these problems. But if your company is anything like mine, you’re probably deploying to custom AMIs that are already hardened.

Deploy your EC2 RHEL instances per company policy. There isn’t a lot I can provide here since so many settings are controlled at the corporate level, but if you just aren’t sure, drop me a note and I’ll do my best to help.

Things I can tell you upfront: an m4.large instance is required, and if you have a fairly large inventory, consider the m4.xlarge. This is straight from the setup guide; the first time I did the setup I ran something smaller and it threw errors at me. That’s what I get for trying to save a penny in my dev environment. There is an equation you can use based on how many forks you need and so on, but I keep it simple.

Now, one piece of advice up front: do not run the installer on a node you plan on being part of your AAPv2 environment. I had issues with this, and everyone I talked to at AnsibleFest had issues with this. Save yourself the headache and just don’t do it. Spin up a tools server or some other machine that can connect to everything via SSH. I’m going to refer to this as a tools server for simplicity.

Since I work for a large corporation we deployed as a cluster, so your deployment may vary from mine. That’s OK; it’s about your needs, and this should scale easily.

Grab the bundle installer, extract it to your tools server, and let’s get to work on the inventory file. Make sure you grab the right version based on whether you’re running RHEL 8 or RHEL 9.

First things first, list out your servers for each of the three groups: automationcontroller, execution_nodes, and automationhub. I’m going to specify my controllers as control only; you could set them to hybrid if you want them to be part of the execution environment. The default is hybrid.
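As a rough sketch, the top of the inventory might look something like this. The hostnames are placeholders for your own EC2 instances, and I’m setting node_type at the group level to get control-only nodes:

```ini
; hostnames below are placeholders -- use your own EC2 instances
[automationcontroller]
controller1.example.internal
controller2.example.internal

[automationcontroller:vars]
; default is hybrid; set to control to keep execution off the controllers
node_type = control

[execution_nodes]
exec1.example.internal
exec2.example.internal

[automationhub]
hub1.example.internal
```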

To save some time I’m going to define most of my variables in the [all:vars] section. In the real world I did not use the same key for all my EC2 instances, so you may need to divide these out for each group.
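For illustration, a minimal [all:vars] section might look like this; every value here is a placeholder you’ll substitute with your own:

```ini
[all:vars]
; placeholders -- substitute your own values
ansible_user = ec2-user
ansible_ssh_private_key_file = /path/to/your/key.pem
admin_password = '<controller admin password>'
```

If different groups use different SSH keys, move ansible_ssh_private_key_file into each group’s :vars section instead.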

Continue on filling out the inventory file. When you get to your databases, if you followed my RDS setup, the pg_host will be the endpoint from Part 2 of this series.
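As a sketch, the database variables end up looking something like this; the endpoint and passwords are placeholders, and Automation Hub has its own parallel set of variables:

```ini
[all:vars]
; pg_host is the RDS endpoint from Part 2 of this series (placeholder shown)
pg_host = 'my-aap-db.xxxxxxxx.us-east-1.rds.amazonaws.com'
pg_port = 5432
pg_database = 'awx'
pg_username = 'awx'
pg_password = '<database password>'

; Automation Hub uses its own automationhub_pg_* variables
automationhub_pg_host = 'my-aap-db.xxxxxxxx.us-east-1.rds.amazonaws.com'
automationhub_pg_password = '<database password>'
```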

One mistake I made was setting automationhub_main_url. It seemed like it should be set, but the install never worked with that setting in place, so leave it unset.

With that all in place your inventory file is now set. But don’t run the installer quite yet, we need to prep a few things first.

The first failure you’re likely to run into is noexec on /tmp. This is easy enough to fix. Assuming you’re in the extracted directory (where inventory and setup.sh are), run: sudo vi collections/ansible_collections/ansible/automation_platform_installer/roles/preflight/tasks/main.yml

You’re looking for this block of code.

- name: Find mount options for /var, /var/tmp, and /tmp
  check_mode: yes
  lineinfile:
    name: /proc/mounts
    regexp: ' {{ item }} .*noexec'
    state: absent
  loop:
    - "/var"
    - "/var/tmp"
    - "/tmp"
  register: mount

Comment out the directories you can’t use. For me it was just /tmp. Save and exit the editor.
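For reference, after the edit the loop in my case looked like this, with only /tmp commented out:

```yaml
  loop:
    - "/var"
    - "/var/tmp"
    # - "/tmp"   # commented out because /tmp is mounted noexec
```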

Next, I had to set my user namespaces; hardening had them set to 0. This requires a little bit of legwork, but don’t worry, I will show you how to automate it in another post.

First set them for the running configuration with one easy command.

sudo sysctl -w user.max_user_namespaces=63556

Now, how do we make that persistent so we don’t have to do this on every reboot? Look for a file under /etc/sysctl.d/; mine was 99-custom.conf, but your file name will vary. Edit that file and make sure the value for user.max_user_namespaces isn’t set to 0. The official help article suggests setting it to 63556.
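After the edit, the relevant line in the sysctl.d file should look like this (again, the file name here is just what mine happened to be):

```ini
# /etc/sysctl.d/99-custom.conf -- file name varies per environment
user.max_user_namespaces = 63556
```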

Once you have it saved, run sudo dracut -f -v to rebuild the initramfs. If you did it right, this will permanently set that value.

OK, so here is a decision point. If you’re happy with your EC2 instances and you do not want a clustered Automation Hub, then you are ready to go; run the installer when you’re ready. If you want a clustered Automation Hub, see the next part of the series for that specific setup.

Installing AAPv2 on AWS – Part 1 – Security Groups 

Installing AAPv2 on AWS – Part 2 – Databases

Installing AAPv2 on AWS – Part 3 – Load Balancers

Installing AAPv2 on AWS – Part 4 – EC2 <- you are here

Sources

https://access.redhat.com/solutions/4308791

https://access.redhat.com/solutions/6771781