The SSH config role creates an SSH config file for an inventory, containing all inventory hosts.
This is especially useful for cloud environments, where you cannot know in advance which IP will be assigned to an instance. The role writes this IP to an SSH configuration file so that you don't have to remember it.
Configure SSH to also include configs under ~/.ssh/conf.d:
# Canonicalize Hostnames to use Host *.XXX stanzas below without
# the need to always use the FQDN when connecting
CanonicalizeHostname yes
Include conf.d/*
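The Include directive points at ~/.ssh/conf.d, and the role later writes its files there. If that directory does not exist yet on your machine, it is safe to create it up front (a minimal manual sketch; the permissions are a common default, adjust to your own policy):
mkdir -p ~/.ssh/conf.d
chmod 700 ~/.ssh/conf.d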
Via requirements.yml:
---
# file: requirements.yml
roles:
- name: ngine_io.ssh_config
version: v0.2.0
To install:
ansible-galaxy install -r requirements.yml
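If you prefer not to maintain a requirements file, the same role and version can also be installed directly (equivalent to the requirements.yml above):
ansible-galaxy install ngine_io.ssh_config,v0.2.0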
ssh_config__env: "{{ inventory_file }}"
The name of the environment the SSH config file is named after.
ssh_config__dest_path: "~/.ssh/conf.d"
The path where the config is stored.
ssh_config__configs:
- Hostname {{ ansible_host | default(inventory_hostname) }}
- Port {{ ansible_port | default(22) }}
- User {{ ansible_user | default(lookup('env', 'USERNAME')) }}
- StrictHostKeyChecking accept-new
The SSH config fragment for each host entry.
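All of the above are regular role variables and can be overridden wherever it suits your setup. The following sketch assumes a group_vars/cloud.yml file; the location and the extra IdentityFile line are only illustrative and not part of the role defaults:
---
# file: group_vars/cloud.yml
ssh_config__configs:
  - Hostname {{ ansible_host | default(inventory_hostname) }}
  - Port {{ ansible_port | default(22) }}
  - User {{ ansible_user | default(lookup('env', 'USERNAME')) }}
  - StrictHostKeyChecking accept-new
  # Illustrative addition, adjust or drop as needed
  - IdentityFile ~/.ssh/id_ed25519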
This role has no dependencies.
Imagine we have a role hcloud which creates instances based on our inventory and registers the IP of each instance as ansible_host.
---
- hosts: cloud
  gather_facts: false
  connection: local
  roles:
    - role: hcloud
      tags: hcloud
    - role: ngine_io.ssh_config
      tags: ssh_config
  post_tasks:
    - name: Waiting for SSH to come up
      wait_for:
        port: 22
        host: "{{ ansible_host }}"
        search_regex: OpenSSH
        delay: 2
Assuming we have an inventory named prod like so:
[prod:children]
cloud

[prod:vars]
ssh_config__env=prod

[cloud]
prod-web1
prod-web2
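With the playbook and inventory in place, one run provisions the instances and writes the SSH config (the playbook filename cloud.yml is only assumed for this example):
ansible-playbook -i prod cloud.yml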
As a result, we get a config in ~/.ssh/conf.d/ssh_config_prod.conf like this:
### Autogenerated, do not modify ###

Host prod-web1
  Hostname 1.2.3.4
  Port 22
  User ansible

Host prod-web2
  Hostname 1.2.3.5
  Port 22
  User ansible
Then you can just SSH into the instance like so:
ssh prod-web1
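Other tools that read your SSH client config, such as scp or rsync, can use the same host aliases (the file name below is just a placeholder):
scp some-file.txt prod-web1:/tmp/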
Licensed under MIT / Apache2.
This role was created in 2022 by René Moser.