Icecast Automation

Automation to deploy and configure Icecast. Intended as a) a quick-start guide to support small internet-radio communities launching their own Icecast setups, and b) an initial baseline to port over to the PCE Radio to automate our own internet radio deployment.

Surely you’ve heard of the massive juggernaut of internet radio, the Porkchop Express (or “PCE”, as we call it). This project was a quick launchpad to prepare for updating our deployment approach for the PCE. And when I say “our deployment approach”, I mean one person having manually configured an Icecast server on an Ubuntu DigitalOcean droplet five years ago.

Community is really the main goal of the PCE. Playing and discussing music online is the primary way the people of the PCE get together, but the development of the site and the infrastructure that makes it possible is another cool medium for community engagement. Making sure we can quickly and easily deploy the site is a crucial step in allowing others to collaborate on changes together.

Background

The primary component of PCE is an Icecast server, which lets our radio DJs broadcast a stream from their local sound source for our listeners to connect to. In front of it sits an nginx server acting as a reverse proxy and serving the simple static website generated by Jekyll. That’s it, it’s that simple. So simple anyone could do it.

I tried to keep this first iteration of the project, this public repository, to the simplest form required to get an Icecast instance up and running and connect to it: just the mandatory components. Porting the IaC automation over to our PCE and building it out further to fit its specifics is step 2. The project is therefore just an Icecast server and an nginx server. PCE runs on DigitalOcean, so I built the project to do so as well. The droplets are provisioned via Terraform, and the services are deployed and configured via Ansible.

Terraform

The Terraform setup is as simple as can be here: no extra bells or whistles. It creates two Ubuntu droplets with minimal specs, one each for Icecast and nginx.
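
For reference, a droplet resource in this setup might look roughly like the sketch below; the image, region, and size slugs (and the SSH key variable) are illustrative placeholders rather than the repository’s exact values.

resource "digitalocean_droplet" "icecast" {
  name   = "icecast"
  image  = "ubuntu-22-04-x64"  # Ubuntu image slug (illustrative)
  region = "nyc3"              # region slug (illustrative)
  size   = "s-1vcpu-1gb"       # smallest standard droplet size (illustrative)

  # SSH keys registered with DigitalOcean so Ansible can reach the droplet later (assumed variable)
  ssh_keys = var.ssh_key_fingerprints
}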

There is a simple “dynamic” inventory setup that populates the Ansible inventory with the droplet IP addresses when the Terraform plan is applied:

resource "local_file" "ansible_inventory" {
  content = templatefile("${path.module}/inventory.tmpl",
    {
      icecast_ip = digitalocean_droplet.icecast.ipv4_address
      nginx_ip = digitalocean_droplet.nginx.ipv4_address
    }
  )
  filename = "${path.module}/../ansible/inventory"
}
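
The inventory.tmpl template itself is presumably something along these lines: an inventory with the two hosts and their ansible_host addresses, matching the hostvars references used later (any connection variables such as ansible_user are omitted here and may differ).

[icecast]
icecast ansible_host=${icecast_ip}

[nginx]
nginx ansible_host=${nginx_ip}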

Ansible

The Ansible playbook handles the installation and configuration of icecast2 on one droplet, and nginx on a second.
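
In outline, site.yml is just two plays, one per host group, each applying a role; roughly like the sketch below (the role names are assumptions):

---
- name: Install and configure Icecast
  hosts: icecast
  become: true
  roles:
    - icecast2   # role name assumed

- name: Install and configure the nginx reverse proxy
  hosts: nginx
  become: true
  roles:
    - nginx      # role name assumed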

The icecast2 setup includes some slightly interesting Jinja templating, which allows multiple listening sockets and mountpoints to be configured via a vars file in the role path:

{% for listen_socket in icecast_listen_sockets %}
  <listen-socket>
  {% for key, value in listen_socket.items() %}
    {% if key in ['port', 'bind-address', 'ssl', 'shoutcast-mount', 'shoutcast-compat'] %}
    <{{ key }}>{{ value }}</{{ key }}>
    {% endif %}
  {% endfor %}
  </listen-socket>
{% endfor %}

{% if icecast_http_headers is defined %}
  <http-headers>
  {% for http_header in icecast_http_headers %}
    <header name="{{ http_header['name'] }}" value="{{ http_header['value'] }}"{% if http_header['status'] is defined %} status="{{ http_header['status'] }}" {% endif %} />
  {% endfor %}
  </http-headers>
{% endif %}

{% if icecast_mounts is defined %}
  {% for mount in icecast_mounts %}
  <mount type="{{ mount['type'] | default('normal') }}">
    {% for key, value in mount.items() %}
      {% if key in ['mount-name', 'username', 'password', 'max-listeners', 'dump-file', 'intro',
        'fallback-mount', 'fallback-override', 'fallback-when-full', 'charset', 
        'public', 'stream-name', 'stream-description', 'stream-url', 'genre', 
        'bitrate', 'subtype', 'hidden', 'burst-size',
        'mp3-metadata-interval', 'authentication', 'http-headers', 'on-connect', 
        'on-disconnect'] %}
    <{{ key }}>{{ value }}</{{ key }}>
      {% endif %}
    {% endfor %}
  </mount>
  {% endfor %}
{% endif %}
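
For a concrete picture, the vars file driving these templates could look something like the following; the values are illustrative rather than the project’s defaults (the /live mountpoint matches the nginx location shown further down):

icecast_listen_sockets:
  - port: 8000
    bind-address: "0.0.0.0"

icecast_http_headers:
  - name: Access-Control-Allow-Origin
    value: "*"

icecast_mounts:
  - mount-name: /live
    stream-name: "PCE Radio"
    public: 1
    max-listeners: 100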

The nginx reverse proxy is also pretty straightforward, with some templating of its own to pull in the addresses from the populated Ansible inventory:

# Main stream location
location /live {
    proxy_pass http://{{ hostvars['icecast'].ansible_host }}:8000/live;
    add_header Cache-Control no-cache;
    # Debug headers
    add_header X-Debug-Target "{{ hostvars['icecast'].ansible_host }}:8000/live" always;
    add_header X-Debug-Host $host always;
}
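
Within the nginx role, a template like that is rendered onto the droplet and nginx reloaded via a standard template task, roughly as below (the file names and handler are assumptions about the role’s internals):

- name: Render the Icecast reverse proxy site config
  ansible.builtin.template:
    src: icecast-proxy.conf.j2                       # template file name assumed
    dest: /etc/nginx/sites-available/icecast-proxy   # destination path assumed
  notify: Reload nginx                               # handler name assumed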

It also configures a very basic landing page that serves as a simple ‘front end’ for basic debugging.

Usage

Usage of the project is intentionally kept as simple as possible as well:

  1. Clone the repository:
      git clone https://github.com/alex-thorne/icecast.automation.git
      cd icecast.automation
    
  2. Configure Terraform:
    • Ensure you have a DigitalOcean API token and set it as an environment variable:
       export DIGITALOCEAN_TOKEN=your_token_here
      
    • Initialize and apply the Terraform plan:
       terraform init
       terraform apply
      
  3. Run Ansible Playbook:
    • Ensure you have SSH access to the droplets created by Terraform.
    • Run the Ansible playbook to configure Icecast and Nginx:
       cd ansible
       ansible-playbook -i inventory site.yml
      
  4. Access the Icecast Server:
    • Once the playbook completes, you can access the Icecast server via the Nginx reverse proxy at http://your_nginx_droplet_ip/live (a couple of quick verification commands are sketched after this list).
  5. Customize Configuration:
    • Modify the vars files in the Ansible roles to customize the Icecast and Nginx configurations as needed.
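
Once the playbook has run, a couple of quick checks can confirm everything is wired up. The mountpoint and password below come from whatever you set in the Icecast vars; /live and the ffmpeg source client are just examples, and any Icecast-capable source client (butt, ices, Mixxx, etc.) works equally well.

# Check that nginx is proxying the mountpoint (Icecast returns 404 until a source is connected)
curl -I http://your_nginx_droplet_ip/live

# Push a test stream to the mountpoint (illustrative; replace the password and IP with your own)
ffmpeg -re -i test.mp3 -c:a libmp3lame -f mp3 -content_type audio/mpeg \
  icecast://source:your_source_password@your_icecast_droplet_ip:8000/live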

And you’ve got an Icecast server up and running, ready to broadcast to your own internet radio listener community, or just to shout into the void of the internet. Either way, congrats and enjoy broadcasting.

