Commit 2a560064 authored by Themis Zamani, committed by GitHub

Merge pull request #74 from ARGOeu/devel

Latest updates
parents e32bf0f0 ddfe3970
@@ -2,5 +2,4 @@
.DS_Store
setup.sh
.*.sw?
roles/has_certificate/files/*.key
# ARGO via Ansible
This repository contains a collection of Ansible roles and playbooks that ease the deployment of ARGO products. These roles and playbooks are meant to be as generic as possible, so that they are easily adaptable to different environments and e-Infrastructure requirements. Hence, most of the variables they use have default values under the `roles/{role_name}/defaults/main.yml` files.
The administrator of the ARGO product being deployed via these Ansible playbooks may override the default values of these variables, and thus adapt the ARGO product to the specific environment and requirements, in any of the following places (a minimal example follows the list):
- `roles/{role_name}/vars/main.yml`
- `group_vars/{group_name}`
- `host_vars/{inventory_hostname}`
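For instance, a minimal `host_vars` file could override two of the role defaults that appear in this repository (the hostname below is hypothetical):

```yaml
# host_vars/webapi.example.com -- hypothetical host-specific overrides;
# values placed here take precedence over roles/{role_name}/defaults/main.yml
mongo_bind_interfaces: "0.0.0.0"
cert_dir: /etc/grid-security
```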
More details on prerequisites and variables for each ARGO product are given in the following subsections.
## WebAPI deployment
Contains Ansible playbook for the deployment of the ARGO datastore and API service. The play is split into four (4) roles:
@@ -10,7 +19,7 @@
### Things to do before deployment
- Obtain a key/certificate pair from a trusted CA and then place them both under `roles/has_certificate/files` with the names `{{inventory_hostname}}.key` and `{{inventory_hostname}}.pem` respectively. As `{{inventory_hostname}}`, use the exact name from the `inventory` file.
- Edit inventory and replace `webapi.node` with the hostname that you intend to deploy the API onto.
### Prerequisites
@@ -25,6 +34,61 @@
$ ansible-playbook -v webapi.yml
```
## Web UI deployment
Contains Ansible playbook for the deployment of the ARGO Web UI service. The play is split into four (4) roles:
- firewall (configures iptables firewall rules)
- repos (includes tasks for the installation of the required repository definitions)
- has_certificate (task for uploading the certificate file onto the host under the appropriate path)
- webui (installation and bootstrap of ARGO Web UI service)
### Things to do before deployment
- Obtain a key/certificate pair from a trusted CA and then place them both under `roles/has_certificate/files` with the names `{{inventory_hostname}}.key` and `{{inventory_hostname}}.pem` respectively. As `{{inventory_hostname}}`, use the exact name from the `inventory` file.
- Edit inventory and replace `webui.node` with the hostname that you intend to deploy the Web UI onto.
- Edit the `roles/webui/vars/main.yml` file and change the values of the `certificate_password` and `keystore_password` variables to strong values.
- Note that by default the EGI-based web UI will be deployed on your target node. To change this behaviour, use the `argo_web` and `branch_name` variables within the `roles/webui/vars/main.yml` file to point to another upstream lavoisier repository (see the sketch after this list).
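As a sketch, and with placeholder values only, the relevant part of `roles/webui/vars/main.yml` might then look like this (the `argo_web` URL and `branch_name` value in particular are hypothetical):

```yaml
# roles/webui/vars/main.yml -- placeholder values, pick your own secrets
certificate_password: "change-me-to-a-long-random-string"
keystore_password: "change-me-to-a-different-long-random-string"
# Optional: point to another upstream lavoisier repository instead of
# the default EGI web UI (both values below are hypothetical)
argo_web: "https://github.com/example/web-ui.git"
branch_name: "devel"
```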
### Prerequisites
- Deploy against CentOS 7.x node
- Ansible version used is `1.9.2`
### How to deploy
```bash
$ ansible-playbook -v webui.yml
```
## POEM deployment
Contains Ansible playbook for the deployment of the ARGO POEM service. The play is split into four (4) roles:
- firewall (configures iptables firewall rules)
- repos (includes tasks for the installation of the required repository definitions)
- has_certificate (task for uploading the certificate file onto the host under the appropriate path)
- poem (installs and bootstraps poem service)
### Things to do before deployment
- Obtain a key/certificate pair from a trusted CA and then place them both under `roles/has_certificate/files` with the names `{{inventory_hostname}}.key` and `{{inventory_hostname}}.pem` respectively. As `{{inventory_hostname}}`, use the exact name from the `inventory` file.
- Edit inventory and replace `poem.node` with the hostname that you intend to deploy the POEM service onto.
- Create a `host_vars/{{inventory_hostname}}` file and place in it the variables from the `roles/poem/defaults/main.yml` file that you want to override.
- To generate a UUID for the `poem_secret` variable, you may use the `uuidgen` Linux CLI utility (see the sketch after this list).
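A minimal sketch of such a `host_vars` file, assuming the POEM node is named `poem.example.com` in the inventory (the UUID shown is only an example; generate your own):

```yaml
# host_vars/poem.example.com -- hypothetical overrides for the POEM node;
# the variable name comes from roles/poem/defaults/main.yml
poem_secret: "0f8fad5b-d9cb-469f-a165-70867728950e"  # output of `uuidgen`
```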
### Prerequisites
- Deploy against CentOS 6.x node
- Make sure `libselinux-python` is installed on the target node
- Ansible version used is `1.9.2`
### How to deploy
```bash
$ ansible-playbook -v poem.yml
```
## Full standalone deployment
Contains Ansible playbook for the deployment of all ARGO components. The play is split into six (6) roles:
@@ -37,7 +101,7 @@
### Things to do before deployment
- Obtain a key/certificate pair from a trusted CA and then place them both under `roles/has_certificate/files` with the names `{{inventory_hostname}}.key` and `{{inventory_hostname}}.pem` respectively. As `{{inventory_hostname}}`, use the exact name from the `inventory` file.
- Edit inventory and replace `standalone.node` with the hostname that you intend to deploy the complete ARGO stack onto.
### Prerequisites
@@ -51,3 +115,8 @@
```bash
$ ansible-playbook -v standalone.yml
```
## Monitoring your services
If you use Nagios or Icinga for health monitoring, a minimal `is_monitored` role is included in the repo. The purpose of this role is to install and configure the nrpe service on your target machines. Modify the remote host variable within the `roles/is_monitored/defaults/main.yml` file and include the role in your playbooks, as in the sketch below.
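A minimal sketch of wiring the role into a play (the target group comes from this repository's inventory; the exact name of the remote host variable is defined in `roles/is_monitored/defaults/main.yml`):

```yaml
# Hypothetical play that adds nrpe-based monitoring to the webapi node;
# remember to adjust the remote (Nagios/Icinga) host variable in
# roles/is_monitored/defaults/main.yml first
- hosts: webapi
  sudo: true
  roles:
    - { role: is_monitored, tags: monitored }
```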
---
epel_release_url: http://ftp.ntua.gr/pub/linux/fedora-epel/6/i386/
epel_release_name: epel-release-6-8.noarch.rpm
arstats_release_url: http://rpm.hellasgrid.gr/mash/centos6-arstats/i386/
arstats_release_name: ar-release-1.0.0-3.el6.noarch.rpm
# Variable enabled_argo_repo specifies which RPM repository to use.
# To use the development repository set its value to argo-devel
enabled_argo_repo: argo-prod
cert_dir: /etc/grid-security
---
cert_path: /etc/pki/tls/certs/localhost.crt
key_path: /etc/pki/tls/private/localhost.key
ca_path: /etc/pki/tls/certs/ca-bundle.crt
iptables_rules:
input:
- { dport: "80", proto: "tcp", policy: "accept"}
- { dport: "443", proto: "tcp", policy: "accept"}
nagios_plugins:
- { name: nagios-plugins-tcp , repo: "" }
- { name: nagios-plugins-disk , repo: "" }
- { name: nagios-plugins-http , repo: "" }
- { name: nagios-plugins , repo: "" }
- { name: nagios-plugins-dummy , repo: "" }
- { name: nagios-plugins-procs , repo: "" }
- { name: nagios-plugins-ping , repo: "" }
---
iptables_rules:
input:
- { dport: "443", proto: "tcp", policy: "accept"}
---
mongo_bind_interfaces: 127.0.0.1
cert_path: /etc/grid-security/hostcert.pem
key_path: /etc/grid-security/hostkey.pem
......
---
mongo_bind_interfaces: 0.0.0.0
cert_path: /etc/pki/tls/certs/localhost.crt
key_path: /etc/pki/tls/private/localhost.key
@@ -8,4 +8,4 @@
iptables_rules:
input:
- { dport: "443", proto: "tcp", policy: "accept"}
- { dport: "27017", proto: "tcp", policy: "accept"}
\ No newline at end of file
- { dport: "27017", proto: "tcp", policy: "accept"}
@@ -4,3 +4,12 @@
[standalone]
standalone.node
[poem]
poem.node
[webui]
webui.node
[monitoring_engine]
monitoring_engine.node
---
- hosts: monitoring_engine
sudo: true
roles:
- { role: firewall, tags: firewall }
- { role: repos, tags: repos }
- { role: ca_bundle, when: ca_bundle_install, tags: ca_bundle }
- { role: has_certificate, tags: certificate }
- { role: monitoring_engine, tags: monitoring_engine }
---
- hosts: poem
sudo: true
roles:
- { role: firewall, tags: firewall }
- { role: repos, tags: repos }
- { role: has_certificate, tags: certificate }
- { role: poem, tags: poem }
../private_files
---
tenants:
TenantA:
topics:
- "probe.metricOutput.tenantA.ngi.*"
- "probe.metricOutput.tenantA.roc.*"
- "probe.metricOutput.tenantA.opsmonitor.*"
- "probe.metricOutput.tenantA.project.*"
- "probe.metricOutput.tenantA.vo.*"
brokers:
- "broker1.example.com"
- "broker2.example.com"
outputdir: "/var/lib/argo-connectors/TenantA/"
jobs_all: "JOB_TenantA_ALL, JOB_TenantA_PART"
prefilter: "prefilter-tenantA.py"
jobs_details:
- name: "JOB_TenantA_ALL"
Directory: "TenantA_ALL"
Profiles: "ALL_SERVICES"
TopoType: "GOCDB"
TopoFeed: "https://goc.example.com/gocdbpi/"
TopoFetchType: "Sites"
TopoSelectGroupOfEndpoints: "Production:Y, Monitored:Y, Scope:TenantA"
TopoSelectGroupOfGroups: "Certification:Certified, Infrastructure:Production, Scope:TenantA"
DowntimesFeed: "https://goc.example.com/gocdbpi/"
- name: "JOB_TenantA_PART"
Directory: "TenantA_PART"
Profiles: "PART_SERVICES"
TopoType: "GOCDB"
TopoFeed: "https://goc.example.com/gocdbpi/"
TopoFetchType: "Sites"
TopoSelectGroupOfEndpoints: "Production:Y, Monitored:Y, Scope:TenantA"
TopoSelectGroupOfGroups: "Certification:Candidate, Infrastructure:Production, Scope:TenantA"
DowntimesFeed: "https://goc.example.com/gocdbpi/"
TenantB:
topics:
- "probe.*"
brokers:
- "broker3.example.com"
outputdir: "/var/lib/argo-connectors/TenantB/"
jobs_all: "JOB_TenantB_SERVICES"
jobs_details:
- name: "JOB_TenantB_SERVICES"
Directory: "SERVICES"
Profiles: "My_Critical_Services"
TopoType: "GOCDB"
TopoFeed: "https://goc.example.com/gocdbpi/"
TopoFetchType: "ServiceGroups"
TopoSelectGroupOfEndpoints: "Production:Y, Monitored:Y, Scope:TenantB"
TopoSelectGroupOfGroups: "Certification:Candidate, Infrastructure:Production, Scope:TenantB"
DowntimesFeed: "https://goc.example.com/gocdbpi/"
poem_servers:
- host: "poemA.example.com"
vos:
- ops
- gridpp
- host: "poemB.example.com"
vos:
- ops
poem_fetch_profiles:
- profile_1
- profile_2
mongo_host_or_ip: "127.0.0.1"
mongo_port_number: "27017"
argo_compute_mode: "local"
prefilter_clean_bool: "false"
argo_sync_conf_path: "/etc/argo-egi-connectors"
argo_sync_path: "/var/lib/argo-connectors"
argo_exec_path: "/usr/libexec/argo-egi-connectors"
---
- name: restart egi consumer
service: name=argo-egi-consumer state=restarted
- name: restart all consumers
service: name=argo-{{ item.key | lower }}-consumer state=restarted
with_dict: tenants
# TODO: Make following handler task tenant unaware
- name: restart all non egi consumers
service: name=argo-{{ item.key | lower }}-consumer state=restarted
with_dict: tenants
when: item.key|lower != "egi"
---
- name: Install avro from ar project
tags: ar-packages
yum: name=avro state=present enablerepo={{ enabled_argo_repo }}
- name: Install python-pip
tags: ar-packages
yum: name=python-pip state=present
- name: Install pymongo fixed version
tags: ar-packages
pip: name=pymongo state=present version=3.2.1
- name: Install egi consumer package from ar project
tags:
- ar-packages
- consumer_config
yum: name=argo-egi-consumer state=latest enablerepo={{ enabled_argo_repo }}
notify: restart all consumers
- name: Create consumer configuration directories
file: path=/etc/argo-{{ item.key | lower }}-consumer
state=directory
owner=root group=root mode=0755
with_dict: tenants
notify: restart all consumers
- name: Copy metric avro specification for each tenant
tags:
- consumer_config
template: src=metric_data.avsc.j2
dest=/etc/argo-{{ item.key | lower }}-consumer/metric_data.avsc
owner=root group=root mode=0644
with_dict: tenants
notify: restart all consumers
- name: Create consumer output directories per tenant
tags: consumer_config
file: path=/var/lib/argo-{{ item.key | lower }}-consumer
state=directory
owner=arstats group=arstats mode=0755
with_dict: tenants
- name: Consumer configuration
tags:
- consumer_config
template: src=consumer.conf.j2
dest=/etc/argo-{{ item.key | lower }}-consumer/consumer.conf
owner=root group=root mode=0644
with_dict: tenants
notify: restart all consumers
# TODO: Make following task tenant unaware
- name: Copy out init scripts for non egi consumers
tags:
- consumer_config
template: src=consumer.init.j2
dest=/etc/init.d/argo-{{ item.key | lower }}-consumer
owner=root group=root mode=0755
with_dict: tenants
when: item.key|lower != "egi"
notify: restart all non egi consumers
# TODO: Make following task tenant unaware
- name: Create copies of python wrappers for non egi consumers
tags:
- consumer_config
file: path=/usr/bin/argo-{{ item.key | lower }}-wrapper-consumer.py
state=link src=/usr/bin/argo-egi-consumer.py
with_dict: tenants
when: item.key|lower != "egi"
- name: Enable and start consumer services
tags:
- consumer_config
service: name=argo-{{ item.key | lower }}-consumer enabled=on state=started
with_dict: tenants
- name: Install argo-egi-connectors from ar project
tags:
- ar-packages
- connectors
yum: name=argo-egi-connectors state=latest enablerepo={{ enabled_argo_repo }}
- name: Configure connectors
tags:
- connectors_config
- connectors
template: src=customer.conf.j2
dest=/etc/argo-egi-connectors/{{ item.key | lower }}-customer.conf
owner=root group=root mode=0644
backup=yes
with_dict: tenants
- name: POEM configuration
tags:
- connectors_config
- poem_config
template: src=poem-connector.conf.j2
dest=/etc/argo-egi-connectors/poem-connector.conf
owner=root group=root mode=0644
backup=yes
- name: Configure poem connector per tenant cron job
tags:
- connectors_config
- connectors
- poem_cron
cron: cron_file=poem_{{ item.key | lower }}
name=poem_{{ item.key | lower }}
minute=2
hour=0
user=root
job="/usr/libexec/argo-egi-connectors/poem-connector.py {% if item.value.prefilter is not defined %} -np {% endif %} -c /etc/argo-egi-connectors/{{ item.key | lower }}-customer.conf"
state=present
with_dict: tenants
- name: Configure topology connector per tenant cron job
tags:
- connectors_config
- connectors
- topology_cron
cron: cron_file=topology_{{ item.key | lower }}
name=topology_{{ item.key | lower }}
minute=7
hour=0
user=root
job="/usr/libexec/argo-egi-connectors/topology-gocdb-connector.py -c /etc/argo-egi-connectors/{{ item.key | lower }}-customer.conf"
state=present
with_dict: tenants
- name: Configure weights connector per tenant cron job
tags:
- connectors_config
- connectors
- weights_cron
cron: cron_file=weights_{{ item.key | lower }}
name=weights_{{ item.key | lower }}
minute=5
hour=0
user=root
job="/usr/libexec/argo-egi-connectors/weights-gstat-connector.py -c /etc/argo-egi-connectors/{{ item.key | lower }}-customer.conf"
state=present
with_dict: tenants
- name: Install ar-compute from ar project
tags: ar-packages
yum: name=ar-compute state=latest enablerepo={{ enabled_argo_repo }}
- name: Copy out compute engine configuration file
tags: ce_config
template: src=ar-compute-engine.conf.j2
dest=/etc/ar-compute-engine.conf
owner=root group=root mode=0644
backup=yes
- name: Configure ar-compute job cycle daily cron
tags: compute_config
@@ -74,30 +172,22 @@
name=ar_job_cycle_hourly
state=present
minute=55
hour=*/2
job="/usr/libexec/ar-compute/bin/job_cycle.py -d $(/bin/date --utc +\%Y-\%m-\%d)"
- name: Create job directories
tags: sync_config
file: path={{ item }} owner=root group=root mode=0755 state=directory
with_items:
- /var/lib/argo-connectors/EGI/Cloudmon
- /var/lib/argo-connectors/EGI/Critical
- name: Add ar-compute poller hourly cron for tenant EGI
tags: compute_crons
cron: cron_file=ar_poller_hourly_egi
name=ar_poller_hourly_egi
minute=25
hour=*
user=root
job="/usr/libexec/ar-compute/bin/poller_ar.py -t EGI"
state=present
- name: Install ar-data-retention from ar project
tags: ar-data-retention
yum: name=ar-data-retention state=latest enablerepo={{ enabled_argo_repo }}
- name: Parametrize data retention policies
tags: data_retention
......
[default]
# mongo server ip location
mongo_host={{ mongo_host_or_ip }}
# mongo server port
mongo_port={{ mongo_port_number }}
# core database used by argo
mongo_core_db = argo_core
# mongo authentication
# mongo_user =
# mongo_pass =
# declare the compute mode
# can be: local or cluster
mode={{ argo_compute_mode }}
# declare the serialization framework
# can be: avro or none
serialization=none
# declare if prefilter data must be cleaned after upload to hdfs
prefilter_clean={{ prefilter_clean_bool }}
sync_clean=true
# Provide maximum number of recomputations that can run in parallel.
recomp_threshold=1
[logging]
# mode for logging (syslog,file,none)
log_mode=syslog
# log level status
log_level=DEBUG
# If log_mode equals file - uncomment to set log file path:
# log_file=/var/log/ar-compute/ar-compute.log
# Hadoop clients log level and log appender
# If you want to log via SYSLOG make sure
# an appropriate appender is defined in hadoop
# log4j.properties file and just add the name
# of this appender in the following line. I.e.
# if you define a new appender named SYSLOG
# change console to SYSLOG, or just add
# SYSLOG appender in the following line
hadoop_log_root=INFO,console
[connectors]
sync_conf={{ argo_sync_conf_path }}
sync_exec={{ argo_exec_path }}
sync_path={{ argo_sync_path }}
[jobs]
# Here are declared available tenants and available jobs
# for each tenant (tenant/job names are case-sensitive)
# The order of declarations is as follows:
#
# tenants=TenantA,TenantB
# TenantA_jobs=Job1,Job2,Job3
# TenantB_jobs=Job4,Job5
# TenantA_prefilter=prefilter_exec (optional)
#
# Declare available tenants
tenants={{ tenants|join(',')}}
# For each declared tenant, declare its jobs using the
# {Tenant_Name}_jobs convention
{% for key,value in tenants.iteritems() %}
{{ key }}_jobs={{ value.jobs_all|replace(" ","") }}
{% if value.prefilter is defined %}
{{ key }}_prefilter={{ value.prefilter }}
{% endif %}
{% endfor %}
[sampling]
s_period=1440
s_interval=5