diff --git a/README.md b/README.md
index adaaaa53b..4d955ec9a 100644
--- a/README.md
+++ b/README.md
@@ -1,46 +1,62 @@
-# Hubble
+Table of Contents
+=================
-An alternate version of Hubblestack which can be run without an existing
-SaltStack infrastructure.
+ * [HUBBLE](#hubble)
+ * [Packaging / Installing](#packaging--installing)
+   * [Installing using setup.py](#installing-using-setuppy)
+   * [Building Hubble packages through Dockerfile](#building-hubble-packages-through-dockerfile)
+ * [Nova](#nova)
+   * [Usage](#usage)
+   * [Configuration](#configuration)
+ * [Nebula](#nebula)
+   * [Usage](#usage-1)
+   * [Configuration](#configuration-1)
+ * [Pulsar](#pulsar)
+   * [Usage](#usage-2)
+   * [Configuration](#configuration-2)
+     * [Excluding Paths](#excluding-paths)
+     * [Pulsar topfile top.pulsar](#pulsar-topfile-toppulsar)
-# Packaging / Installing
+# HUBBLE
-## Installing using setup.py
+Hubble is a modular, open-source security and compliance auditing framework built on SaltStack. It is an alternate version of Hubblestack that can be run without an existing SaltStack infrastructure. Hubble provides on-demand profile-based auditing, real-time security event notifications, alerting and reporting. It can also report security information to Splunk. This document describes installation, configuration and general use of Hubble.
-```bash
+## Packaging / Installing
+### Installing using setup.py
+```sh
 sudo yum install git -y
 git clone https://github.com/hubblestack/hubble
 cd hubble
 sudo python setup.py install
 ```
+Installs a `hubble` "binary" into `/usr/bin/`.
-Installs a `hubble` "binary" into `/usr/bin/`.
+A config template has been placed in `/etc/hubble/hubble`. Modify it to your specifications and needs. You can run `hubble -h` to see the available options.
-## Building standalone packages (CentOS)
+The first two commands you should run to make sure things are set up correctly are `hubble --version` and `hubble test.ping`.
-```bash
-sudo yum install git -y
-git clone https://github.com/hubblestack/hubble
-cd hubble/pkg
-./build_rpms.sh # note the lack of sudo, that is important
-```
-
-Packages will be in the `hubble/pkg/dist/` directory. The only difference
-between the packages is the inclusion of `/etc/init.d/hubble` for el6 and
-the inclusion of a systemd unit file for el7. There's no guarantee of glibc
-compatibility.
-
-## Building standalone packages (Debian)
+### Building Hubble packages through Dockerfile
+The Dockerfiles make building the Hubble v2 packages easier. The Dockerfile for the distribution you want to build can be found under `/pkg`; for example, the Dockerfile for the CentOS 6 distribution is at `/pkg/centos6/`.
-```bash
-sudo yum install git -y
-git clone https://github.com/hubblestack/hubble
-cd hubble/pkg
-./build_debs.sh # note the lack of sudo, that is important
+To build an image (substitute your own image tag for `<image-name>`):
+```sh
+docker build -t <image-name> .
+```
+To run the container (which will output the package file in your current directory):
+```sh
+docker run -it --rm -v `pwd`:/data <image-name>
 ```
-
-Package will be in the `hubble/pkg/dist/` directory. There's no guarantee of
-glibc compatibility.
+## Nova
+Nova is Hubble's auditing system.
+### Usage
+The Nova module has four primary functions:
+- `hubble.sync` : syncs the `hubblestack_nova_profiles/` and `hubblestack_nova/` directories to the host(s).
+- `hubble.load` : loads the synced audit modules and their yaml configuration files.
+- `hubble.audit` : audits the minion(s) using the YAML profile(s) you provide as comma-separated arguments. `hubble.audit` takes two optional arguments. The first is a comma-separated list of paths. These paths can be files or directories within the `hubblestack_nova_profiles` directory.
The second argument allows for toggling Nova configuration, such as verbosity, level of detail, etc. If `hubble.audit` is run without targeting any audit configs or directories, it will instead run `hubble.top` with no arguments. `hubble.audit` will return a list of audits which were successful, and a list of audits which failed.
+- `hubble.top` : audits the minion(s) using the `top.nova` configuration.
 
 ## Using released packages
 
@@ -48,53 +64,159 @@ Various pre-built packages targeting several popular operating systems can be fo
 
 # Usage
 
-A config template has been placed in `/etc/hubble/hubble`. Modify it to your
-specifications and needs.
+Here are some example calls for `hubble.audit`:
+```sh
+# Run the cve scanner and the CIS profile:
+hubble hubble.audit cve.scan-v2,cis.centos-7-level-1-scored-v1
+# Run hubble.top with the default topfile (top.nova)
+hubble hubble.top
+# Run all yaml configs and tags under salt://hubblestack_nova_profiles/foo/ and salt://hubblestack_nova_profiles/bar, but only run audits with tags starting with "CIS"
+hubble hubble.audit foo,bar tags='CIS*'
+```
+### Configuration
+For the Nova module, configuration is done via Nova topfiles. Nova topfiles look very similar to SaltStack topfiles, except the top-level key is always `nova`, as Nova doesn't have environments.
+
+**hubblestack/hubblestack_data/top.nova**
+```sh
+nova:
+  '*':
+    - cve.scan-v2
+    - network.ssh
+    - network.smtp
+  'web*':
+    - cis.centos-7-level-1-scored-v1
+    - cis.centos-7-level-2-scored-v1
+  'G@os_family:debian':
+    - network.ssh
+    - cis.debian-7-level-1-scored: 'CIS*'
+```
+Additionally, all Nova topfile matches are compound matches, so you never need to define a match type like you do in SaltStack topfiles. Each list item is a string representing the dot-separated location of a yaml file which will be run with `hubble.audit`. You can also specify a tag glob to use as a filter for just that yaml file, using a colon after the yaml file (turning it into a dictionary).
See the last two lines in the yaml above for examples.
+
+Examples:
+```sh
+hubble hubble.top
+hubble hubble.top foo/bar/top.nova
+hubble hubble.top foo/bar.nova verbose=True
+```
+
+In some cases, your organization may want to skip certain audit checks for certain hosts. This is supported via compensating control configuration.
+
+You can skip a check globally by adding a `control: ` key to the check itself. This key should be added at the same level as the description and trigger pieces of a check. In this case, the check will never run, and will output under the `Controlled` results key.
+
+Nova also supports separate control profiles, for more fine-grained control using topfiles. You can use a separate YAML top-level key called `control`. Generally, you'll put this top-level key inside of a separate YAML file and only include it in the top-data for the hosts for which it is relevant.
+
+For these separate control configs, the audits will always run, whether they are controlled or not. However, controlled audits which fail will be converted from `Failure` to `Controlled` in a post-processing operation.
+
+The control config syntax is as follows:
+```sh
+control:
+  - CIS-2.1.4: This is the reason we control the check
+  - some_other_tag:
+      reason: This is the reason we control the check
+  - a_third_tag_with_no_reason
+```
+Note that providing a reason for the control is optional. Any of the three formats shown in the yaml list above will work.
+
+Once you have your compensating control config, just target the yaml to the hosts you want to control using your topfile. In this case, all the audits will still run, but if any of the controlled checks fail, they will be removed from `Failure` and added to `Controlled`, and will be treated as a `Success` for the purposes of compliance percentage.
 
-You can do `hubble -h` to see the available options.
+## Nebula
+Nebula is Hubble’s Insight system, which ties into osquery, allowing you to query your infrastructure as if it were a database.
This system can be used to take scheduled snapshots of your systems. -The first two commands you should run to make sure things are set up correctly -are `hubble --version` and `hubble test.ping`. If those run without issue -you're probably in business! +Nebula leverages the osquery_nebula execution module which requires the osquery binary to be installed. More information about osquery can be found at `https://osquery.io`. -## Single invocation +### Usage -Hubble supports one-off invocations of specific functions: +Nebula queries have been designed to give detailed insight into system activity. The queries can be found in the following file. +**hubblestack_nebula/hubblestack_nebula_queries.yaml** +```sh +fifteen_min: + - query_name: running_procs + query: SELECT p.name AS process, p.pid AS process_id, p.cmdline, p.cwd, p.on_disk, p.resident_size AS mem_used, p.parent, g.groupname, u.username AS user, p.path, h.md5, h.sha1, h.sha256 FROM processes AS p LEFT JOIN users AS u ON p.uid=u.uid LEFT JOIN groups AS g ON p.gid=g.gid LEFT JOIN hash AS h ON p.path=h.path; + - query_name: established_outbound + query: SELECT t.iso_8601 AS _time, pos.family, h.*, ltrim(pos.local_address, ':f') AS src, pos.local_port AS src_port, pos.remote_port AS dest_port, ltrim(remote_address, ':f') AS dest, name, p.path AS file_path, cmdline, pos.protocol, lp.protocol FROM process_open_sockets AS pos JOIN processes AS p ON p.pid=pos.pid LEFT JOIN time AS t LEFT JOIN (SELECT * FROM listening_ports) AS lp ON lp.port=pos.local_port AND lp.protocol=pos.protocol LEFT JOIN hash AS h ON h.path=p.path WHERE NOT remote_address='' AND NOT remote_address='::' AND NOT remote_address='0.0.0.0' AND NOT remote_address='127.0.0.1' AND port is NULL; + - query_name: listening_procs + query: SELECT t.iso_8601 AS _time, h.md5 AS md5, p.pid, name, ltrim(address, ':f') AS address, port, p.path AS file_path, cmdline, root, parent FROM listening_ports AS lp LEFT JOIN processes AS p ON lp.pid=p.pid LEFT JOIN time 
AS t LEFT JOIN hash AS h ON h.path=p.path WHERE NOT address='127.0.0.1'; + - query_name: suid_binaries + query: SELECT sb.*, t.iso_8601 AS _time FROM suid_bin AS sb JOIN time AS t; +hour: + - query_name: crontab + query: SELECT c.*,t.iso_8601 AS _time FROM crontab AS c JOIN time AS t; +day: + - query_name: rpm_packages + query: SELECT rpm.name, rpm.version, rpm.release, rpm.source AS package_source, rpm.size, rpm.sha1, rpm.arch, t.iso_8601 FROM rpm_packages AS rpm JOIN time AS t; ``` -[root@host1 hubble-v2]# hubble hubble.audit cis.centos-7-level-1-scored-v2-1-0 tags=CIS-3.\* -{'Compliance': '45%', - 'Failure': [{'CIS-3.4.2': 'Ensure /etc/hosts.allow is configured'}, - {'CIS-3.4.3': 'Ensure /etc/hosts.deny is configured'}, - {'CIS-3.6.2': 'Ensure default deny firewall policy'}, - {'CIS-3.6.3': 'Ensure loopback traffic is configured'}, - {'CIS-3.6.1_running': 'Ensure iptables is installed'}, - {'CIS-3.2.4': 'Ensure suspicious packets are logged'}, - {'CIS-3.2.2': 'Ensure ICMP redirects are not accepted'}, - {'CIS-3.2.3': 'Ensure secure ICMP redirects are not accepted'}, - {'CIS-3.1.2': 'Ensure packet redirect sending is disabled'}, - {'CIS-3.3.1': 'Ensure IPv6 router advertisements are not accepted'}, - {'CIS-3.3.2': 'Ensure IPv6 redirects are not accepted'}], - 'Success': [{'CIS-3.6.1_installed': 'Ensure iptables is installed'}, - {'CIS-3.4.1': 'Ensure TCP Wrappers is installed'}, - {'CIS-3.4.5': 'Ensure permissions on /etc/hosts.deny are 644'}, - {'CIS-3.4.4': 'Ensure permissions on /etc/hosts.allow are configured'}, - {'CIS-3.2.5': 'Ensure broadcast ICMP requests are ignored'}, - {'CIS-3.2.6': None}, - {'CIS-3.2.1': 'Ensure source routed packets are not accepted'}, - {'CIS-3.1.1': 'Ensure IP forwarding is disabled'}, - {'CIS-3.2.8': 'Ensure TCP SYN Cookies is enabled'}]} + +Nebula query data is best tracked in a central logging or similar system. However, if you would like to run the queries manually you can call the nebula execution module. 
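Each result row is a set of column/value pairs named by the SELECT aliases in the queries above. As a toy illustration of post-processing such results (the rows below are hypothetical, and the exact result structure depends on your returner), you could summarize `established_outbound` rows by destination:

```python
from collections import Counter

# Hypothetical rows using the established_outbound column aliases (dest, dest_port, name)
rows = [
    {"dest": "10.1.2.3", "dest_port": "443", "name": "curl"},
    {"dest": "10.1.2.3", "dest_port": "443", "name": "curl"},
    {"dest": "192.0.2.9", "dest_port": "22", "name": "ssh"},
]

# Count connections per destination endpoint to spot unusual egress
by_dest = Counter((row["dest"], row["dest_port"]) for row in rows)
for (dest, port), count in by_dest.most_common():
    print("%s:%s -> %d connection(s)" % (dest, port, count))
```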
+```sh
+query_group : Group of queries to run
+verbose : Defaults to False. If set to True, more information (such as the query which was run) will be included in the result.
+```
+Examples:
+```sh
+hubble nebula.queries day
+hubble nebula.queries hour verbose=True
+hubble nebula.queries fifteen_min
+```
+### Configuration
+For the Nebula module, configuration is done via Nebula topfiles. Nebula topfile functionality is similar to Nova topfiles.
+
+**top.nebula topfile**
+
+```sh
+nebula:
+  '*':
+    - hubblestack_nebula_queries
+  'sample_team':
+    - sample_team_nebula_queries
+```
+By default the Nebula topfile, `nebula.top`, includes `hubblestack_nebula_queries.yaml`, which contains the queries explained in the usage section above. If specific queries are required by teams, those queries can be added in another yaml file and included in the `nebula.top` topfile. Place this new yaml file at the path `hubblestack/hubblestack_data/hubblestack_nebula`.
+Examples for running `nebula.top`:
+```sh
+hubble nebula.top hour
+hubble nebula.top foo/bar/top.nebula hour
+hubble nebula.top fifteen_min verbose=True
+```
+## Pulsar
+
+Pulsar is designed to monitor for file system events, acting as a real-time File Integrity Monitoring (FIM) agent. Pulsar is composed of a custom Salt beacon that watches for these events and hooks into the returner system for alerting and reporting. In other words, you can receive real-time alerts for unscheduled file system modifications anywhere you want to receive them. We’ve designed Pulsar to be lightweight and to avoid affecting system performance. It simply watches for events and directly sends them to one of the Pulsar returner destinations.
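Conceptually, checksum-based FIM (cf. the `checksum: sha256` option in the Pulsar configuration) reduces to comparing digests of watched files over time. The sketch below is only an illustration of that idea, not Pulsar's implementation, which reacts to file system events rather than hashing on a schedule:

```python
import hashlib
import os
import tempfile

def snapshot(paths):
    """Map each existing file to its sha256 digest."""
    digests = {}
    for path in paths:
        if os.path.isfile(path):
            with open(path, "rb") as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed(before, after):
    """Files whose digest is new or differs between two snapshots."""
    return [p for p in after if before.get(p) != after[p]]

# Detect a modification between two snapshots of a scratch file
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("setting = 1\n")
    watched = [f.name]
before = snapshot(watched)
with open(watched[0], "w") as f:
    f.write("setting = 2\n")
print(changed(before, snapshot(watched)))  # the modified file is reported
```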
+
+### Usage
+Once Pulsar is configured there isn’t anything you need to do to interact with it. It simply runs quietly in the background and sends you alerts.
+
+### Configuration
+The Pulsar configuration can be found in the `hubblestack_pulsar_config.yaml` file. Every environment will have different needs and requirements, and we understand that, so we’ve designed Pulsar to be flexible.
+
+**hubblestack_pulsar_config.yaml**
+```sh
+/etc: { recurse: True, auto_add: True }
+/bin: { recurse: True, auto_add: True }
+/sbin: { recurse: True, auto_add: True }
+/boot: { recurse: True, auto_add: True }
+/usr/bin: { recurse: True, auto_add: True }
+/usr/sbin: { recurse: True, auto_add: True }
+/usr/local/bin: { recurse: True, auto_add: True }
+/usr/local/sbin: { recurse: True, auto_add: True }
+return: slack_pulsar
+checksum: sha256
+stats: True
+batch: False
+```
+
+Pulsar runs on a schedule, which is defined in `/etc/hubble/hubble`:
+
+**/etc/hubble/hubble**
+```sh
 schedule:
+  pulsar:
+    function: pulsar.process
+    seconds: 1
+    returner: splunk_pulsar_return
+    run_on_start: True
+  pulsar_canary:
+    function: pulsar.canary
+    seconds: 86400
   job1:
     function: hubble.audit
     seconds: 60
@@ -107,24 +229,45 @@ schedule:
     run_on_start: True
 ```
-Note that you need to have your hubblestack splunk returner configured in order
-to use the above block:
+In order to receive Pulsar notifications you’ll need to install the custom returners found in the Quasar repository.
+Example of using the Slack Pulsar returner to receive FIM notifications:
+```sh
+slack_pulsar:
+  as_user: true
+  username: calculon
+  channel: channel
+  api_key: xoxb-xxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx
+```
-hubblestack:
-  returner:
-    splunk:
-      - token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
-        indexer: splunk-indexer.domain.tld
-        index: hubble
-        sourcetype_nova: hubble_audit
-        sourcetype_nebula: hubble_osquery
-        sourcetype_pulsar: hubble_fim
-```
+#### Excluding Paths
+There may be certain paths that you want to exclude from this real-time FIM tool. This can be done using the `exclude:` keyword beneath any defined path in the `hubblestack_pulsar_config.yaml` file.
+
+```sh
+/var:
+  recurse: True
+  auto_add: True
+  exclude:
+    - /var/log
+    - /var/spool
+    - /var/cache
+    - /var/lock
+```
+
+When teams need to add specific configurations or exclusions as discussed above, this can be done via Pulsar topfiles.
-When using the scheduler, you can just run `hubble` in the foreground, or use
-the included sysvinit and systemd files to run it as a service in the
-background. You can also start it as a daemon without any scripts by using the
-`-d` argument.
+#### Pulsar topfile `top.pulsar`
-Use `-vvv` to turn on debug logging.
+
+For the Pulsar module, configuration is done via Pulsar topfiles:
+
+```sh
+pulsar:
+  '*':
+    - hubblestack_pulsar_config
+  'sample_team':
+    - sample_team_hubblestack_pulsar_config
+```
+By default the Pulsar topfile includes `hubblestack_pulsar_config`, which contains the default configurations. If specific configurations are required by teams, those can be added in another yaml file and included in the `pulsar.top` topfile.
Place this new yaml file at the path `hubblestack/hubblestack_data/hubblestack_pulsar` + +Examples for running pulsar.top: +```sh +hubble pulsar.top +hubble pulsar.top verbose=True +``` diff --git a/conf/hubble b/conf/hubble index 1df67616b..ce6e21b09 100644 --- a/conf/hubble +++ b/conf/hubble @@ -89,6 +89,8 @@ fileserver_backend: # sourcetype_nova: hubble_audit # sourcetype_nebula: hubble_osquery # sourcetype_pulsar: hubble_fim +# sourcetype_log: hubble_log +# splunklogging: True ## If you are instead using the slack returner, you'll need a block similar to ## this: diff --git a/hubblestack/__init__.py b/hubblestack/__init__.py index 83b6ab028..3db1b9fed 100644 --- a/hubblestack/__init__.py +++ b/hubblestack/__init__.py @@ -1 +1 @@ -__version__ = '2.2.4' +__version__ = '2.2.5' diff --git a/hubblestack/daemon.py b/hubblestack/daemon.py index cbe2a1399..d0cb5931b 100644 --- a/hubblestack/daemon.py +++ b/hubblestack/daemon.py @@ -18,6 +18,7 @@ import salt.utils import salt.utils.jid import salt.log.setup +import hubblestack.splunklogging from hubblestack import __version__ log = logging.getLogger(__name__) @@ -376,6 +377,14 @@ def load_config(): __salt__ = salt.loader.minion_mods(__opts__, utils=__utils__) __returners__ = salt.loader.returners(__opts__, __salt__) + if __salt__['config.get']('hubblestack:splunklogging', False): + hubblestack.splunklogging.__grains__ = __grains__ + hubblestack.splunklogging.__salt__ = __salt__ + root_logger = logging.getLogger() + handler = hubblestack.splunklogging.SplunkHandler() + handler.setLevel(logging.ERROR) + root_logger.addHandler(handler) + def parse_args(): ''' diff --git a/hubblestack/extmods/grains/configgrains.py b/hubblestack/extmods/grains/configgrains.py new file mode 100644 index 000000000..48cba419d --- /dev/null +++ b/hubblestack/extmods/grains/configgrains.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +''' +Custom config-defined grains module + +:maintainer: HubbleStack +:platform: All +:requires: SaltStack + +Allow 
users to collect a list of config directives and set them as custom grains.
+The list should be defined under the `hubblestack` key.
+
+The `grains` value should be a list of dictionaries. Each dictionary should have
+a single key, which will be set as the grain name. The dictionary's value is the
+config path to look up; the value found at that path becomes the grain's value.
+
+hubblestack:
+  grains:
+    - splunkindex: "hubblestack:returner:splunk:index"
+  returner:
+    splunk:
+      - token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
+        indexer: splunk-indexer.domain.tld
+        index: hubble
+        sourcetype_nova: hubble_audit
+'''
+
+import salt.modules.config
+
+salt.modules.config.__pillar__ = {}
+salt.modules.config.__grains__ = {}
+
+__salt__ = {'config.get': salt.modules.config.get}
+
+
+def configgrains():
+    '''
+    Given a list of config values, create custom grains with custom names.
+    The list comes from config.
+
+    Example:
+    hubblestack:
+      grains:
+        - splunkindex: "hubblestack:returner:splunk:index"
+    '''
+    grains = {}
+    salt.modules.config.__opts__ = __opts__
+
+    grains_to_make = __salt__['config.get']('hubblestack:grains', default=[])
+    for grain in grains_to_make:
+        for k, v in grain.iteritems():
+            grain_value = __salt__['config.get'](v, default=None)
+            if grain_value:
+                grains[k] = grain_value
+    return grains
diff --git a/hubblestack/extmods/modules/pulsar.py b/hubblestack/extmods/modules/pulsar.py
index 7f2f6cf76..72d6d5bab 100644
--- a/hubblestack/extmods/modules/pulsar.py
+++ b/hubblestack/extmods/modules/pulsar.py
@@ -393,7 +393,7 @@ def _dict_update(dest, upd, recursive_update=True, merge_lists=False):
             dest_subkey = None
         if isinstance(dest_subkey, collections.Mapping) \
                 and isinstance(val, collections.Mapping):
-            ret = update(dest_subkey, val, merge_lists=merge_lists)
+            ret = _dict_update(dest_subkey, val, merge_lists=merge_lists)
             dest[key] = ret
         elif isinstance(dest_subkey, list) \
                 and isinstance(val, list):
diff --git a/hubblestack/extmods/modules/win_pulsar.py b/hubblestack/extmods/modules/win_pulsar.py
index
6c1c2f78a..6d447bfa1 100644
--- a/hubblestack/extmods/modules/win_pulsar.py
+++ b/hubblestack/extmods/modules/win_pulsar.py
@@ -545,7 +545,7 @@ def _dict_update(dest, upd, recursive_update=True, merge_lists=False):
             dest_subkey = None
         if isinstance(dest_subkey, collections.Mapping) \
                 and isinstance(val, collections.Mapping):
-            ret = update(dest_subkey, val, merge_lists=merge_lists)
+            ret = _dict_update(dest_subkey, val, merge_lists=merge_lists)
             dest[key] = ret
         elif isinstance(dest_subkey, list) \
                 and isinstance(val, list):
diff --git a/hubblestack/files/hubblestack_nova/misc.py b/hubblestack/files/hubblestack_nova/misc.py
index e093b93b7..6e11b8b27 100644
--- a/hubblestack/files/hubblestack_nova/misc.py
+++ b/hubblestack/files/hubblestack_nova/misc.py
@@ -167,6 +167,14 @@ def _get_tags(data):
 # Begin function definitions
 ############################
+
+def _execute_shell_command(cmd):
+    '''
+    This function will execute the passed command in /bin/bash
+    '''
+    return __salt__['cmd.run'](cmd, python_shell=True, shell='/bin/bash', ignore_retcode=True)
+
+
 def _is_valid_home_directory(directory_path, check_slash_home=False):
     directory_path = None if directory_path is None else directory_path.strip()
     if directory_path is not None and directory_path != "" and os.path.isdir(directory_path):
@@ -177,11 +185,49 @@ def _is_valid_home_directory(directory_path, check_slash_home=False):
     return False
-def _execute_shell_command(cmd):
+
+def _is_permission_in_limit(max_permission, given_permission):
     '''
-    This function will execute passed command in /bin/shell
+    Return true only if given_permission is not more lenient than max_permission. In other words, if
+    r or w or x is present in given_permission but absent in max_permission, it should return False.
+    Takes as input two integer values from 0 to 7.
''' - return __salt__['cmd.run'](cmd, python_shell=True, shell='/bin/bash', ignore_retcode=True) + max_permission = int(max_permission) + given_permission = int(given_permission) + allowed_r = False + allowed_w = False + allowed_x = False + given_r = False + given_w = False + given_x = False + + if max_permission >= 4: + allowed_r = True + max_permission = max_permission - 4 + if max_permission >= 2: + allowed_w = True + max_permission = max_permission - 2 + if max_permission >= 1: + allowed_x = True + + if given_permission >= 4: + given_r = True + given_permission = given_permission - 4 + if given_permission >= 2: + given_w = True + given_permission = given_permission - 2 + if given_permission >= 1: + given_x = True + + if given_r and ( not allowed_r ): + return False + if given_w and ( not allowed_w ): + return False + if given_x and ( not allowed_x ): + return False + + return True + def check_all_ports_firewall_rules(reason=''): ''' @@ -199,63 +245,56 @@ def check_all_ports_firewall_rules(reason=''): if open_port not in firewall_ports: no_firewall_ports.append(open_port) - if len(no_firewall_ports) == 0: - return True - return str(no_firewall_ports) + return True if len(no_firewall_ports) == 0 else str(no_firewall_ports) + def check_password_fields_not_empty(reason=''): ''' Ensure password fields are not empty ''' result = _execute_shell_command('cat /etc/shadow | awk -F: \'($2 == "" ) { print $1 " does not have a password "}\'') - if result == '': - return True - return result + return True if result == '' else result + def ungrouped_files_or_dir(reason=''): ''' Ensure no ungrouped files or directories exist ''' result = _execute_shell_command('df --local -P | awk {\'if (NR!=1) print $6\'} | xargs -I \'{}\' find \'{}\' -xdev -nogroup') - if result == '': - return True - return result + return True if result == '' else result + def unowned_files_or_dir(reason=''): ''' Ensure no unowned files or directories exist ''' result = _execute_shell_command('df --local 
-P | awk {\'if (NR!=1) print $6\'} | xargs -I \'{}\' find \'{}\' -xdev -nouser') - if result == '': - return True - return result + return True if result == '' else result + def world_writable_file(reason=''): ''' Ensure no world writable files exist ''' result = _execute_shell_command('df --local -P | awk {\'if (NR!=1) print $6\'} | xargs -I \'{}\' find \'{}\' -xdev -type f -perm -0002') - if result == '': - return True - return result + return True if result == '' else result + def system_account_non_login(reason=''): ''' Ensure system accounts are non-login ''' result = _execute_shell_command('egrep -v "^\+" /etc/passwd | awk -F: \'($1!="root" && $1!="sync" && $1!="shutdown" && $1!="halt" && $3<500 && $7!="/sbin/nologin" && $7!="/bin/false") {print}\'') - if result == '': - return True - return result + return True if result == '' else result + def sticky_bit_on_world_writable_dirs(reason=''): ''' Ensure sticky bit is set on all world-writable directories ''' result = _execute_shell_command('df --local -P | awk {\'if (NR!=1) print $6\'} | xargs -I \'{}\' find \'{}\' -xdev -type d \( -perm -0002 -a ! 
-perm -1000 \) 2>/dev/null') - if result == '': - return True - return "There are failures" + return True if result == '' else "There are failures" + def default_group_for_root(reason=''): ''' @@ -263,59 +302,41 @@ def default_group_for_root(reason=''): ''' result = _execute_shell_command('grep "^root:" /etc/passwd | cut -f4 -d:') result = result.strip() - if result == '0': - return True - return False + return True if result == '0' else False + def root_is_only_uid_0_account(reason=''): ''' Ensure root is the only UID 0 account ''' result = _execute_shell_command('cat /etc/passwd | awk -F: \'($3 == 0) { print $1 }\'') - if result.strip() == 'root': - return True - return result - -def test_success(): - ''' - Automatically returns success - ''' - return True + return True if result.strip() == 'root' else result -def test_failure(): - ''' - Automatically returns failure, no reason - ''' - return False - - -def test_failure_reason(reason): - ''' - Automatically returns failure, with a reason (first arg) - ''' - return reason - -def test_mount_attrs(mount_name,attribute,check_type='hard'): +def test_mount_attrs(mount_name, attribute, check_type='hard'): ''' Ensure that a given directory is mounted with appropriate attributes If check_type is soft, then in absence of volume, True will be returned If check_type is hard, then in absence of volume, False will be returned ''' - #check that the path exists on system - command = 'test -e ' + mount_name + ' ; echo $?' 
-    output = _execute_shell_command( command)
-    if output.strip() == '1':
+    # check that the path exists on system
+    command = 'test -e ' + mount_name
+    results = __salt__['cmd.run_all'](command)
+    output = results['stdout']
+    retcode = results['retcode']
+    if str(retcode) == '1':
         return True if check_type == "soft" else (mount_name + " folder does not exist")
-    #if the path exits, proceed with following code
-    output = _execute_shell_command('mount | grep ' + mount_name)
-    if output.strip() == '':
+    # if the path exists, proceed with following code
+    output = __salt__['cmd.run']('mount')
+    if mount_name not in output:
         return True if check_type == "soft" else (mount_name + " is not mounted")
-    elif attribute not in output:
-        return str(output)
     else:
-        return True
+        for line in output.splitlines():
+            if mount_name in line and attribute not in line:
+                return str(line)
+        return True
+
 def check_time_synchronization(reason=''):
     '''
@@ -342,49 +363,6 @@ def restrict_permissions(path,permission):
     return given_permission
-def _is_permission_in_limit(max_permission,given_permission):
-    '''
-    Return true only if given_permission is not more linient that max_permission. In other words, if
-    r or w or x is present in given_permission but absent in max_permission, it should return False
-    Takes input two integer values from 0 to 7.
- ''' - max_permission = int(max_permission) - given_permission = int(given_permission) - allowed_r = False - allowed_w = False - allowed_x = False - given_r = False - given_w = False - given_x = False - - if max_permission >= 4: - allowed_r = True - max_permission = max_permission - 4 - if max_permission >= 2: - allowed_w = True - max_permission = max_permission - 2 - if max_permission >= 1: - allowed_x = True - - if given_permission >= 4: - given_r = True - given_permission = given_permission - 4 - if given_permission >= 2: - given_w = True - given_permission = given_permission - 2 - if given_permission >= 1: - given_x = True - - if given_r and ( not allowed_r ): - return False - if given_w and ( not allowed_w ): - return False - if given_x and ( not allowed_x ): - return False - - return True - - def check_path_integrity(reason=''): ''' Ensure that system PATH variable is not malformed. @@ -428,10 +406,7 @@ def check_path_integrity(reason=''): """ output = _execute_shell_command(script) - if output.strip() == '': - return True - else: - return output + return True if output.strip() == '' else output def check_duplicate_uids(reason=''): @@ -443,7 +418,6 @@ def check_duplicate_uids(reason=''): duplicate_uids = [k for k,v in Counter(uids).items() if v>1] if duplicate_uids is None or duplicate_uids == []: return True - return str(duplicate_uids) @@ -456,7 +430,6 @@ def check_duplicate_gids(reason=''): duplicate_gids = [k for k,v in Counter(gids).items() if v>1] if duplicate_gids is None or duplicate_gids == []: return True - return str(duplicate_gids) @@ -469,7 +442,6 @@ def check_duplicate_unames(reason=''): duplicate_unames = [k for k,v in Counter(unames).items() if v>1] if duplicate_unames is None or duplicate_unames == []: return True - return str(duplicate_unames) @@ -482,11 +454,10 @@ def check_duplicate_gnames(reason=''): duplicate_gnames = [k for k,v in Counter(gnames).items() if v>1] if duplicate_gnames is None or duplicate_gnames == []: return True - 
return str(duplicate_gnames) -def check_directory_files_permission(path,permission): +def check_directory_files_permission(path, permission): ''' Check all files permission inside a directory ''' @@ -497,10 +468,7 @@ def check_directory_files_permission(path,permission): per = restrict_permissions(file_in_directory, permission) if per is not True: bad_permission_files += [file_in_directory + ": Bad Permission - " + per + ":"] - if bad_permission_files == []: - return True - - return str(bad_permission_files) + return True if bad_permission_files == [] else str(bad_permission_files) def check_core_dumps(reason=''): @@ -524,11 +492,11 @@ def check_service_status(service_name, state): Return True otherwise state can be enabled or disabled. ''' - output = _execute_shell_command('systemctl is-enabled ' + service_name + ' >/dev/null 2>&1; echo $?') - if (state == "disabled" and output.strip() == "1") or (state == "enabled" and output.strip() == "0"): + output = __salt__['cmd.retcode']('systemctl is-enabled ' + service_name) + if (state == "disabled" and str(output) == "1") or (state == "enabled" and str(output) == "0"): return True else: - return _execute_shell_command('systemctl is-enabled ' + service_name + ' 2>/dev/null') + return __salt__['cmd.run_stdout']('systemctl is-enabled ' + service_name) def check_ssh_timeout_config(reason=''): ''' @@ -553,15 +521,13 @@ def check_unowned_files(reason=''): unowned_files = _execute_shell_command("df --local -P | awk 'NR!=1 {print $6}' | xargs -I '{}' find '{}' -xdev -nouser 2>/dev/null").strip() unowned_files = unowned_files.split('\n') if unowned_files != "" else [] - # The command above only searches local filesystems, there may still be compromised items on network mounted partitions. + # The command above only searches local filesystems, there may still be compromised items on network + # mounted partitions. 
# Following command will check each partition for unowned files unowned_partition_files = _execute_shell_command("mount | awk '{print $3}' | xargs -I '{}' find '{}' -xdev -nouser 2>/dev/null").strip() unowned_partition_files = unowned_partition_files.split('\n') if unowned_partition_files != "" else [] unowned_files = unowned_files + unowned_partition_files - if unowned_files == []: - return True - - return str(list(set(unowned_files))) + return True if unowned_files == [] else str(list(set(unowned_files))) def check_ungrouped_files(reason=''): @@ -571,15 +537,13 @@ def check_ungrouped_files(reason=''): ungrouped_files = _execute_shell_command("df --local -P | awk 'NR!=1 {print $6}' | xargs -I '{}' find '{}' -xdev -nogroup 2>/dev/null").strip() ungrouped_files = ungrouped_files.split('\n') if ungrouped_files != "" else [] - # The command above only searches local filesystems, there may still be compromised items on network mounted partitions. + # The command above only searches local filesystems, there may still be compromised items on network + # mounted partitions. 
# Following command will check each partition for unowned files ungrouped_partition_files = _execute_shell_command("mount | awk '{print $3}' | xargs -I '{}' find '{}' -xdev -nogroup 2>/dev/null").strip() ungrouped_partition_files = ungrouped_partition_files.split('\n') if ungrouped_partition_files != "" else [] ungrouped_files = ungrouped_files + ungrouped_partition_files - if ungrouped_files == []: - return True - - return str(list(set(ungrouped_files))) + return True if ungrouped_files == [] else str(list(set(ungrouped_files))) def check_all_users_home_directory(max_system_uid): @@ -596,14 +560,11 @@ def check_all_users_home_directory(max_system_uid): if len(user_uid_dir) < 3: user_uid_dir = user_uid_dir + ['']*(3-len(user_uid_dir)) if user_uid_dir[1].isdigit(): - if not _is_valid_home_directory(user_uid_dir[2], True) and int(user_uid_dir[1]) >= max_system_uid and user_uid_dir[0] is not "nfsnobody": + if not _is_valid_home_directory(user_uid_dir[2], True) and int(user_uid_dir[1]) >= max_system_uid and user_uid_dir[0] != "nfsnobody": error += ["Either home directory " + user_uid_dir[2] + " of user " + user_uid_dir[0] + " is invalid or does not exist."] else: error += ["User " + user_uid_dir[0] + " has invalid uid " + user_uid_dir[1]] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_home_directory_permissions(reason=''): @@ -623,10 +584,7 @@ def check_users_home_directory_permissions(reason=''): if result is not True: error += ["permission on home directory " + user_dir[1] + " of user " + user_dir[0] + " is wrong: " + result] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_own_their_home(max_system_uid): @@ -647,17 +605,14 @@ def check_users_own_their_home(max_system_uid): if not _is_valid_home_directory(user_uid_dir[2]): if int(user_uid_dir[1]) >= max_system_uid: error += ["Either home directory " + user_uid_dir[2] + " of user " + 
user_uid_dir[0] + " is invalid or does not exist."] - elif int(user_uid_dir[1]) >= max_system_uid and user_uid_dir[0] is not "nfsnobody": - owner = _execute_shell_command("stat -L -c \"%U\" \"" + user_uid_dir[2] + "\"") + elif int(user_uid_dir[1]) >= max_system_uid and user_uid_dir[0] != "nfsnobody": + owner = __salt__['cmd.run']("stat -L -c \"%U\" \"" + user_uid_dir[2] + "\"") if owner != user_uid_dir[0]: error += ["The home directory " + user_uid_dir[2] + " of user " + user_uid_dir[0] + " is owned by " + owner] else: error += ["User " + user_uid_dir[0] + " has invalid uid " + user_uid_dir[1]] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_dot_files(reason=''): @@ -685,10 +640,7 @@ def check_users_dot_files(reason=''): if file_permission[2] in ["2", "3", "6", "7"]: error += ["Other Write permission set on file " + dot_file + " for user " + user_dir[0]] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_forward_files(reason=''): @@ -708,10 +660,7 @@ def check_users_forward_files(reason=''): if forward_file is not None and os.path.isfile(forward_file): error += ["Home directory: " + user_dir[1] + ", for user: " + user_dir[0] + " has " + forward_file + " file"] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_netrc_files(reason=''): @@ -731,10 +680,7 @@ def check_users_netrc_files(reason=''): if netrc_file is not None and os.path.isfile(netrc_file): error += ["Home directory: " + user_dir[1] + ", for user: " + user_dir[0] + " has .netrc file"] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_groups_validity(reason=''): @@ -751,10 +697,8 @@ def check_groups_validity(reason=''): if str(group_presence_validity) != "0": invalid_groups += ["Invalid groupid: " + group_id + " in /etc/passwd file"] - if invalid_groups == 
[]: - return True + return True if invalid_groups == [] else str(invalid_groups) - return str(invalid_groups) def ensure_reverse_path_filtering(reason=''): ''' @@ -767,7 +711,7 @@ def ensure_reverse_path_filtering(reason=''): error_list.append( "net.ipv4.conf.all.rp_filter not found") search_results = re.findall("rp_filter = (\d+)",output) result = int(search_results[0]) - if( result < 1): + if result < 1: error_list.append( "net.ipv4.conf.all.rp_filter value set to " + str(result)) command = "sysctl net.ipv4.conf.default.rp_filter 2> /dev/null" output = _execute_shell_command(command) @@ -775,7 +719,7 @@ def ensure_reverse_path_filtering(reason=''): error_list.append( "net.ipv4.conf.default.rp_filter not found") search_results = re.findall("rp_filter = (\d+)",output) result = int(search_results[0]) - if( result < 1): + if result < 1: error_list.append( "net.ipv4.conf.default.rp_filter value set to " + str(result)) if len(error_list) > 0 : return str(error_list) @@ -783,6 +727,225 @@ def ensure_reverse_path_filtering(reason=''): return True +def check_users_rhosts_files(reason=''): + ''' + Ensure no users have .rhosts files + ''' + + users_dirs = _execute_shell_command("cat /etc/passwd | egrep -v '(root|halt|sync|shutdown)' | awk -F: '($7 != \"/sbin/nologin\") {print $1\" \"$6}'").strip() + users_dirs = users_dirs.split('\n') if users_dirs != "" else [] + error = [] + for user_dir in users_dirs: + user_dir = user_dir.split() + if len(user_dir) < 2: + user_dir = user_dir + ['']*(2-len(user_dir)) + if _is_valid_home_directory(user_dir[1]): + rhosts_file = _execute_shell_command("find " + user_dir[1] + " -maxdepth 1 -name \".rhosts\"").strip() + if rhosts_file is not None and os.path.isfile(rhosts_file): + error += ["Home directory: " + user_dir[1] + ", for user: " + user_dir[0] + " has .rhosts file"] + return True if error == [] else str(error) + + +def check_netrc_files_accessibility(reason=''): + ''' + Ensure users' .netrc Files are not group or world accessible + 
''' + + script = """ + for dir in `cat /etc/passwd | egrep -v '(root|sync|halt|shutdown)' | awk -F: '($7 != "/sbin/nologin") { print $6 }'`; do + for file in $dir/.netrc; do + if [ ! -h "$file" -a -f "$file" ]; then + fileperm=`ls -ld $file | cut -f1 -d" "` + if [ `echo $fileperm | cut -c5` != "-" ]; then + echo "Group Read set on $file" + fi + if [ `echo $fileperm | cut -c6` != "-" ]; then + echo "Group Write set on $file" + fi + if [ `echo $fileperm | cut -c7` != "-" ]; then + echo "Group Execute set on $file" + fi + if [ `echo $fileperm | cut -c8` != "-" ]; then + echo "Other Read set on $file" + fi + if [ `echo $fileperm | cut -c9` != "-" ]; then + echo "Other Write set on $file" + fi + if [ `echo $fileperm | cut -c10` != "-" ]; then + echo "Other Execute set on $file" + fi + fi + done + done + + """ + output = _execute_shell_command(script) + return True if output.strip() == '' else output + + +def _grep(path, + pattern, + *args): + ''' + Grep for a string in the specified file + + .. note:: + This function's return value is slated for refinement in future + versions of Salt + + path + Path to the file to be searched + + .. note:: + Globbing is supported (i.e. ``/var/log/foo/*.log``, but if globbing + is being used then the path should be quoted to keep the shell from + attempting to expand the glob expression. + + pattern + Pattern to match. For example: ``test``, or ``a[0-5]`` + + opts + Additional command-line flags to pass to the grep command. For example: + ``-v``, or ``-i -B2`` + + .. note:: + The options should come after a double-dash (as shown in the + examples below) to keep Salt's own argument parser from + interpreting them. + + CLI Example: + + .. 
code-block:: bash
+
+        salt '*' file.grep /etc/passwd nobody
+        salt '*' file.grep /etc/sysconfig/network-scripts/ifcfg-eth0 ipaddr -- -i
+        salt '*' file.grep /etc/sysconfig/network-scripts/ifcfg-eth0 ipaddr -- -i -B2
+        salt '*' file.grep "/etc/sysconfig/network-scripts/*" ipaddr -- -i -l
+    '''
+    path = os.path.expanduser(path)
+
+    if args:
+        options = ' '.join(args)
+    else:
+        options = ''
+    cmd = (
+        r'''grep {options} {pattern} {path}'''
+        .format(
+            options=options,
+            pattern=pattern,
+            path=path,
+        )
+    )
+
+    try:
+        ret = __salt__['cmd.run_all'](cmd, python_shell=False, ignore_retcode=True)
+    except (IOError, OSError) as exc:
+        raise CommandExecutionError(exc.strerror)
+
+    return ret
+
+
+def check_list_values(file_path, match_pattern, value_pattern, grep_arg, white_list, black_list, value_delimter):
+    '''
+    This function first gets the line matching the given match_pattern.
+    The value pattern is then extracted from that line, and the extracted value
+    is split on value_delimter to get the list of values.
+    match_pattern must be a regex pattern for the grep command.
+    value_pattern must be a regex for Python's re module to extract the matched values.
+    Only one of white_list and black_list is allowed.
+    white_list and black_list should have comma(,) separated values.
+
+    Example for CIS-2.2.1.2
+    ensure_ntp_configured:
+      data:
+        CentOS Linux-7:
+          tag: 2.2.1.2
+          function: check_list_values
+          args:
+            - /etc/ntp.conf
+            - '^restrict.*default'
+            - '^restrict.*default(.*)$'
+            - null
+            - kod,nomodify,notrap,nopeer,noquery
+            - null
+            - ' '
+          description: Ensure ntp is configured
+    '''
+
+    list_delimter = ","
+
+    if black_list is not None and white_list is not None:
+        return "Both black_list and white_list values are not allowed."
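The whitelist/blacklist comparison that `check_list_values` performs in the hunk above boils down to set arithmetic on the extracted values. A minimal standalone sketch of just that logic, with hypothetical sample data (the function name and sample values are illustrative, not part of the patch):

```python
def compare_values(matched_values, white_list=None, black_list=None):
    """Return the offending values, or an empty list if the check passes."""
    if white_list is not None:
        # every extracted value must appear in the whitelist
        return sorted(set(matched_values) - set(white_list.split(',')))
    # no extracted value may appear in the blacklist
    return sorted(set(matched_values) & set(black_list.split(',')))

# values as they might be extracted from '^restrict.*default(.*)$' in ntp.conf
values = 'kod nomodify notrap nopeer noquery'.split()
print(compare_values(values, white_list='kod,nomodify,notrap,nopeer,noquery'))  # []
print(compare_values(values, black_list='limited,kod'))  # ['kod']
```

A non-empty result maps to the `error` entries the real function accumulates ("values not in whitelist" / "values in blacklist").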
+ + grep_args = [] if grep_arg is None else [grep_arg] + matched_lines = _grep(file_path, match_pattern, *grep_args).get('stdout') + if not matched_lines: + return "No match found for the given pattern: " + str(match_pattern) + + matched_lines = matched_lines.split('\n') if matched_lines is not None else [] + error = [] + for matched_line in matched_lines: + regexp = re.compile(value_pattern) + matched_values = regexp.search(matched_line).group(1) + matched_values = matched_values.strip().split(value_delimter) if matched_values is not None else [] + if white_list is not None: + values_not_in_white_list = list(set(matched_values) - set(white_list.strip().split(list_delimter))) + if values_not_in_white_list != []: + error += ["values not in whitelist: " + str(values_not_in_white_list)] + else: + values_in_black_list = list(set(matched_values).intersection(set(black_list.strip().split(list_delimter)))) + if values_in_black_list != []: + error += ["values in blacklist: " + str(values_in_black_list)] + + return True if error == [] else str(error) + + +def mail_conf_check(reason=''): + ''' + Ensure mail transfer agent is configured for local-only mode + ''' + valid_addresses = ["localhost", "127.0.0.1", "::1"] + mail_addresses = _execute_shell_command("grep '^[[:blank:]]*inet_interfaces' /etc/postfix/main.cf | awk -F'=' '{print $2}'").strip() + mail_addresses = mail_addresses.split(',') if mail_addresses != "" else [] + mail_addresses = map(str.strip, mail_addresses) + invalid_addresses = list(set(mail_addresses) - set(valid_addresses)) + + return str(invalid_addresses) if invalid_addresses != [] else True + +def check_if_any_pkg_installed(args): + ''' + :param args: Comma separated list of packages those needs to be verified + :return: True if any of the input package is installed else False + ''' + result = False + for pkg in args.split(','): + if __salt__['pkg.version'](pkg): + result = True + break + return result + + +def test_success(): + ''' + Automatically 
returns success + ''' + return True + + +def test_failure(): + ''' + Automatically returns failure, no reason + ''' + return False + + +def test_failure_reason(reason): + ''' + Automatically returns failure, with a reason (first arg) + ''' + return reason + + FUNCTION_MAP = { 'check_all_ports_firewall_rules': check_all_ports_firewall_rules, 'check_password_fields_not_empty': check_password_fields_not_empty, @@ -818,4 +981,10 @@ def ensure_reverse_path_filtering(reason=''): 'check_users_netrc_files': check_users_netrc_files, 'check_groups_validity': check_groups_validity, 'ensure_reverse_path_filtering': ensure_reverse_path_filtering, + 'check_users_rhosts_files': check_users_rhosts_files, + 'check_netrc_files_accessibility': check_netrc_files_accessibility, + 'check_list_values': check_list_values, + 'mail_conf_check': mail_conf_check, + 'check_if_any_pkg_installed':check_if_any_pkg_installed, + } diff --git a/hubblestack/files/hubblestack_nova/mount.py b/hubblestack/files/hubblestack_nova/mount.py new file mode 100644 index 000000000..b12c9fc9a --- /dev/null +++ b/hubblestack/files/hubblestack_nova/mount.py @@ -0,0 +1,225 @@ +# -*- encoding: utf-8 -*- +''' +HubbleStack Nova plugin for verifying attributes associated with a mounted partition. + +Supports both blacklisting and whitelisting patterns. Blacklisted patterns must +not be found in the specified file. Whitelisted patterns must be found in the +specified file. + +:maintainer: HubbleStack / basepi +:maturity: 2017.8.29 +:platform: All +:requires: SaltStack + +This audit module requires yaml data to execute. It will search the local +directory for any .yaml files, and if it finds a top-level 'mount' key, it will +use that data. 
+
+Sample YAML data, with inline comments:
+
+
+mount:
+  whitelist: # or blacklist
+    ensure_nodev_option_on_/tmp:  # unique ID
+      data:
+        CentOS Linux-6:  # osfinger grain
+          - '/tmp':  # path of partition
+              tag: 'CIS-1.1.1'  # audit tag
+              attribute: nodev  # attribute which must exist for the mounted partition
+              check_type: soft  # if 'hard', the check fails if the path doesn't exist or
+                                # if it is not a mounted partition. If 'soft', the test passes
+                                # for such cases (default: hard)
+'''
+from __future__ import absolute_import
+import logging
+
+import fnmatch
+import yaml
+import os
+import copy
+import salt.utils
+import re
+
+from distutils.version import LooseVersion
+
+log = logging.getLogger(__name__)
+
+
+def __virtual__():
+    if salt.utils.is_windows():
+        return False, 'This audit module only runs on linux'
+    return True
+
+
+def audit(data_list, tags, debug=False, **kwargs):
+    '''
+    Run the mount audits contained in the YAML files processed by __virtual__
+    '''
+    __data__ = {}
+    for profile, data in data_list:
+        _merge_yaml(__data__, data, profile)
+    __tags__ = _get_tags(__data__)
+
+    if debug:
+        log.debug('mount audit __data__:')
+        log.debug(__data__)
+        log.debug('mount audit __tags__:')
+        log.debug(__tags__)
+
+    ret = {'Success': [], 'Failure': [], 'Controlled': []}
+    for tag in __tags__:
+        if fnmatch.fnmatch(tag, tags):
+            for tag_data in __tags__[tag]:
+                if 'control' in tag_data:
+                    ret['Controlled'].append(tag_data)
+                    continue
+
+                name = tag_data.get('name')
+                audittype = tag_data.get('type')
+
+                if 'attribute' not in tag_data:
+                    log.error('No attribute found for mount audit {0}, file {1}'
+                              .format(tag, name))
+                    tag_data = copy.deepcopy(tag_data)
+                    tag_data['error'] = 'No attribute found'
+                    ret['Failure'].append(tag_data)
+                    continue
+
+                attribute = tag_data.get('attribute')
+
+                check_type = 'hard'
+                if 'check_type' in tag_data:
+                    check_type = tag_data.get('check_type')
+
+                if check_type not in ['hard', 'soft']:
+                    log.error('Unrecognized option: ' + check_type)
+                    tag_data = copy.deepcopy(tag_data)
+                    tag_data['error'] = 'check_type can only be hard or soft'
+                    ret['Failure'].append(tag_data)
+                    continue
+
+                found = _check_mount_attribute(name, attribute, check_type)
+
+                if audittype == 'blacklist':
+                    if found:
+                        ret['Failure'].append(tag_data)
+                    else:
+                        ret['Success'].append(tag_data)
+
+                elif audittype == 'whitelist':
+                    if found:
+                        ret['Success'].append(tag_data)
+                    else:
+                        ret['Failure'].append(tag_data)
+
+    return ret
+
+
+def _merge_yaml(ret, data, profile=None):
+    '''
+    Merge two yaml dicts together at the mount:blacklist and mount:whitelist level
+    '''
+    if 'mount' not in ret:
+        ret['mount'] = {}
+    for topkey in ('blacklist', 'whitelist'):
+        if topkey in data.get('mount', {}):
+            if topkey not in ret['mount']:
+                ret['mount'][topkey] = []
+            for key, val in data['mount'][topkey].iteritems():
+                if profile and isinstance(val, dict):
+                    val['nova_profile'] = profile
+                ret['mount'][topkey].append({key: val})
+    return ret
+
+
+def _get_tags(data):
+    '''
+    Retrieve all the tags for this distro from the yaml
+    '''
+    ret = {}
+    distro = __grains__.get('osfinger')
+
+    for toplist, toplevel in data.get('mount', {}).iteritems():
+        # mount:blacklist
+        for audit_dict in toplevel:
+            # mount:blacklist:0
+            for audit_id, audit_data in audit_dict.iteritems():
+                # mount:blacklist:0:telnet
+                tags_dict = audit_data.get('data', {})
+                # mount:blacklist:0:telnet:data
+                tags = None
+                for osfinger in tags_dict:
+                    if osfinger == '*':
+                        continue
+                    osfinger_list = [finger.strip() for finger in osfinger.split(',')]
+                    for osfinger_glob in osfinger_list:
+                        if fnmatch.fnmatch(distro, osfinger_glob):
+                            tags = tags_dict.get(osfinger)
+                            break
+                    if tags is not None:
+                        break
+                # If we didn't find a match, check for a '*'
+                if tags is None:
+                    tags = tags_dict.get('*', [])
+                # mount:blacklist:0:telnet:data:Debian-8
+                if isinstance(tags, dict):
+                    # malformed yaml, convert to list of dicts
+                    tmp = []
+                    for name, tag in tags.iteritems():
+                        tmp.append({name: tag})
+                    tags = tmp
+                for item in tags:
+                    for name, tag in item.iteritems():
+                        tag_data = {}
+                        # Whitelist could have a dictionary, not a string
+                        if isinstance(tag, dict):
+                            tag_data = copy.deepcopy(tag)
+                            tag = tag_data.pop('tag')
+                        if tag not in ret:
+                            ret[tag] = []
+                        formatted_data = {'name': name,
+                                          'tag': tag,
+                                          'module': 'mount',
+                                          'type': toplist}
+                        formatted_data.update(tag_data)
+                        formatted_data.update(audit_data)
+                        formatted_data.pop('data')
+                        ret[tag].append(formatted_data)
+    return ret
+
+
+def _check_mount_attribute(path, attribute, check_type):
+    '''
+    This function checks if the partition at a given path is mounted with a particular attribute or not.
+    If 'check_type' is 'hard', the function returns False if the specified path does not exist, or if it
+    is not a mounted partition. If 'check_type' is 'soft', the function returns True in such cases.
+    '''
+    if not os.path.exists(path):
+        if check_type == 'hard':
+            return False
+        else:
+            return True
+
+    mount_object = __salt__['mount.active']()
+
+    if path in mount_object:
+        attributes = mount_object.get(path)
+        opts = attributes.get('opts')
+        if attribute in opts:
+            return True
+        else:
+            return False
+    else:
+        if check_type == 'hard':
+            return False
+        else:
+            return True
diff --git a/hubblestack/files/hubblestack_nova/stat_nova.py b/hubblestack/files/hubblestack_nova/stat_nova.py
index 995d792af..7ea2da4ba 100644
--- a/hubblestack/files/hubblestack_nova/stat_nova.py
+++ b/hubblestack/files/hubblestack_nova/stat_nova.py
@@ -28,6 +28,8 @@
         - '/etc/grub2/grub.cfg':
             tag: 'CIS-1.5.1'
             user: 'root'
+            mode: 644
+            allow_more_strict: True  # file permissions can be 644 or more strict [default = False]
             uid: 0
             group: 'root'
             gid: 0
@@ -83,10 +85,18 @@ def audit(data_list, tags, debug=False, **kwargs):
                     continue
                 name = tag_data['name']
                 expected = {}
-                for e in ['mode', 'user', 'uid', 'group', 'gid']:
+                for e in ['mode', 'user', 'uid', 'group', 'gid', 'allow_more_strict']:
                     if e in tag_data:
expected[e] = tag_data[e] + if 'allow_more_strict' in expected.keys() and 'mode' not in expected.keys(): + reason_dict = {} + reason = "'allow_more_strict' tag can't be specified without 'mode' tag" + reason_dict['allow_more_strict'] = reason + tag_data['reason'] = reason_dict + ret['Failure'].append(tag_data) + continue + #getting the stats using salt salt_ret = __salt__['file.stats'](name) if not salt_ret: @@ -99,14 +109,36 @@ def audit(data_list, tags, debug=False, **kwargs): passed = True reason_dict = {} for e in expected.keys(): + if e == 'allow_more_strict': + continue r = salt_ret[e] - if e == 'mode' and r != '0': - r = r[1:] - if str(expected[e]) != str(r): - passed = False - reason = { 'expected': str(expected[e]), - 'current': str(r) } - reason_dict[e] = reason + + if e == 'mode': + if r != '0': + r = r[1:] + allow_more_strict = False + if 'allow_more_strict' in expected.keys(): + allow_more_strict = expected['allow_more_strict'] + if not isinstance(allow_more_strict, bool ): + passed = False + reason = (str(allow_more_strict) + ' is not a valid boolean') + reason_dict[e] = reason + + else: + subcheck_passed = _check_mode(str(expected[e]), str(r), allow_more_strict) + if not subcheck_passed: + passed = False + reason = { 'expected': str(expected[e]), + 'allow_more_strict': str(allow_more_strict), + 'current': str(r) } + reason_dict[e] = reason + else: + subcheck_passed = (str(expected[e]) == str(r)) + if not subcheck_passed: + passed = False + reason = { 'expected': str(expected[e]), + 'current': str(r) } + reason_dict[e] = reason if reason_dict: tag_data['reason'] = reason_dict @@ -177,3 +209,73 @@ def _get_tags(data): formatted_data.pop('data') ret[tag].append(formatted_data) return ret + + +def _check_mode(max_permission, given_permission, allow_more_strict): + ''' + Checks whether a file's permission are equal to a given permission or more restrictive. + Permission is a string of 3 digits [0-7]. 
'given_permission' is the actual permission on the file,
+    'max_permission' is the expected permission on this file. Set 'allow_more_strict' to True
+    to allow more restrictive permissions as well. Example:
+
+        _check_mode('644', '644', False) returns True
+        _check_mode('644', '600', False) returns False
+        _check_mode('644', '644', True)  returns True
+        _check_mode('644', '600', True)  returns True
+        _check_mode('644', '655', True)  returns False
+    '''
+    if given_permission == '0':
+        return True
+
+    if not allow_more_strict:
+        return max_permission == given_permission
+
+    if (_is_permission_in_limit(max_permission[0], given_permission[0]) and
+            _is_permission_in_limit(max_permission[1], given_permission[1]) and
+            _is_permission_in_limit(max_permission[2], given_permission[2])):
+        return True
+
+    return False
+
+
+def _is_permission_in_limit(max_permission, given_permission):
+    '''
+    Return True only if given_permission is not more lenient than max_permission. In other words, if
+    r or w or x is present in given_permission but absent in max_permission, it should return False.
+    Takes as input two integer values from 0 to 7.
+ ''' + max_permission = int(max_permission) + given_permission = int(given_permission) + allowed_r = False + allowed_w = False + allowed_x = False + given_r = False + given_w = False + given_x = False + + if max_permission >= 4: + allowed_r = True + max_permission = max_permission - 4 + if max_permission >= 2: + allowed_w = True + max_permission = max_permission - 2 + if max_permission >= 1: + allowed_x = True + + if given_permission >= 4: + given_r = True + given_permission = given_permission - 4 + if given_permission >= 2: + given_w = True + given_permission = given_permission - 2 + if given_permission >= 1: + given_x = True + + if given_r and ( not allowed_r ): + return False + if given_w and ( not allowed_w ): + return False + if given_x and ( not allowed_x ): + return False + + return True \ No newline at end of file diff --git a/hubblestack/files/hubblestack_nova/sysctl.py b/hubblestack/files/hubblestack_nova/sysctl.py index 03b35bfcc..84f608ac0 100644 --- a/hubblestack/files/hubblestack_nova/sysctl.py +++ b/hubblestack/files/hubblestack_nova/sysctl.py @@ -82,6 +82,7 @@ def audit(data_list, tags, debug=False, **kwargs): if str(salt_ret).startswith('error'): passed = False if str(salt_ret) != str(match_output): + tag_data['failure_reason'] = str(salt_ret) passed = False if passed: ret['Success'].append(tag_data) @@ -147,4 +148,4 @@ def _get_tags(data): formatted_data.update(audit_data) formatted_data.pop('data') ret[tag].append(formatted_data) - return ret + return ret \ No newline at end of file diff --git a/hubblestack/files/hubblestack_nova/systemctl.py b/hubblestack/files/hubblestack_nova/systemctl.py new file mode 100644 index 000000000..2e1ff6711 --- /dev/null +++ b/hubblestack/files/hubblestack_nova/systemctl.py @@ -0,0 +1,163 @@ +# -*- encoding: utf-8 -*- +''' +HubbleStack Nova plugin for using systemctl to verify status of a given service. + +Supports both blacklisting and whitelisting patterns. Blacklisted services must +not be enabled. 
Whitelisted services must be enabled. + +:maintainer: HubbleStack / basepi +:maturity: 2017.8.29 +:platform: All +:requires: SaltStack + +This audit module requires yaml data to execute. It will search the local +directory for any .yaml files, and if it finds a top-level 'systemctl' key, it will +use that data. + +Sample YAML data, with inline comments: + + +systemctl: + whitelist: # or blacklist + dhcpd-disabled: # unique ID + data: + CentOS Linux-7: # osfinger grain + tag: 'CIS-1.1.1' # audit tag + service: dhcpd # mandatory field. + description: Ensure DHCP Server is not enabled + alert: email + trigger: state + + +''' +from __future__ import absolute_import +import logging + +import fnmatch +import yaml +import os +import copy +import salt.utils +import re + +from distutils.version import LooseVersion + +log = logging.getLogger(__name__) + + +def __virtual__(): + if salt.utils.is_windows(): + return False, 'This audit module only runs on linux' + return True + + +def audit(data_list, tags, debug=False, **kwargs): + ''' + Run the systemctl audits contained in the YAML files processed by __virtual__ + ''' + __data__ = {} + for profile, data in data_list: + _merge_yaml(__data__, data, profile) + __tags__ = _get_tags(__data__) + + if debug: + log.debug('systemctl audit __data__:') + log.debug(__data__) + log.debug('systemctl audit __tags__:') + log.debug(__tags__) + + ret = {'Success': [], 'Failure': [], 'Controlled': []} + for tag in __tags__: + if fnmatch.fnmatch(tag, tags): + for tag_data in __tags__[tag]: + if 'control' in tag_data: + ret['Controlled'].append(tag_data) + continue + name = tag_data['name'] + audittype = tag_data['type'] + + enabled = __salt__['service.enabled'](name) + # Blacklisted service (must not be running or not found) + if audittype == 'blacklist': + if not enabled: + ret['Success'].append(tag_data) + else: + ret['Failure'].append(tag_data) + # Whitelisted pattern (must be found and running) + elif audittype == 'whitelist': + if enabled: 
+ ret['Success'].append(tag_data) + else: + ret['Failure'].append(tag_data) + + return ret + + +def _merge_yaml(ret, data, profile=None): + ''' + Merge two yaml dicts together at the systemctl:blacklist and systemctl:whitelist level + ''' + if 'systemctl' not in ret: + ret['systemctl'] = {} + for topkey in ('blacklist', 'whitelist'): + if topkey in data.get('systemctl', {}): + if topkey not in ret['systemctl']: + ret['systemctl'][topkey] = [] + for key, val in data['systemctl'][topkey].iteritems(): + if profile and isinstance(val, dict): + val['nova_profile'] = profile + ret['systemctl'][topkey].append({key: val}) + + return ret + + +def _get_tags(data): + ''' + Retrieve all the tags for this distro from the yaml + ''' + ret = {} + distro = __grains__.get('osfinger') + for toplist, toplevel in data.get('systemctl', {}).iteritems(): + for audit_dict in toplevel: + for audit_id, audit_data in audit_dict.iteritems(): + tags_dict = audit_data.get('data', {}) + tags = None + for osfinger in tags_dict: + if osfinger == '*': + continue + osfinger_list = [finger.strip() for finger in osfinger.split(',')] + for osfinger_glob in osfinger_list: + if fnmatch.fnmatch(distro, osfinger_glob): + tags = tags_dict.get(osfinger) + break + if tags is not None: + break + # If we didn't find a match, check for a '*' + if tags is None: + tags = tags_dict.get('*', []) + # systemctl:blacklist:0:telnet:data:Debian-8 + if isinstance(tags, dict): + # malformed yaml, convert to list of dicts + tmp = [] + for name, tag in tags.iteritems(): + tmp.append({name: tag}) + tags = tmp + for item in tags: + for name, tag in item.iteritems(): + tag_data = {} + # Whitelist could have a dictionary, not a string + if isinstance(tag, dict): + tag_data = copy.deepcopy(tag) + tag = tag_data.pop('tag') + if tag not in ret: + ret[tag] = [] + formatted_data = {'name': name, + 'tag': tag, + 'module': 'systemctl', + 'type': toplist} + formatted_data.update(tag_data) + formatted_data.update(audit_data) + 
formatted_data.pop('data')
+                        ret[tag].append(formatted_data)
+
+    return ret
diff --git a/hubblestack/splunklogging.py b/hubblestack/splunklogging.py
new file mode 100644
index 000000000..604df24c3
--- /dev/null
+++ b/hubblestack/splunklogging.py
@@ -0,0 +1,303 @@
+'''
+Hubblestack python log handler for splunk
+
+Uses the same configuration as the rest of the splunk returners, returns to
+the same destination but with an alternate sourcetype (``hubble_log`` by
+default)
+
+.. code-block:: yaml
+
+    hubblestack:
+      returner:
+        splunk:
+          - token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
+            indexer: splunk-indexer.domain.tld
+            index: hubble
+            sourcetype_log: hubble_log
+
+You can also add a `custom_fields` argument, which is a list of keys to add to
+events using the results of config.get(). These new keys will be prefixed with
+'custom_' to prevent conflicts. The values of these keys should be strings or
+lists (lists will be sent as CSV strings); do not choose grains or pillar values
+with complex values or they will be skipped:
+
+..
code-block:: yaml + + hubblestack: + returner: + splunk: + - token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX + indexer: splunk-indexer.domain.tld + index: hubble + sourcetype_log: hubble_log + custom_fields: + - site + - product_group +''' +import socket +# Import cloud details +from hubblestack.extmods.returners.cloud_details import get_cloud_details + +# Imports for http event forwarder +import requests +import json +import time +from datetime import datetime + +import copy + +import logging + +_max_content_bytes = 100000 +http_event_collector_SSL_verify = False +http_event_collector_debug = False + +hec = None + + +class SplunkHandler(logging.Handler): + ''' + Log handler for splunk + ''' + def __init__(self): + super(SplunkHandler, self).__init__() + + self.opts_list = _get_options() + self.clouds = get_cloud_details() + self.endpoint_list = [] + + for opts in self.opts_list: + http_event_collector_key = opts['token'] + http_event_collector_host = opts['indexer'] + http_event_collector_port = opts['port'] + hec_ssl = opts['http_event_server_ssl'] + proxy = opts['proxy'] + timeout = opts['timeout'] + custom_fields = opts['custom_fields'] + + # Set up the fields to be extracted at index time. The field values must be strings. + # Note that these fields will also still be available in the event data + index_extracted_fields = ['aws_instance_id', 'aws_account_id', 'azure_vmId'] + try: + index_extracted_fields.extend(opts['index_extracted_fields']) + except TypeError: + pass + + # Set up the collector + hec = http_event_collector(http_event_collector_key, http_event_collector_host, http_event_port=http_event_collector_port, http_event_server_ssl=hec_ssl, proxy=proxy, timeout=timeout) + + minion_id = __grains__['id'] + master = __grains__['master'] + fqdn = __grains__['fqdn'] + # Sometimes fqdn is blank. 
If it is, replace it with minion_id + fqdn = fqdn if fqdn else minion_id + try: + fqdn_ip4 = __grains__['fqdn_ip4'][0] + except IndexError: + fqdn_ip4 = __grains__['ipv4'][0] + if fqdn_ip4.startswith('127.'): + for ip4_addr in __grains__['ipv4']: + if ip4_addr and not ip4_addr.startswith('127.'): + fqdn_ip4 = ip4_addr + break + + event = {} + event.update({'master': master}) + event.update({'minion_id': minion_id}) + event.update({'dest_host': fqdn}) + event.update({'dest_ip': fqdn_ip4}) + + for cloud in self.clouds: + event.update(cloud) + + for custom_field in custom_fields: + custom_field_name = 'custom_' + custom_field + custom_field_value = __salt__['config.get'](custom_field, '') + if isinstance(custom_field_value, str): + event.update({custom_field_name: custom_field_value}) + elif isinstance(custom_field_value, list): + custom_field_value = ','.join(custom_field_value) + event.update({custom_field_name: custom_field_value}) + + payload = {} + payload.update({'host': fqdn}) + payload.update({'index': opts['index']}) + payload.update({'sourcetype': opts['sourcetype']}) + + # Potentially add metadata fields: + fields = {} + for item in index_extracted_fields: + if item in event and not isinstance(event[item], (list, dict, tuple)): + fields[item] = str(event[item]) + if fields: + payload.update({'fields': fields}) + + self.endpoint_list.append((hec, event, payload)) + + def emit(self, record): + ''' + Emit a single record using the hec/event template/payload template + generated in __init__() + ''' + log_entry = self.format_record(record) + for hec, event, payload in self.endpoint_list: + event = copy.deepcopy(event) + payload = copy.deepcopy(payload) + event.update(log_entry) + payload['event'] = event + hec.batchEvent(payload, eventtime=time.time()) + hec.flushBatch() + return True + + def format_record(self, record): + ''' + Format the log record into a dictionary for easy insertion into a + splunk event dictionary + ''' + log_entry = {'message': 
record.message, + 'level': record.levelname, + 'timestamp': record.asctime, + 'loggername': record.name, + } + return log_entry + + +def _get_options(): + if __salt__['config.get']('hubblestack:returner:splunk'): + splunk_opts = [] + returner_opts = __salt__['config.get']('hubblestack:returner:splunk') + if not isinstance(returner_opts, list): + returner_opts = [returner_opts] + for opt in returner_opts: + processed = {} + processed['token'] = opt.get('token') + processed['indexer'] = opt.get('indexer') + processed['port'] = str(opt.get('port', '8088')) + processed['index'] = opt.get('index') + processed['custom_fields'] = opt.get('custom_fields', []) + processed['sourcetype'] = opt.get('sourcetype_log', 'hubble_log') + processed['http_event_server_ssl'] = opt.get('hec_ssl', True) + processed['proxy'] = opt.get('proxy', {}) + processed['timeout'] = opt.get('timeout', 9.05) + processed['index_extracted_fields'] = opt.get('index_extracted_fields', []) + splunk_opts.append(processed) + return splunk_opts + else: + raise Exception('Cannot find splunk config at `hubblestack:returner:splunk`!') + + +# Thanks to George Starcher for the http_event_collector class (https://github.com/georgestarcher/) +# Default batch max size to match splunk's default limits for max byte +# See http_input stanza in limits.conf; note in testing I had to limit to 100,000 to avoid http event collector breaking connection +# Auto flush will occur if next event payload will exceed limit + +class http_event_collector: + + def __init__(self, token, http_event_server, host='', http_event_port='8088', http_event_server_ssl=True, max_bytes=_max_content_bytes, proxy=None, timeout=9.05): + self.timeout = timeout + self.token = token + self.batchEvents = [] + self.maxByteLength = max_bytes + self.currentByteLength = 0 + self.server_uri = [] + if proxy and http_event_server_ssl: + self.proxy = {'https': 'https://{0}'.format(proxy)} + elif proxy: + self.proxy = {'http': 'http://{0}'.format(proxy)} + else: 
+ self.proxy = {} + + # Set host to specified value or default to localhostname if no value provided + if host: + self.host = host + else: + self.host = socket.gethostname() + + # Build and set server_uri for http event collector + # Defaults to SSL if flag not passed + # Defaults to port 8088 if port not passed + + servers = http_event_server + if not isinstance(servers, list): + servers = [servers] + for server in servers: + if http_event_server_ssl: + self.server_uri.append(['https://%s:%s/services/collector/event' % (server, http_event_port), True]) + else: + self.server_uri.append(['http://%s:%s/services/collector/event' % (server, http_event_port), True]) + + if http_event_collector_debug: + print self.token + print self.server_uri + + def sendEvent(self, payload, eventtime=''): + # Method to immediately send an event to the http event collector + + headers = {'Authorization': 'Splunk ' + self.token} + + # If eventtime in epoch not passed as optional argument use current system time in epoch + if not eventtime: + eventtime = str(int(time.time())) + + # Fill in local hostname if not manually populated + if 'host' not in payload: + payload.update({'host': self.host}) + + # Update time value on payload if need to use system time + data = {'time': eventtime} + data.update(payload) + + # send event to the first configured http event collector endpoint + # (server_uri is a list of [url, flag] pairs, so index in for the url) + r = requests.post(self.server_uri[0][0], data=json.dumps(data), headers=headers, verify=http_event_collector_SSL_verify, proxies=self.proxy) + + # Print debug info if flag set + if http_event_collector_debug: + log.debug(r.text) + log.debug(data) + + def batchEvent(self, payload, eventtime=''): + # Method to store the event in a batch to flush later + + # Fill in local hostname if not manually populated + if 'host' not in payload: + payload.update({'host': self.host}) + + payloadLength = len(json.dumps(payload)) + + if (self.currentByteLength + payloadLength) > self.maxByteLength: + self.flushBatch() + # Print debug info if flag set + if 
http_event_collector_debug: + print 'auto flushing' + else: + self.currentByteLength = self.currentByteLength + payloadLength + + # If eventtime in epoch not passed as optional argument use current system time in epoch + if not eventtime: + eventtime = str(int(time.time())) + + # Update time value on payload if need to use system time + data = {'time': eventtime} + data.update(payload) + + self.batchEvents.append(json.dumps(data)) + + def flushBatch(self): + # Method to flush the batch list of events + + if len(self.batchEvents) > 0: + headers = {'Authorization': 'Splunk ' + self.token} + self.server_uri = [x for x in self.server_uri if x[1] is not False] + for server in self.server_uri: + try: + r = requests.post(server[0], data=' '.join(self.batchEvents), headers=headers, verify=http_event_collector_SSL_verify, proxies=self.proxy, timeout=self.timeout) + r.raise_for_status() + server[1] = True + break + except requests.exceptions.RequestException: + log.info('Request to splunk server "%s" failed. Marking as bad.' 
% server[0]) + server[1] = False + except Exception as e: + log.error('Request to splunk threw an error: {0}'.format(e)) + self.batchEvents = [] + self.currentByteLength = 0 diff --git a/pkg/amazonlinux2016.09/Dockerfile b/pkg/amazonlinux2016.09/Dockerfile index 2659b040c..c4a4b73bf 100644 --- a/pkg/amazonlinux2016.09/Dockerfile +++ b/pkg/amazonlinux2016.09/Dockerfile @@ -91,8 +91,8 @@ RUN yum install -y ruby ruby-devel rpmbuild rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/amazonlinux2017.03/Dockerfile b/pkg/amazonlinux2017.03/Dockerfile index f27a163da..6f9046209 100644 --- a/pkg/amazonlinux2017.03/Dockerfile +++ b/pkg/amazonlinux2017.03/Dockerfile @@ -91,8 +91,8 @@ RUN yum install -y ruby ruby-devel rpmbuild rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/build_debs.sh b/pkg/build_debs.sh deleted file mode 100755 index 4498bd6c9..000000000 --- a/pkg/build_debs.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/bin/bash - -set -x # echo on - -_user=`id -u` - -# Check if the current user is root -if [ "$_user" == "0" ] -then - echo "This script should not be run as root ..." - echo "Please run this script as regular user with sudo privileges ..." - echo "Exiting ..." 
- exit -fi - -rm -rf build -rm -rf dist - -mkdir -p build -mkdir -p dist - -bash ./init_pkg.sh -y -cp ../hubble.tar.gz dist/hubble.tar.gz -mv ../hubble.tar.gz build/hubble.tar.gz -mkdir build/hubblestack-2.2.4 -tar -xzvf build/hubble.tar.gz -C build/hubblestack-2.2.4 -mkdir -p build/hubblestack-2.2.4/etc/init.d -cp ./hubble build/hubblestack-2.2.4/etc/init.d -mkdir -p build/hubblestack-2.2.4/usr/lib/systemd/system -cp ./hubble.service build/hubblestack-2.2.4/usr/lib/systemd/system -cp -f ../conf/hubble build/hubblestack-2.2.4/etc/hubble/hubble -cd build/hubblestack-2.2.4 - -sudo apt-get install -y ruby ruby-dev rubygems gcc make -sudo gem install --no-ri --no-rdoc fpm -mkdir -p usr/bin -ln -s /opt/hubble/hubble usr/bin/hubble -ln -s /opt/osquery/osqueryd usr/bin/osqueryd -ln -s /opt/osquery/osqueryi usr/bin/osqueryi -fpm -s dir -t deb \ - -n hubblestack \ - -v 2.2.4-1 \ - -d 'git' \ - --config-files /etc/hubble/hubble --config-files /etc/osquery/osquery.conf \ - --deb-no-default-config-files \ - etc/hubble etc/osquery etc/init.d opt usr/bin -cp hubblestack_2.2.4-1_amd64.deb ../../dist/ diff --git a/pkg/build_rpms.sh b/pkg/build_rpms.sh deleted file mode 100755 index 3401fb55b..000000000 --- a/pkg/build_rpms.sh +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash - -set -x # echo on - -_user=`id -u` - -# Check if the current user is root -if [ "$_user" == "0" ] -then - echo "This script should not be run as root ..." - echo "Please run this script as regular user with sudo privileges ..." - echo "Exiting ..." 
- exit -fi - -rm -rf build -rm -rf dist - -mkdir -p build -mkdir -p dist - -bash ./init_pkg.sh -y -cp ../hubble.tar.gz dist/hubble.tar.gz -mv ../hubble.tar.gz build/hubble.tar.gz -mkdir build/hubblestack-2.2.4 -tar -xzvf build/hubble.tar.gz -C build/hubblestack-2.2.4 -mkdir -p build/hubblestack-2.2.4/etc/init.d -cp ./hubble build/hubblestack-2.2.4/etc/init.d -mkdir -p build/hubblestack-2.2.4/usr/lib/systemd/system -cp ./hubble.service build/hubblestack-2.2.4/usr/lib/systemd/system -cp -f ../conf/hubble build/hubblestack-2.2.4/etc/hubble/hubble -cd build -tar -czvf hubblestack-2.2.4.tar.gz hubblestack-2.2.4/ -mkdir -p rpmbuild/{RPMS,SRPMS,BUILD,SOURCES,SPECS,tmp} - -cp hubblestack-2.2.4.tar.gz rpmbuild/SOURCES/ -cd rpmbuild - -cp ../../specs/* SPECS/ - -rpmbuild --define "_topdir $(pwd)" --define "_tmppath %{_topdir}/tmp" -ba SPECS/hubblestack-el6.spec -cp RPMS/x86_64/hubblestack-2.2.4-1.x86_64.rpm ../../dist/hubblestack-2.2.4-1.el6.x86_64.rpm -rpmbuild --define "_topdir $(pwd)" --define "_tmppath %{_topdir}/tmp" -ba SPECS/hubblestack-el7.spec -cp RPMS/x86_64/hubblestack-2.2.4-1.x86_64.rpm ../../dist/hubblestack-2.2.4-1.el7.x86_64.rpm diff --git a/pkg/centos6/Dockerfile b/pkg/centos6/Dockerfile index ecad4e7aa..8e212b5f4 100644 --- a/pkg/centos6/Dockerfile +++ b/pkg/centos6/Dockerfile @@ -93,8 +93,8 @@ RUN yum install -y rpmbuild gcc make rh-ruby23 rh-ruby23-ruby-devel \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/centos7/Dockerfile b/pkg/centos7/Dockerfile index b10fceb79..031f5f735 100644 --- a/pkg/centos7/Dockerfile +++ b/pkg/centos7/Dockerfile @@ -90,8 +90,8 @@ RUN yum install -y 
ruby ruby-devel rpmbuild rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/coreos/Dockerfile b/pkg/coreos/Dockerfile index 78a7675aa..5406ac1d5 100644 --- a/pkg/coreos/Dockerfile +++ b/pkg/coreos/Dockerfile @@ -17,7 +17,7 @@ RUN mkdir -p /etc/osquery /var/log/osquery /etc/hubble/hubble.d /opt/hubble /opt #osquery should be built first since requirements for other packages can interfere with osquery dependencies #to build, osquery scripts want sudo and a user to sudo with. #to pin to a different version change the following envirnment variable -ENV OSQUERY_SRC_VERSION=2.6.0 +ENV OSQUERY_SRC_VERSION=2.7.0 ENV OSQUERY_BUILD_USER=osquerybuilder ENV OSQUERY_GIT_URL=https://github.com/facebook/osquery.git RUN apt-get -y install git make python ruby sudo @@ -34,7 +34,7 @@ RUN cd /home/"$OSQUERY_BUILD_USER" \ #these homebrew hashes need to be current. hashes in osquery git repo are often out of date for the tags we check out and try to build. #this is a problem and they are aware of it. 
let the magic hashes commence: && sed -i 's,^\(HOMEBREW_CORE=\).*,\1'941ca36839ea354031846d73ad538e1e44e673f4',' tools/provision.sh \ - && sed -i 's,^\(LINUXBREW_CORE=\).*,\1'abc5c5782c5850f2deff1f3d463945f90f2feaac',' tools/provision.sh \ + && sed -i 's,^\(LINUXBREW_CORE=\).*,\1'f54281a496bb7d3dd2f46b2f3067193d05f5013b',' tools/provision.sh \ && sed -i 's,^\(HOMEBREW_BREW=\).*,\1'ac2cbd2137006ebfe84d8584ccdcb5d78c1130d9',' tools/provision.sh \ && sed -i 's,^\(LINUXBREW_BREW=\).*,\1'20bcce2c176469cec271b46d523eef1510217436',' tools/provision.sh \ && make sysprep \ @@ -64,7 +64,8 @@ RUN apt-get -y install \ #libgit2 install start #must precede pyinstaller requirements ENV LIBGIT2_SRC_URL=https://github.com/libgit2/libgit2/archive/v0.26.0.tar.gz -ENV LIBGIT2_SRC_SHA256=4ac70a2bbdf7a304ad2a9fb2c53ad3c8694be0dbec4f1fce0f3cd0cda14fb3b9 +#it turns out github provided release files can change. so even though the code hopefully hasn't changed, the hash has. +ENV LIBGIT2_SRC_SHA256=6a62393e0ceb37d02fe0d5707713f504e7acac9006ef33da1e88960bd78b6eac ENV LIBGIT2_SRC_VERSION=0.26.0 ENV LIBGIT2TEMP=/tmp/libgit2temp RUN mkdir -p "$LIBGIT2TEMP" \ @@ -87,8 +88,8 @@ RUN pip install --upgrade pip \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/debian7/Dockerfile b/pkg/debian7/Dockerfile index 8535819b9..a27fdbdc1 100644 --- a/pkg/debian7/Dockerfile +++ b/pkg/debian7/Dockerfile @@ -17,7 +17,7 @@ RUN mkdir -p /etc/osquery /var/log/osquery /etc/hubble/hubble.d /opt/hubble /opt #osquery should be built first since requirements for other packages can interfere with osquery dependencies #to build, osquery scripts 
want sudo and a user to sudo with. #to pin to a different version change the following envirnment variable -ENV OSQUERY_SRC_VERSION=2.6.0 +ENV OSQUERY_SRC_VERSION=2.7.0 ENV OSQUERY_BUILD_USER=osquerybuilder ENV OSQUERY_GIT_URL=https://github.com/facebook/osquery.git RUN apt-get -y install git make python ruby sudo locales @@ -37,7 +37,7 @@ RUN cd /home/"$OSQUERY_BUILD_USER" \ #these homebrew hashes need to be current. hashes in osquery git repo are often out of date for the tags we check out and try to build. #this is a problem and they are aware of it. let the magic hashes commence: && sed -i 's,^\(HOMEBREW_CORE=\).*,\1'941ca36839ea354031846d73ad538e1e44e673f4',' tools/provision.sh \ - && sed -i 's,^\(LINUXBREW_CORE=\).*,\1'abc5c5782c5850f2deff1f3d463945f90f2feaac',' tools/provision.sh \ + && sed -i 's,^\(LINUXBREW_CORE=\).*,\1'f54281a496bb7d3dd2f46b2f3067193d05f5013b',' tools/provision.sh \ && sed -i 's,^\(HOMEBREW_BREW=\).*,\1'ac2cbd2137006ebfe84d8584ccdcb5d78c1130d9',' tools/provision.sh \ && sed -i 's,^\(LINUXBREW_BREW=\).*,\1'20bcce2c176469cec271b46d523eef1510217436',' tools/provision.sh \ && make sysprep \ @@ -85,7 +85,8 @@ RUN mkdir -p "$CMAKE_TEMP" \ #must be preceded by cmake install #must precede pyinstaller requirements ENV LIBGIT2_SRC_URL=https://github.com/libgit2/libgit2/archive/v0.26.0.tar.gz -ENV LIBGIT2_SRC_SHA256=4ac70a2bbdf7a304ad2a9fb2c53ad3c8694be0dbec4f1fce0f3cd0cda14fb3b9 +#it turns out github provided release files can change. so even though the code hopefully hasn't changed, the hash has. 
+ENV LIBGIT2_SRC_SHA256=6a62393e0ceb37d02fe0d5707713f504e7acac9006ef33da1e88960bd78b6eac ENV LIBGIT2_SRC_VERSION=0.26.0 ENV LIBGIT2TEMP=/tmp/libgit2temp RUN mkdir -p "$LIBGIT2TEMP" \ @@ -114,8 +115,8 @@ RUN apt-get install -y ruby ruby-dev rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/debian8/Dockerfile b/pkg/debian8/Dockerfile index 42a1992a6..b3e8286ac 100644 --- a/pkg/debian8/Dockerfile +++ b/pkg/debian8/Dockerfile @@ -96,8 +96,8 @@ RUN apt-get install -y ruby ruby-dev rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/debian9/Dockerfile b/pkg/debian9/Dockerfile index d20553c0e..47a130409 100644 --- a/pkg/debian9/Dockerfile +++ b/pkg/debian9/Dockerfile @@ -92,8 +92,8 @@ RUN apt-get install -y ruby ruby-dev rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git 
a/pkg/specs/hubblestack-el6.spec b/pkg/specs/hubblestack-el6.spec deleted file mode 100644 index 3083b2e83..000000000 --- a/pkg/specs/hubblestack-el6.spec +++ /dev/null @@ -1,92 +0,0 @@ -# Don't try fancy stuff like debuginfo, which is useless on binary-only -# packages. Don't strip binary too -# Be sure buildpolicy set to do nothing -%define __spec_install_post %{nil} -%define debug_package %{nil} -%define __os_install_post %{_dbpath}/brp-compress -# Don't fail out because we're not packaging the other distro's service files -%define _unpackaged_files_terminate_build 0 - -Summary: Hubblestack is a module, open-source security compliance framework -Name: hubblestack -Version: 2.2.4 -Release: 1 -License: Apache 2.0 -Group: Development/Tools -SOURCE0: %{name}-%{version}.tar.gz -URL: https://hubblestack.io -Autoreq: 0 -Autoprov: 0 -Requires: git -Provides: osquery - -BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root - -%description -%{summary} - -%prep -%setup -q - -%build -# Empty section. - -%install -rm -rf %{buildroot} -mkdir -p %{buildroot} -mkdir -p %{buildroot}/usr/bin -ln -s /opt/hubble/hubble %{buildroot}/usr/bin/hubble -ln -s /opt/osquery/osqueryi %{buildroot}/usr/bin/osqueryi -ln -s /opt/osquery/osqueryd %{buildroot}/usr/bin/osqueryd - -# in builddir -cp -a * %{buildroot} - - -%clean -rm -rf %{buildroot} - - -%files -%config(noreplace) /etc/hubble/hubble -%config /etc/osquery/osquery.conf -%config /etc/osquery/osquery.flags -/etc/init.d/hubble -/opt/* -/usr/bin/* - -%changelog -* Fri Apr 7 2017 Colton Myers 2.1.7-1 -- Force config and logs to 600 permissions to hide tokens -- Splunk returners: Fix for hosts with misconfigured FQDN (no localhost IPs, please!) 
- -* Mon Apr 3 2017 Colton Myers 2.1.6-1 -- Fix pulsar loading -- Fix splay in scheduler - -* Wed Mar 22 2017 Colton Myers 2.1.5-1 -- Reduce fileserver frequency by default -- Fix pidfile management -- Add %config macros -- Multi-endpoint support in splunk returners - -* Tue Mar 7 2017 Colton Myers 2.1.4-1 -- Consolidate pillar and allow for multiple splunk endpoints in splunk returners -- Move output formatting code out of nova modules into hubble.py -- Add error handling for problems in config file - -* Tue Feb 28 2017 Colton Myers 2.1.3-1 -- Add Debian packaging -- Add uptime fallback query -- Fix for blank hosts when fqdn doesn't return anything in returners -- Add AWS information to events from splunk returners -- Turn off http_event_collector_debug everywhere - -* Mon Feb 13 2017 Colton Myers 2.1.2-1 -- Fix the changelog order - -* Mon Feb 13 2017 Colton Myers 2.1.1-1 -- Remove autoreq, add unit files - -* Wed Feb 8 2017 Colton Myers 2.1.0-1 -- First Build diff --git a/pkg/specs/hubblestack-el7.spec b/pkg/specs/hubblestack-el7.spec deleted file mode 100644 index 2fedfec49..000000000 --- a/pkg/specs/hubblestack-el7.spec +++ /dev/null @@ -1,92 +0,0 @@ -# Don't try fancy stuff like debuginfo, which is useless on binary-only -# packages. Don't strip binary too -# Be sure buildpolicy set to do nothing -%define __spec_install_post %{nil} -%define debug_package %{nil} -%define __os_install_post %{_dbpath}/brp-compress -# Don't fail out because we're not packaging the other distro's service files -%define _unpackaged_files_terminate_build 0 - -Summary: Hubblestack is a module, open-source security compliance framework -Name: hubblestack -Version: 2.2.4 -Release: 1 -License: Apache 2.0 -Group: Development/Tools -SOURCE0: %{name}-%{version}.tar.gz -URL: https://hubblestack.io -Autoreq: 0 -Autoprov: 0 -Requires: git -Provides: osquery - -BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root - -%description -%{summary} - -%prep -%setup -q - -%build -# Empty section. 
- -%install -rm -rf %{buildroot} -mkdir -p %{buildroot} -mkdir -p %{buildroot}/usr/bin -ln -s /opt/hubble/hubble %{buildroot}/usr/bin/hubble -ln -s /opt/osquery/osqueryi %{buildroot}/usr/bin/osqueryi -ln -s /opt/osquery/osqueryd %{buildroot}/usr/bin/osqueryd - -# in builddir -cp -a * %{buildroot} - - -%clean -rm -rf %{buildroot} - - -%files -%config(noreplace) /etc/hubble/hubble -%config /etc/osquery/osquery.conf -%config /etc/osquery/osquery.flags -/opt/* -/usr/bin/* -/usr/lib/* - -%changelog -* Fri Apr 7 2017 Colton Myers 2.1.7-1 -- Force config and logs to 600 permissions to hide tokens -- Splunk returners: Fix for hosts with misconfigured FQDN (no localhost IPs, please!) - -* Mon Apr 3 2017 Colton Myers 2.1.6-1 -- Fix pulsar loading -- Fix splay in scheduler - -* Wed Mar 22 2017 Colton Myers 2.1.5-1 -- Reduce fileserver frequency by default -- Fix pidfile management -- Add %config macros -- Multi-endpoint support in splunk returners - -* Tue Mar 7 2017 Colton Myers 2.1.4-1 -- Consolidate pillar and allow for multiple splunk endpoints in splunk returners -- Move output formatting code out of nova modules into hubble.py -- Add error handling for problems in config file - -* Tue Feb 28 2017 Colton Myers 2.1.3-1 -- Add Debian packaging -- Add uptime fallback query -- Fix for blank hosts when fqdn doesn't return anything in returners -- Add AWS information to events from splunk returners -- Turn off http_event_collector_debug everywhere - -* Mon Feb 13 2017 Colton Myers 2.1.2-1 -- Fix the changelog order - -* Mon Feb 13 2017 Colton Myers 2.1.1-1 -- Remove autoreq, add unit files - -* Wed Feb 8 2017 Colton Myers 2.1.0-1 -- First Build
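The batching rule in the new `splunklogging.py` is worth spelling out: `batchEvent` accumulates JSON-encoded payloads and, whenever the *next* payload would push the buffer past `max_bytes` (100,000 by default, chosen to stay under Splunk's HEC limits), flushes the pending batch first, then appends. A minimal, self-contained sketch of that technique follows; the class and attribute names here are illustrative, not from the patch, and the "flush" just records the joined batch instead of POSTing it:

```python
import json

class MiniBatcher:
    """Toy model of http_event_collector's size-threshold batching:
    flush the pending batch before an event that would exceed max_bytes."""

    def __init__(self, max_bytes=100):
        self.max_bytes = max_bytes
        self.batch = []          # JSON-encoded events awaiting flush
        self.current_bytes = 0   # running size of self.batch
        self.flushes = []        # stands in for the HTTP POSTs

    def add(self, payload):
        encoded = json.dumps(payload)
        # If this event would overflow the buffer, flush what we have first
        if self.current_bytes + len(encoded) > self.max_bytes:
            self.flush()
        self.batch.append(encoded)
        self.current_bytes += len(encoded)

    def flush(self):
        if self.batch:
            # flushBatch joins batched events with spaces into one request body
            self.flushes.append(' '.join(self.batch))
        self.batch = []
        self.current_bytes = 0

b = MiniBatcher(max_bytes=50)
for i in range(5):
    b.add({'event': 'message-%d' % i})
b.flush()  # final explicit flush, as SplunkHandler.emit does
print(len(b.flushes))  # → 3
```

Each `{"event": "message-N"}` payload is 22 bytes, so with a 50-byte cap the five events land in three flushes (two, two, one). Note this sketch also updates the byte counter on every append, where the patched `batchEvent` only does so in its `else` branch.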