From 2b5a4f29ef9c30357c3cdd2949bc2f7a566aec7b Mon Sep 17 00:00:00 2001 From: root Date: Wed, 13 Sep 2017 15:40:00 +0000 Subject: [PATCH 01/35] changes added to README.md --- README.md | 279 ++++++++++++++++++++++++++++++++++++------------------ 1 file changed, 185 insertions(+), 94 deletions(-) diff --git a/README.md b/README.md index 8ce8902ae..a47bcb971 100644 --- a/README.md +++ b/README.md @@ -1,127 +1,218 @@ -# Hubble +# HUBBLE -An alternate version of Hubblestack which can be run without an existing -SaltStack infrastructure. +Hubble is a modular, open-source security and compliance auditing framework built on SaltStack. It is an alternate version of Hubblestack which can be run without an existing SaltStack infrastructure. Hubble provides on-demand profile-based auditing, real-time security event notifications, automated remediation, alerting and reporting. It also reports the security information to Splunk. This document describes installation, configuration and general use of Hubble. -# Packaging / Installing +#### Installation -## Installing using setup.py - -```bash +Installing using setup.py
```sh
sudo yum install git -y
git clone https://github.com/hubblestack/hubble
cd hubble
sudo python setup.py install
```
Installs a hubble "binary" into /usr/bin/.

Hubble has three components, which are discussed below:

#### Nova
Nova is Hubble's auditing system.
##### Usage
There are four primary functions in the Nova module:
- hubble.sync: syncs the hubblestack_nova_profiles/ and hubblestack_nova/ directories to the minion(s).
- hubble.load: loads the synced audit modules and their yaml configuration files.
- hubble.audit: audits the minion(s) using the YAML profile(s) you provide as comma-separated arguments. hubble.audit takes two optional arguments. The first is a comma-separated list of paths. These paths can be files or directories within the hubblestack_nova_profiles directory. 
The second argument allows for toggling Nova configuration, such as verbosity, level of detail, etc. If hubble.audit is run without targeting any audit configs or directories, it will instead run hubble.top with no arguments. hubble.audit will return a list of audits which were successful, and a list of audits which failed. +- hubble.top: audits the minion(s) using the top.nova configuration. + +Here are some example calls for hubble.audit: +```sh +- Run the cve scanner and the CIS profile: +hubble hubble.audit cve.scan-v2,cis.centos-7-level-1-scored-v1 +- Run hubble.top with the default topfile (top.nova) +hubble hubble.top +- Run all yaml configs and tags under salt://hubblestack_nova_profiles/foo/ and salt://hubblestack_nova_profiles/bar, but only run audits with tags starting with "CIS" +hubble hubble.audit foo,bar tags='CIS*' +``` +##### Configuration +For Nova module, configurations can be done via Nova topfiles. Nova topfiles look very similar to saltstack topfiles, except the top-level key is always nova, as nova doesn’t have environments. +** hubblestack/hubblestack_data/top.nova ** +```sh +nova: + '*': + - cve.scan-v2 + - network.ssh + - network.smtp + 'web*': + - cis.centos-7-level-1-scored-v1 + - cis.centos-7-level-2-scored-v1 + 'G@os_family:debian': + - network.ssh + - cis.debian-7-level-1-scored: 'CIS*' +``` +Additionally, all nova topfile matches are compound matches, so you never need to define a match type like you do in saltstack topfiles. Each list item is a string representing the dot-separated location of a yaml file which will be run with hubble.audit. You can also specify a tag glob to use as a filter for just that yaml file, using a colon after the yaml file (turning it into a dictionary). See the last two lines in the yaml above for examples. -Installs a `hubble` "binary" into `/usr/bin/`. 
- -## Building standalone packages (CentOS) - -```bash -sudo yum install git -y -git clone https://github.com/hubblestack/hubble -cd hubble/pkg -./build_rpms.sh # note the lack of sudo, that is important +Examples: +```sh +hubble hubble.top +hubble hubble.top foo/bar/top.nova +hubble hubble.top foo/bar.nova verbose=True ``` -Packages will be in the `hubble/pkg/dist/` directory. The only difference -between the packages is the inclusion of `/etc/init.d/hubble` for el6 and -the inclusion of a systemd unit file for el7. There's no guarantee of glibc -compatibility. +In some cases, your organization may want to skip certain audit checks for certain hosts. This is supported via compensating control configuration. -## Building standalone packages (Debian) +You can skip a check globally by adding a control: key to the check itself. This key should be added at the same level as description and trigger pieces of a check. In this case, the check will never run, and will output under the Controlled results key. -```bash -sudo yum install git -y -git clone https://github.com/hubblestack/hubble -cd hubble/pkg -./build_debs.sh # note the lack of sudo, that is important -``` +Nova also supports separate control profiles, for more fine-grained control using topfiles. You can use a separate YAML top-level key called control. Generally, you’ll put this top-level key inside of a separate YAML file and only include it in the top-data for the hosts for which it is relevant. -Package will be in the `hubble/pkg/dist/` directory. There's no guarantee of -glibc compatibility. +For these separate control configs, the audits will always run, whether they are controlled or not. However, controlled audits which fail will be converted from Failure to Controlled in a post-processing operation. 
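The inline form described a few paragraphs above (a `control:` key placed at the same level as the description and trigger pieces of a check) can be sketched as follows. This is a hedged illustration only: the tag, description, trigger placeholder, and reason are hypothetical, not taken from a real Nova profile.

```sh
CIS-1.1.1:                    # hypothetical check tag
  description: Ensure example setting is configured
  trigger: ...                # existing check-specific data stays as-is
  control: This check is handled by a compensating control
```

With the `control:` key present, the check never runs and its result appears under the Controlled key.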
-# Usage +The control config syntax is as follows: +```sh +control: + - CIS-2.1.4: This is the reason we control the check + - some_other_tag: + reason: This is the reason we control the check + - a_third_tag_with_no_reason + ``` +Note that providing a reason for the control is optional. Any of the three formats shown in the yaml list above will work. -A config template has been placed in `/etc/hubble/hubble`. Modify it to your -specifications and needs. +Once you have your compensating control config, just target the yaml to the hosts you want to control using your topfile. In this case, all the audits will still run, but if any of the controlled checks fail, they will be removed from Failure and added to Controlled, and will be treated as a Success for the purposes of compliance percentage. -You can do `hubble -h` to see the available options. +#### Nebula +Nebula is Hubble’s Insight system, which ties into osquery, allowing you to query your infrastructure as if it were a database. This system can be used to take scheduled snapshots of your systems. -The first two commands you should run to make sure things are set up correctly -are `hubble --version` and `hubble test.ping`. If those run without issue -you're probably in business! +Nebula leverages the osquery_nebula execution module which requires the osquery binary to be installed. More information about osquery can be found at https://osquery.io. -## Single invocation +##### Usage -Hubble supports one-off invocations of specific functions: +Nebula queries have been designed to give detailed insight into system activity. The queries can be found in the following file. 
+** hubblestack_nebula/hubblestack_nebula_queries.yaml ** +```sh +fifteen_min: + - query_name: running_procs + query: SELECT p.name AS process, p.pid AS process_id, p.cmdline, p.cwd, p.on_disk, p.resident_size AS mem_used, p.parent, g.groupname, u.username AS user, p.path, h.md5, h.sha1, h.sha256 FROM processes AS p LEFT JOIN users AS u ON p.uid=u.uid LEFT JOIN groups AS g ON p.gid=g.gid LEFT JOIN hash AS h ON p.path=h.path; + - query_name: established_outbound + query: SELECT t.iso_8601 AS _time, pos.family, h.*, ltrim(pos.local_address, ':f') AS src, pos.local_port AS src_port, pos.remote_port AS dest_port, ltrim(remote_address, ':f') AS dest, name, p.path AS file_path, cmdline, pos.protocol, lp.protocol FROM process_open_sockets AS pos JOIN processes AS p ON p.pid=pos.pid LEFT JOIN time AS t LEFT JOIN (SELECT * FROM listening_ports) AS lp ON lp.port=pos.local_port AND lp.protocol=pos.protocol LEFT JOIN hash AS h ON h.path=p.path WHERE NOT remote_address='' AND NOT remote_address='::' AND NOT remote_address='0.0.0.0' AND NOT remote_address='127.0.0.1' AND port is NULL; + - query_name: listening_procs + query: SELECT t.iso_8601 AS _time, h.md5 AS md5, p.pid, name, ltrim(address, ':f') AS address, port, p.path AS file_path, cmdline, root, parent FROM listening_ports AS lp LEFT JOIN processes AS p ON lp.pid=p.pid LEFT JOIN time AS t LEFT JOIN hash AS h ON h.path=p.path WHERE NOT address='127.0.0.1'; + - query_name: suid_binaries + query: SELECT sb.*, t.iso_8601 AS _time FROM suid_bin AS sb JOIN time AS t; +hour: + - query_name: crontab + query: SELECT c.*,t.iso_8601 AS _time FROM crontab AS c JOIN time AS t; +day: + - query_name: rpm_packages + query: SELECT rpm.name, rpm.version, rpm.release, rpm.source AS package_source, rpm.size, rpm.sha1, rpm.arch, t.iso_8601 FROM rpm_packages AS rpm JOIN time AS t; ``` -[root@host1 hubble-v2]# hubble hubble.audit cis.centos-7-level-1-scored-v2-1-0 tags=CIS-3.\* -{'Compliance': '45%', - 'Failure': [{'CIS-3.4.2': 'Ensure 
/etc/hosts.allow is configured'}, - {'CIS-3.4.3': 'Ensure /etc/hosts.deny is configured'}, - {'CIS-3.6.2': 'Ensure default deny firewall policy'}, - {'CIS-3.6.3': 'Ensure loopback traffic is configured'}, - {'CIS-3.6.1_running': 'Ensure iptables is installed'}, - {'CIS-3.2.4': 'Ensure suspicious packets are logged'}, - {'CIS-3.2.2': 'Ensure ICMP redirects are not accepted'}, - {'CIS-3.2.3': 'Ensure secure ICMP redirects are not accepted'}, - {'CIS-3.1.2': 'Ensure packet redirect sending is disabled'}, - {'CIS-3.3.1': 'Ensure IPv6 router advertisements are not accepted'}, - {'CIS-3.3.2': 'Ensure IPv6 redirects are not accepted'}], - 'Success': [{'CIS-3.6.1_installed': 'Ensure iptables is installed'}, - {'CIS-3.4.1': 'Ensure TCP Wrappers is installed'}, - {'CIS-3.4.5': 'Ensure permissions on /etc/hosts.deny are 644'}, - {'CIS-3.4.4': 'Ensure permissions on /etc/hosts.allow are configured'}, - {'CIS-3.2.5': 'Ensure broadcast ICMP requests are ignored'}, - {'CIS-3.2.6': None}, - {'CIS-3.2.1': 'Ensure source routed packets are not accepted'}, - {'CIS-3.1.1': 'Ensure IP forwarding is disabled'}, - {'CIS-3.2.8': 'Ensure TCP SYN Cookies is enabled'}]} + +Nebula query data is best tracked in a central logging or similar system. However, if you would like to run the queries manually you can call the nebula execution module. + +query_group : Group of queries to run +verbose : Defaults to False. If set to True, more information (such as the query which was run) will be included in the result. + +Examples: +```sh +hubble nebula.queries day +hubble nebula.queries hour [verbose=True] +hubble nebula.queries fifteen-min [pillar_key=sec_osqueries] ``` +##### Configuration +For Nebula module, configurations can be done via Nebula topfiles. Nebula topfile functionality is similar to Nova topfiles. -## Scheduler +** top.nebula topfile ** -Hubble supports scheduled jobs. See the docstring for `schedule` for -more information, but it follows the basic structure of salt scheduled jobs. 
-The schedule config should be placed in `/etc/hubble/hubble` along with any -other hubble config: +```sh +nebula: + '*': + - hubblestack_nebula_queries + 'sample_team': + - sample_team_nebula_queries +``` +Nebula topfile, nebula.top by default has hubblestack_nebula_queries.yaml which consists of the queries explained in the above usage section. If specific queries are required by teams, then those queries can be added in another yaml file and included in the nebula.top topfile. Place this new yaml file at the path hubblestack_data/hubblestack_nebula
+Examples for running nebula.top:
```sh
hubble nebula.top hour
hubble nebula.top foo/bar/top.nova
hubble nebula.top fifteen_min verbose=True
```
#### Pulsar

Pulsar is designed to monitor for file system events, acting as a real-time File Integrity Monitoring (FIM) agent. Pulsar is composed of a custom Salt beacon that watches for these events and hooks into the returner system for alerting and reporting. In other words, you can receive real-time alerts for unscheduled file system modifications anywhere you want to receive them. We’ve designed Pulsar to be lightweight and not dependent on a Salt Master. It simply watches for events and directly sends them to one of the Pulsar returner destinations.

##### Usage
Once Pulsar is fully running there isn’t anything you need to do to interact with it. It simply runs quietly in the background and sends you alerts.

##### Configuration
The default Pulsar configuration (found in /etc/hubble/hubble) is meant to act as a template. It works in tandem with the hubblestack_pulsar_config.yaml file. Every environment will have different needs and requirements, and we understand that, so we’ve designed Pulsar to be flexible. 
+
** /etc/hubble/hubble **
```sh
schedule:
-  job1:
-    function: hubble.audit
-    seconds: 60
-    splay: 30
-    args:
-      - cis.centos-7-level-1-scored-v2-1-0
-    kwargs:
-      verbose: True
-      show_profile: True
-    returner: splunk_nova_return
+  pulsar:
+    function: pulsar.process
+    seconds: 1
+    returner: splunk_pulsar_return
+    run_on_start: True
+  pulsar_canary:
+    function: pulsar.canary
+    seconds: 86400
     run_on_start: True
```
-
-Note that you need to have your hubblestack splunk returner configured in order
-to use the above block:
-
-```
-hubblestack:
-  returner:
-    splunk:
-      - token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
-        indexer: splunk-indexer.domain.tld
-        index: hubble
-        sourcetype_nova: hubble_audit
-        sourcetype_nebula: hubble_osquery
-        sourcetype_pulsar: hubble_fim
-```
** hubblestack_pulsar_config.yaml **
```sh
/etc: { recurse: True, auto_add: True }
/bin: { recurse: True, auto_add: True }
/sbin: { recurse: True, auto_add: True }
/boot: { recurse: True, auto_add: True }
/usr/bin: { recurse: True, auto_add: True }
/usr/sbin: { recurse: True, auto_add: True }
/usr/local/bin: { recurse: True, auto_add: True }
/usr/local/sbin: { recurse: True, auto_add: True }
return: slack_pulsar
checksum: sha256
stats: True
batch: False
```
In order to receive Pulsar notifications you’ll need to install the custom returners found in the Quasar repository.

Example of using the Slack Pulsar returner to receive FIM notifications:
```sh
slack_pulsar:
  as_user: true
  username: calculon
  channel: channel
  api_key: xoxb-xxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx
```
Excluding Paths
There may be certain paths that you want to exclude from this real-time FIM tool. This can be done using the exclude: keyword beneath any defined path.

/var:
  recurse: True
  auto_add: True
  exclude:
    - /var/log
    - /var/spool
    - /var/cache
    - /var/lock

For Pulsar module, configurations can be done via Pulsar topfiles when teams need to add specific configurations or exclusions as discussed above. 
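A team-specific config file referenced from a Pulsar topfile follows the same path-per-line format as the default config shown above. The sketch below is hypothetical: the filename echoes the topfile entry used later in this document, and the paths and options are illustrative, not part of the shipped defaults.

```sh
# sample_team_hubblestack_pulsar_config.yaml -- hypothetical team config
/opt/teamapp/conf: { recurse: True, auto_add: True }
/opt/teamapp/bin:
  recurse: True
  auto_add: True
  exclude:
    - /opt/teamapp/bin/cache
```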
+** top.pulsar topfile **

```sh
pulsar:
  '*':
    - hubblestack_pulsar_config
  'sample_team':
    - sample_team_hubblestack_pulsar_config
```
Pulsar topfile by default has hubblestack_pulsar_config, which consists of default configurations. If specific configurations are required by teams, then those can be added in another yaml file and included in the pulsar.top topfile. Place this new yaml file at the path hubblestack/hubblestack_data/hubblestack_pulsar

-When using the scheduler, you can just run `hubble` in the foreground, or use
-the included sysvinit and systemd files to run it as a service in the
-background. You can also start it as a daemon without any scripts by using the
-`-d` argument.
-
-Use `-vvv` to turn on debug logging.
+Examples for running pulsar.top:
```sh
hubble pulsar.top
hubble pulsar.top verbose=True
```
From e6e5bf25e9a3372f5d86248d5e139149fd7d13c2 Mon Sep 17 00:00:00 2001 From: root Date: Wed, 13 Sep 2017 20:13:55 +0000 Subject: [PATCH 02/35] formatting changes --- README.md | 120 ++++++++++++++++++++++++++++-------------------------- 1 file changed, 62 insertions(+), 58 deletions(-) diff --git a/README.md b/README.md index a47bcb971..11bce1c8a 100644 --- a/README.md +++ b/README.md @@ -1,41 +1,40 @@ # HUBBLE -Hubble is a modular, open-source security and compliance auditing framework built on SaltStack. It is an alternate version of Hubblestack which can be run without an existing SaltStack infrastructure. 
Hubble provides on-demand profile-based auditing, real-time security event notifications, alerting and reporting. It also reports the security information to Splunk. This document describes installation, configuration and general use of Hubble.

## Installation

Installing using `setup.py`
```sh
sudo yum install git -y
git clone https://github.com/hubblestack/hubble
cd hubble
sudo python setup.py install
```
Installs a hubble "binary" into `/usr/bin/`.
 
### Nova
Nova is Hubble's auditing system.
#### Usage
There are four primary functions in the Nova module:
- `hubble.sync` : syncs the `hubblestack_nova_profiles/` and `hubblestack_nova/` directories to the host(s).
- `hubble.load` : loads the synced audit modules and their yaml configuration files.
- `hubble.audit` : audits the minion(s) using the YAML profile(s) you provide as comma-separated arguments. hubble.audit takes two optional arguments. 
The first is a comma-separated list of paths. These paths can be files or directories within the `hubblestack_nova_profiles` directory. The second argument allows for toggling Nova configuration, such as verbosity, level of detail, etc. If `hubble.audit` is run without targeting any audit configs or directories, it will instead run `hubble.top` with no arguments. `hubble.audit` will return a list of audits which were successful, and a list of audits which failed. +- `hubble.top` : audits the minion(s) using the top.nova configuration. -Here are some example calls for hubble.audit: +Here are some example calls for `hubble.audit`: ```sh -- Run the cve scanner and the CIS profile: +# Run the cve scanner and the CIS profile: hubble hubble.audit cve.scan-v2,cis.centos-7-level-1-scored-v1 -- Run hubble.top with the default topfile (top.nova) +# Run hubble.top with the default topfile (top.nova) hubble hubble.top -- Run all yaml configs and tags under salt://hubblestack_nova_profiles/foo/ and salt://hubblestack_nova_profiles/bar, but only run audits with tags starting with "CIS" +# Run all yaml configs and tags under salt://hubblestack_nova_profiles/foo/ and salt://hubblestack_nova_profiles/bar, but only run audits with tags starting with "CIS" hubble hubble.audit foo,bar tags='CIS*' ``` -##### Configuration +#### Configuration For Nova module, configurations can be done via Nova topfiles. Nova topfiles look very similar to saltstack topfiles, except the top-level key is always nova, as nova doesn’t have environments. -** hubblestack/hubblestack_data/top.nova ** + +**hubblestack/hubblestack_data/top.nova** ```sh nova: '*': @@ -49,7 +48,7 @@ nova: - network.ssh - cis.debian-7-level-1-scored: 'CIS*' ``` -Additionally, all nova topfile matches are compound matches, so you never need to define a match type like you do in saltstack topfiles. Each list item is a string representing the dot-separated location of a yaml file which will be run with hubble.audit. 
You can also specify a tag glob to use as a filter for just that yaml file, using a colon after the yaml file (turning it into a dictionary). See the last two lines in the yaml above for examples. +Additionally, all nova topfile matches are compound matches, so you never need to define a match type like you do in saltstack topfiles. Each list item is a string representing the dot-separated location of a yaml file which will be run with `hubble.audit`. You can also specify a tag glob to use as a filter for just that yaml file, using a colon after the yaml file (turning it into a dictionary). See the last two lines in the yaml above for examples. Examples: ```sh @@ -60,7 +59,7 @@ hubble hubble.top foo/bar.nova verbose=True In some cases, your organization may want to skip certain audit checks for certain hosts. This is supported via compensating control configuration. -You can skip a check globally by adding a control: key to the check itself. This key should be added at the same level as description and trigger pieces of a check. In this case, the check will never run, and will output under the Controlled results key. +You can skip a check globally by adding a `control: ` key to the check itself. This key should be added at the same level as description and trigger pieces of a check. In this case, the check will never run, and will output under the Controlled results key. Nova also supports separate control profiles, for more fine-grained control using topfiles. You can use a separate YAML top-level key called control. Generally, you’ll put this top-level key inside of a separate YAML file and only include it in the top-data for the hosts for which it is relevant. @@ -78,16 +77,16 @@ Note that providing a reason for the control is optional. Any of the three forma Once you have your compensating control config, just target the yaml to the hosts you want to control using your topfile. 
In this case, all the audits will still run, but if any of the controlled checks fail, they will be removed from Failure and added to Controlled, and will be treated as a Success for the purposes of compliance percentage. -#### Nebula +### Nebula Nebula is Hubble’s Insight system, which ties into osquery, allowing you to query your infrastructure as if it were a database. This system can be used to take scheduled snapshots of your systems. -Nebula leverages the osquery_nebula execution module which requires the osquery binary to be installed. More information about osquery can be found at https://osquery.io. +Nebula leverages the osquery_nebula execution module which requires the osquery binary to be installed. More information about osquery can be found at `https://osquery.io`. -##### Usage +#### Usage Nebula queries have been designed to give detailed insight into system activity. The queries can be found in the following file. -** hubblestack_nebula/hubblestack_nebula_queries.yaml ** +**hubblestack_nebula/hubblestack_nebula_queries.yaml** ```sh fifteen_min: - query_name: running_procs @@ -107,20 +106,20 @@ day: ``` Nebula query data is best tracked in a central logging or similar system. However, if you would like to run the queries manually you can call the nebula execution module. - +```sh query_group : Group of queries to run verbose : Defaults to False. If set to True, more information (such as the query which was run) will be included in the result. - +``` Examples: ```sh hubble nebula.queries day hubble nebula.queries hour [verbose=True] -hubble nebula.queries fifteen-min [pillar_key=sec_osqueries] +hubble nebula.queries fifteen-min ``` -##### Configuration +#### Configuration For Nebula module, configurations can be done via Nebula topfiles. Nebula topfile functionality is similar to Nova topfiles. 
-** top.nebula topfile ** +**top.nebula topfile** ```sh nebula: @@ -129,38 +128,25 @@ nebula: 'sample_team': - sample_team_nebula_queries ``` -Nebula topfile, nebula.top by default has hubblestack_nebula_queries.yaml which consists queries as explained in the above usage section and if specific queries are required by teams then those queries can be added in a another yaml file and include it in nebula.top topfile. Place this new yaml file at the path hubblestack_data/hubblestack_nebula +Nebula topfile, `nebula.top` by default has `hubblestack_nebula_queries.yaml` which consists queries as explained in the above usage section and if specific queries are required by teams then those queries can be added in a another yaml file and include it in `nebula.top` topfile. Place this new yaml file at the path `hubblestack/hubblestack_data/hubblestack_nebula` -Examples for running nebula.top: +Examples for running `nebula.top`: ```sh hubble nebula.top hour -hubble nebula.top foo/bar/top.nova +hubble nebula.top foo/bar/top.nova hour hubble nebula.top fifteen_min verbose=True ``` -#### Pulsar +### Pulsar -Pulsar is designed to monitor for file system events, acting as a real-time File Integrity Monitoring (FIM) agent. Pulsar is composed of a custom Salt beacon that watches for these events and hooks into the returner system for alerting and reporting. In other words, you can recieve real-time alerts for unscheduled file system modifications anywhere you want to recieve them. We’ve designed Pulsar to be lightweight and not dependent on a Salt Master. It simply watches for events and directly sends them to one of the Pulsar returner destinations. +Pulsar is designed to monitor for file system events, acting as a real-time File Integrity Monitoring (FIM) agent. Pulsar is composed of a custom Salt beacon that watches for these events and hooks into the returner system for alerting and reporting. 
In other words, you can receive real-time alerts for unscheduled file system modifications anywhere you want to receive them. We’ve designed Pulsar to be lightweight so that it does not affect system performance. It simply watches for events and directly sends them to one of the Pulsar returner destinations.

#### Usage
Once Pulsar is configured there isn’t anything you need to do to interact with it. It simply runs quietly in the background and sends you alerts.

#### Configuration
The Pulsar configuration can be found in the `hubblestack_pulsar_config.yaml` file. Every environment will have different needs and requirements, and we understand that, so we’ve designed Pulsar to be flexible. 
-** /etc/hubble/hubble ** -```sh -schedule: - pulsar: - function: pulsar.process - seconds: 1 - returner: splunk_pulsar_return - run_on_start: True - pulsar_canary: - function: pulsar.canary - seconds: 86400 - run_on_start: True -``` -** hubblestack_pulsar_config.yaml ** +**hubblestack_pulsar_config.yaml** ```sh /etc: { recurse: True, auto_add: True } /bin: { recurse: True, auto_add: True } @@ -175,6 +161,23 @@ checksum: sha256 stats: True batch: False ``` + +Pulsar runs on schdule which can be found at `/etc/hubble/hubble` + +**/etc/hubble/hubble** +```sh +schedule: + pulsar: + function: pulsar.process + seconds: 1 + returner: splunk_pulsar_return + run_on_start: True + pulsar_canary: + function: pulsar.canary + seconds: 86400 + run_on_start: True +``` + In order to receive Pulsar notifications you’ll need to install the custom returners found in the Quasar repository. Example of using the Slack Pulsar returner to recieve FIM notifications: @@ -185,8 +188,8 @@ slack_pulsar: channel: channel api_key: xoxb-xxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx ``` -Excluding Paths -There may be certain paths that you want to exclude from this real-time FIM tool. This can be done using the exclude: keyword beneath any defined path. +##### Excluding Paths +There may be certain paths that you want to exclude from this real-time FIM tool. This can be done using the exclude: keyword beneath any defined path in `hubblestack_pulsar_config.yaml` file. /var: recurse: True @@ -199,17 +202,18 @@ There may be certain paths that you want to exclude from this real-time FIM tool For Pulsar module, configurations can be done via Pulsar topfiles when teams needs to add specific configurations or exclusions as discussed above. 
-** top.pulsar topfile ** +##### Pulsar topfile `top.pulsar` + +For Pulsar module, configurations can be done via Pulsar topfiles ```sh -pulsar: pulsar: '*': - hubblestack_pulsar_config 'sample_team': - sample_team_hubblestack_pulsar_config ``` -Pulsar topfile by default has hubblestack_pulsar_config which consists of default configurations and if specific configurations are required by teams then those can be added in another yaml file and include it in pulsar.top topfile. Place this new yaml file at the path /hubblestack_pulsar +Pulsar topfile by default has 'hubblestack_pulsar_config' which consists of default configurations and if specific configurations are required by teams then those can be added in another yaml file and include it in 'pulsar.top' topfile. Place this new yaml file at the path `hubblestack/hubblestack_data/hubblestack_pulsar` Examples for running pulsar.top: ```sh From a4c75d4c65309eae77208afa81347729c002d4a9 Mon Sep 17 00:00:00 2001 From: Andres Martinson Date: Fri, 15 Sep 2017 12:49:56 -0600 Subject: [PATCH 03/35] update osquery build to 2.7.0 for debian 7 (wheezy) --- pkg/debian7/Dockerfile | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/pkg/debian7/Dockerfile b/pkg/debian7/Dockerfile index d0421fa7f..33699ea78 100644 --- a/pkg/debian7/Dockerfile +++ b/pkg/debian7/Dockerfile @@ -17,7 +17,7 @@ RUN mkdir -p /etc/osquery /var/log/osquery /etc/hubble/hubble.d /opt/hubble /opt #osquery should be built first since requirements for other packages can interfere with osquery dependencies #to build, osquery scripts want sudo and a user to sudo with. 
#to pin to a different version change the following envirnment variable -ENV OSQUERY_SRC_VERSION=2.6.0 +ENV OSQUERY_SRC_VERSION=2.7.0 ENV OSQUERY_BUILD_USER=osquerybuilder ENV OSQUERY_GIT_URL=https://github.com/facebook/osquery.git RUN apt-get -y install git make python ruby sudo locales @@ -37,7 +37,7 @@ RUN cd /home/"$OSQUERY_BUILD_USER" \ #these homebrew hashes need to be current. hashes in osquery git repo are often out of date for the tags we check out and try to build. #this is a problem and they are aware of it. let the magic hashes commence: && sed -i 's,^\(HOMEBREW_CORE=\).*,\1'941ca36839ea354031846d73ad538e1e44e673f4',' tools/provision.sh \ - && sed -i 's,^\(LINUXBREW_CORE=\).*,\1'abc5c5782c5850f2deff1f3d463945f90f2feaac',' tools/provision.sh \ + && sed -i 's,^\(LINUXBREW_CORE=\).*,\1'f54281a496bb7d3dd2f46b2f3067193d05f5013b',' tools/provision.sh \ && sed -i 's,^\(HOMEBREW_BREW=\).*,\1'ac2cbd2137006ebfe84d8584ccdcb5d78c1130d9',' tools/provision.sh \ && sed -i 's,^\(LINUXBREW_BREW=\).*,\1'20bcce2c176469cec271b46d523eef1510217436',' tools/provision.sh \ && make sysprep \ @@ -85,7 +85,8 @@ RUN mkdir -p "$CMAKE_TEMP" \ #must be preceded by cmake install #must precede pyinstaller requirements ENV LIBGIT2_SRC_URL=https://github.com/libgit2/libgit2/archive/v0.26.0.tar.gz -ENV LIBGIT2_SRC_SHA256=4ac70a2bbdf7a304ad2a9fb2c53ad3c8694be0dbec4f1fce0f3cd0cda14fb3b9 +#it turns out github provided release files can change. so even though the code hopefully hasn't changed, the hash has. 
+ENV LIBGIT2_SRC_SHA256=6a62393e0ceb37d02fe0d5707713f504e7acac9006ef33da1e88960bd78b6eac ENV LIBGIT2_SRC_VERSION=0.26.0 ENV LIBGIT2TEMP=/tmp/libgit2temp RUN mkdir -p "$LIBGIT2TEMP" \ @@ -114,7 +115,7 @@ RUN apt-get install -y ruby ruby-dev rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.3 +ENV HUBBLE_CHECKOUT=develop ENV HUBBLE_VERSION=2.2.3 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src From 0b20975f14ff7773b427001cab423b14a8226197 Mon Sep 17 00:00:00 2001 From: Andres Martinson Date: Fri, 15 Sep 2017 13:29:21 -0600 Subject: [PATCH 04/35] update osquery build to 2.7.0 for coreos --- pkg/coreos/Dockerfile | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/pkg/coreos/Dockerfile b/pkg/coreos/Dockerfile index 0df021523..7b7917d79 100644 --- a/pkg/coreos/Dockerfile +++ b/pkg/coreos/Dockerfile @@ -17,7 +17,7 @@ RUN mkdir -p /etc/osquery /var/log/osquery /etc/hubble/hubble.d /opt/hubble /opt #osquery should be built first since requirements for other packages can interfere with osquery dependencies #to build, osquery scripts want sudo and a user to sudo with. #to pin to a different version change the following envirnment variable -ENV OSQUERY_SRC_VERSION=2.6.0 +ENV OSQUERY_SRC_VERSION=2.7.0 ENV OSQUERY_BUILD_USER=osquerybuilder ENV OSQUERY_GIT_URL=https://github.com/facebook/osquery.git RUN apt-get -y install git make python ruby sudo @@ -34,7 +34,7 @@ RUN cd /home/"$OSQUERY_BUILD_USER" \ #these homebrew hashes need to be current. hashes in osquery git repo are often out of date for the tags we check out and try to build. #this is a problem and they are aware of it. 
let the magic hashes commence: && sed -i 's,^\(HOMEBREW_CORE=\).*,\1'941ca36839ea354031846d73ad538e1e44e673f4',' tools/provision.sh \ - && sed -i 's,^\(LINUXBREW_CORE=\).*,\1'abc5c5782c5850f2deff1f3d463945f90f2feaac',' tools/provision.sh \ + && sed -i 's,^\(LINUXBREW_CORE=\).*,\1'f54281a496bb7d3dd2f46b2f3067193d05f5013b',' tools/provision.sh \ && sed -i 's,^\(HOMEBREW_BREW=\).*,\1'ac2cbd2137006ebfe84d8584ccdcb5d78c1130d9',' tools/provision.sh \ && sed -i 's,^\(LINUXBREW_BREW=\).*,\1'20bcce2c176469cec271b46d523eef1510217436',' tools/provision.sh \ && make sysprep \ @@ -64,7 +64,8 @@ RUN apt-get -y install \ #libgit2 install start #must precede pyinstaller requirements ENV LIBGIT2_SRC_URL=https://github.com/libgit2/libgit2/archive/v0.26.0.tar.gz -ENV LIBGIT2_SRC_SHA256=4ac70a2bbdf7a304ad2a9fb2c53ad3c8694be0dbec4f1fce0f3cd0cda14fb3b9 +#it turns out github provided release files can change. so even though the code hopefully hasn't changed, the hash has. +ENV LIBGIT2_SRC_SHA256=6a62393e0ceb37d02fe0d5707713f504e7acac9006ef33da1e88960bd78b6eac ENV LIBGIT2_SRC_VERSION=0.26.0 ENV LIBGIT2TEMP=/tmp/libgit2temp RUN mkdir -p "$LIBGIT2TEMP" \ @@ -87,7 +88,7 @@ RUN pip install --upgrade pip \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.3 +ENV HUBBLE_CHECKOUT=develop ENV HUBBLE_VERSION=2.2.3 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src From 083f961abbec66fe3de51a6cbe04bdf23053401b Mon Sep 17 00:00:00 2001 From: yassingh Date: Mon, 18 Sep 2017 10:50:53 +0530 Subject: [PATCH 05/35] Adding mount and systemctl modules for hubble (#10) * Adding mount and systemctl modules for hubble --- hubblestack/files/hubblestack_nova/mount.py | 225 ++++++++++++++++++ .../files/hubblestack_nova/systemctl.py | 185 ++++++++++++++ 2 files changed, 410 insertions(+) create mode 100644 
hubblestack/files/hubblestack_nova/mount.py create mode 100644 hubblestack/files/hubblestack_nova/systemctl.py diff --git a/hubblestack/files/hubblestack_nova/mount.py b/hubblestack/files/hubblestack_nova/mount.py new file mode 100644 index 000000000..eb33489f5 --- /dev/null +++ b/hubblestack/files/hubblestack_nova/mount.py @@ -0,0 +1,225 @@ +# -*- encoding: utf-8 -*- +''' +HubbleStack Nova plugin for verifying attributes associated with a mounted partition. + +Supports both blacklisting and whitelisting patterns. Blacklisted patterns must +not be found in the specified file. Whitelisted patterns must be found in the +specified file. + +:maintainer: HubbleStack / basepi +:maturity: 2017.8.29 +:platform: All +:requires: SaltStack + +This audit module requires yaml data to execute. It will search the local +directory for any .yaml files, and if it finds a top-level 'mount' key, it will +use that data. + +Sample YAML data, with inline comments: + + +mount: + whitelist: # or blacklist + ensure_nodev_option_on_/tmp: # unique ID + data: + CentOS Linux-6: # osfinger grain + - '/tmp': # path of partition + tag: 'CIS-1.1.1' # audit tag + attribute: nodev # attribute which must exist for the mounted partition + check_type: soft # if 'hard', the check fails if the path doesn't exist or + # if it is not a mounted partition. 
If 'soft', the test passes
+                          #    for such cases (default: hard)
+'''
+from __future__ import absolute_import
+import logging
+
+import fnmatch
+import yaml
+import os
+import copy
+import salt.utils
+import re
+
+from distutils.version import LooseVersion
+
+log = logging.getLogger(__name__)
+
+
+def __virtual__():
+    if salt.utils.is_windows():
+        return False, 'This audit module only runs on linux'
+    return True
+
+
+def audit(data_list, tags, debug=False, **kwargs):
+    '''
+    Run the mount audits contained in the YAML files processed by __virtual__
+    '''
+
+    __data__ = {}
+
+    for profile, data in data_list:
+        _merge_yaml(__data__, data, profile)
+
+    __tags__ = _get_tags(__data__)
+
+    if debug:
+        log.debug('mount audit __data__:')
+        log.debug(__data__)
+        log.debug('mount audit __tags__:')
+        log.debug(__tags__)
+
+    ret = {'Success': [], 'Failure': [], 'Controlled': []}
+    for tag in __tags__:
+        if fnmatch.fnmatch(tag, tags):
+            for tag_data in __tags__[tag]:
+                if 'control' in tag_data:
+                    ret['Controlled'].append(tag_data)
+                    continue
+
+                name = tag_data.get('name')
+                audittype = tag_data.get('type')
+
+                if 'attribute' not in tag_data:
+                    log.error('No attribute found for mount audit {0}, file {1}'
+                              .format(tag, name))
+                    tag_data = copy.deepcopy(tag_data)
+                    tag_data['error'] = 'No attribute found'
+                    ret['Failure'].append(tag_data)
+                    continue
+
+                attribute = tag_data.get('attribute')
+
+                check_type = 'hard'
+                if 'check_type' in tag_data:
+                    check_type = tag_data.get('check_type')
+
+                if check_type not in ['hard', 'soft']:
+                    log.error('Unrecognized option: ' + check_type)
+                    tag_data = copy.deepcopy(tag_data)
+                    tag_data['error'] = 'check_type can only be hard or soft'
+                    ret['Failure'].append(tag_data)
+                    continue
+
+                found = _check_mount_attribute(name, attribute, check_type)
+
+                if audittype == 'blacklist':
+                    if found:
+                        ret['Failure'].append(tag_data)
+                    else:
+                        ret['Success'].append(tag_data)
+
+                elif audittype == 'whitelist':
+                    if found:
+
ret['Success'].append(tag_data) + else: + ret['Failure'].append(tag_data) + + return ret + + +def _merge_yaml(ret, data, profile=None): + ''' + Merge two yaml dicts together at the mount:blacklist and mount:whitelist level + ''' + if 'mount' not in ret: + ret['mount'] = {} + for topkey in ('blacklist', 'whitelist'): + if topkey in data.get('mount', {}): + if topkey not in ret['mount']: + ret['mount'][topkey] = [] + for key, val in data['mount'][topkey].iteritems(): + if profile and isinstance(val, dict): + val['nova_profile'] = profile + ret['mount'][topkey].append({key: val}) + return ret + + +def _get_tags(data): + ''' + Retrieve all the tags for this distro from the yaml + ''' + + ret = {} + distro = __grains__.get('osfinger') + + for toplist, toplevel in data.get('mount', {}).iteritems(): + # mount:blacklist + for audit_dict in toplevel: + # mount:blacklist:0 + for audit_id, audit_data in audit_dict.iteritems(): + # mount:blacklist:0:telnet + tags_dict = audit_data.get('data', {}) + # mount:blacklist:0:telnet:data + tags = None + for osfinger in tags_dict: + if osfinger == '*': + continue + osfinger_list = [finger.strip() for finger in osfinger.split(',')] + for osfinger_glob in osfinger_list: + if fnmatch.fnmatch(distro, osfinger_glob): + tags = tags_dict.get(osfinger) + break + if tags is not None: + break + # If we didn't find a match, check for a '*' + if tags is None: + tags = tags_dict.get('*', []) + # mount:blacklist:0:telnet:data:Debian-8 + if isinstance(tags, dict): + # malformed yaml, convert to list of dicts + tmp = [] + for name, tag in tags.iteritems(): + tmp.append({name: tag}) + tags = tmp + for item in tags: + for name, tag in item.iteritems(): + tag_data = {} + # Whitelist could have a dictionary, not a string + if isinstance(tag, dict): + tag_data = copy.deepcopy(tag) + tag = tag_data.pop('tag') + if tag not in ret: + ret[tag] = [] + formatted_data = {'name': name, + 'tag': tag, + 'module': 'mount', + 'type': toplist} + 
formatted_data.update(tag_data) + formatted_data.update(audit_data) + formatted_data.pop('data') + ret[tag].append(formatted_data) + return ret + +def _check_mount_attribute(path,attribute, check_type): + ''' + This function checks if the partition at a given path is mounted with a particular attribute or not. + If 'check_type' is 'hard', the function returns False if he specified path does not exist, or if it + is not a mounted partition. If 'check_type' is 'soft', the functions returns True in such cases. + ''' + + if not os.path.exists(path): + if check_type == 'hard': + return False + else: + return True + + + mount_object = __salt__['mount.active']() + + if path in mount_object: + attributes = mount_object.get(path) + opts = attributes.get('opts') + if attribute in opts: + return True + else: + return False + + else: + if check_type == 'hard': + return False + else: + return True + diff --git a/hubblestack/files/hubblestack_nova/systemctl.py b/hubblestack/files/hubblestack_nova/systemctl.py new file mode 100644 index 000000000..db67250d4 --- /dev/null +++ b/hubblestack/files/hubblestack_nova/systemctl.py @@ -0,0 +1,185 @@ +# -*- encoding: utf-8 -*- +''' +HubbleStack Nova plugin for using systemctl to verify status of a given service. + +Supports both blacklisting and whitelisting patterns. Blacklisted services must +not be running. Whitelisted services must be running. + +:maintainer: HubbleStack / basepi +:maturity: 2017.8.29 +:platform: All +:requires: SaltStack + +This audit module requires yaml data to execute. It will search the local +directory for any .yaml files, and if it finds a top-level 'systemctl' key, it will +use that data. + +Sample YAML data, with inline comments: + + +systemctl: + whitelist: # or blacklist + dhcpd-disabled: # unique ID + data: + CentOS Linux-7: # osfinger grain + tag: 'CIS-1.1.1' # audit tag + service: dhcpd # mandatory field. 
+ description: Ensure DHCP Server is not enabled + alert: email + trigger: state + + +''' +from __future__ import absolute_import +import logging + +import fnmatch +import yaml +import os +import copy +import salt.utils +import re + +from distutils.version import LooseVersion + +log = logging.getLogger(__name__) + + +def __virtual__(): + if salt.utils.is_windows(): + return False, 'This audit module only runs on linux' + return True + + +def audit(data_list, tags, debug=False, **kwargs): + ''' + Run the systemctl audits contained in the YAML files processed by __virtual__ + ''' + __data__ = {} + for profile, data in data_list: + _merge_yaml(__data__, data, profile) + __tags__ = _get_tags(__data__) + + if debug: + log.debug('systemctl audit __data__:') + log.debug(__data__) + log.debug('systemctl audit __tags__:') + log.debug(__tags__) + + ret = {'Success': [], 'Failure': [], 'Controlled': []} + for tag in __tags__: + if fnmatch.fnmatch(tag, tags): + for tag_data in __tags__[tag]: + if 'control' in tag_data: + ret['Controlled'].append(tag_data) + continue + name = tag_data['name'] + audittype = tag_data['type'] + disabled_states = ["disabled", "not_found", "indirect"] + + status_code, status = _systemctl(name) + # Blacklisted service (must not be running or not found) + if audittype == 'blacklist': + if status_code == "1" or status in disabled_states: + ret['Success'].append(tag_data) + else: + tag_data["failure_reason"] = "Service Status: " + status + ", return code: " + status_code + ret['Failure'].append(tag_data) + # Whitelisted pattern (must be found and running) + elif audittype == 'whitelist': + if status_code == "0": + ret['Success'].append(tag_data) + else: + tag_data["failure_reason"] = "Service Status: " + status + ", return code: " + status_code + ret['Failure'].append(tag_data) + + return ret + + +def _merge_yaml(ret, data, profile=None): + ''' + Merge two yaml dicts together at the systemctl:blacklist and systemctl:whitelist level + ''' + if 
'systemctl' not in ret: + ret['systemctl'] = {} + for topkey in ('blacklist', 'whitelist'): + if topkey in data.get('systemctl', {}): + if topkey not in ret['systemctl']: + ret['systemctl'][topkey] = [] + for key, val in data['systemctl'][topkey].iteritems(): + if profile and isinstance(val, dict): + val['nova_profile'] = profile + ret['systemctl'][topkey].append({key: val}) + + return ret + + +def _get_tags(data): + ''' + Retrieve all the tags for this distro from the yaml + ''' + ret = {} + distro = __grains__.get('osfinger') + for toplist, toplevel in data.get('systemctl', {}).iteritems(): + for audit_dict in toplevel: + for audit_id, audit_data in audit_dict.iteritems(): + tags_dict = audit_data.get('data', {}) + tags = None + for osfinger in tags_dict: + if osfinger == '*': + continue + osfinger_list = [finger.strip() for finger in osfinger.split(',')] + for osfinger_glob in osfinger_list: + if fnmatch.fnmatch(distro, osfinger_glob): + tags = tags_dict.get(osfinger) + break + if tags is not None: + break + # If we didn't find a match, check for a '*' + if tags is None: + tags = tags_dict.get('*', []) + # systemctl:blacklist:0:telnet:data:Debian-8 + if isinstance(tags, dict): + # malformed yaml, convert to list of dicts + tmp = [] + for name, tag in tags.iteritems(): + tmp.append({name: tag}) + tags = tmp + for item in tags: + for name, tag in item.iteritems(): + tag_data = {} + # Whitelist could have a dictionary, not a string + if isinstance(tag, dict): + tag_data = copy.deepcopy(tag) + tag = tag_data.pop('tag') + if tag not in ret: + ret[tag] = [] + formatted_data = {'name': name, + 'tag': tag, + 'module': 'systemctl', + 'type': toplist} + formatted_data.update(tag_data) + formatted_data.update(audit_data) + formatted_data.pop('data') + ret[tag].append(formatted_data) + + return ret + +def _execute_shell_command(cmd): + ''' + This function will execute passed command in /bin/shell + ''' + return __salt__['cmd.run'](cmd, python_shell=True, shell='/bin/bash', 
ignore_retcode=True) + + +def _systemctl(service_name): + ''' + Return service status. + Return object will be like ('status_code', 'status') if {service_is_present} else ('status_code',) + ''' + output = _execute_shell_command('systemctl is-enabled ' + service_name + ' 2>/dev/null; echo $?').strip() + output = output.split('\n') if output != "" else [] + if output == []: + return False + return (output[1], output[0]) if len(output) == 2 else (output[0], "not_found") + From ddd1328ed47fc81d3763114b08834d9076228b93 Mon Sep 17 00:00:00 2001 From: root Date: Mon, 18 Sep 2017 17:14:04 +0000 Subject: [PATCH 06/35] changes added for Installation section --- README.md | 39 ++++++++++++++++++++++++++++++++++++--- 1 file changed, 36 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 11bce1c8a..1c5e4da47 100644 --- a/README.md +++ b/README.md @@ -2,9 +2,8 @@ Hubble is a modular, open-source, security & compliance auditing framework which is built on SaltStack. It is alternate version of Hubblestack which can be run without an existing SaltStack infrastructure. Hubble provides on-demand profile-based auditing, real-time security event notifications, alerting and reporting. It also reports the security information to Splunk. This document describes installation, configuration and general use of Hubble. -## Installation - -Installing using `setup.py` +## Packaging / Installing +### Installing using setup.py ```sh sudo yum install git -y git clone https://github.com/hubblestack/hubble @@ -13,6 +12,40 @@ sudo python setup.py install ``` Installs a hubble "binary" into `/usr/bin/`. +A config template has been placed in `/etc/hubble/hubble`. Modify it to your specifications and needs. You can do `hubble -h` to see the available options. + +The first two commands you should run to make sure things are set up correctly are `hubble --version` and `hubble test.ping`. 
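To give a sense of what that template contains, here is a hypothetical minimal sketch of `/etc/hubble/hubble`, assuming the `hubblestack:returner:splunk` layout documented in `hubblestack/splunklogging.py` later in this series; every value is a placeholder, not a default:

```yaml
# Hypothetical sketch only -- keys mirror the Splunk returner config
# documented in hubblestack/splunklogging.py; all values are placeholders.
hubblestack:
  returner:
    splunk:
      - token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
        indexer: splunk-indexer.domain.tld
        index: hubble
        sourcetype_log: hubble_log
```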
+
+### Building standalone packages (CentOS)
+```sh
+sudo yum install git -y
+git clone https://github.com/hubblestack/hubble
+cd hubble/pkg
+./build_rpms.sh # note the lack of sudo, that is important
+```
+Packages will be in the `hubble/pkg/dist/` directory. The only difference between the packages is the inclusion of `/etc/init.d/hubble` for el6 and the inclusion of a systemd unit file for el7. There's no guarantee of glibc compatibility.
+
+### Building standalone packages (Debian)
+```sh
+sudo yum install git -y
+git clone https://github.com/hubblestack/hubble
+cd hubble/pkg
+./build_debs.sh # note the lack of sudo, that is important
+```
+Package will be in the `hubble/pkg/dist/` directory. There's no guarantee of glibc compatibility.
+
+### Building Hubble packages through Dockerfile
+Dockerfile aims to make building Hubble v2 packages easier. Dockerfiles can be found at `hubblestack/hubble/pkg`.
+To build an image:
+```sh
+1. copy pkg/scripts/pyinstaller-requirements.txt to directory with this Dockerfile
+2. docker build -t
+```
+To run the container:
+```sh
+docker run -it --rm
+```
+
 ### Nova
 Nova is Hubble's auditing system.
#### Usage

From 4de3a32d372f469201d4a718ed779b31105bdcb5 Mon Sep 17 00:00:00 2001
From: root
Date: Mon, 18 Sep 2017 17:27:10 +0000
Subject: [PATCH 07/35] added TOC

---
 README.md | 43 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 32 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index 1c5e4da47..76e1067a0 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,24 @@
+Table of Contents
+=================
+
+  * [HUBBLE](#hubble)
+    * [Packaging / Installing](#packaging--installing)
+      * [Installing using setup.py](#installing-using-setuppy)
+      * [Building standalone packages (CentOS)](#building-standalone-packages-centos)
+      * [Building standalone packages (Debian)](#building-standalone-packages-debian)
+      * [Building Hubble packages through Dockerfile](#building-hubble-packages-through-dockerfile)
+    * [Nova](#nova)
+      * [Usage](#usage)
+      * [Configuration](#configuration)
+    * [Nebula](#nebula)
+      * [Usage](#usage-1)
+      * [Configuration](#configuration-1)
+    * [Pulsar](#pulsar)
+      * [Usage](#usage-2)
+      * [Configuration](#configuration-2)
+        * [Excluding Paths](#excluding-paths)
+        * [Pulsar topfile top.pulsar](#pulsar-topfile-toppulsar)
+
 # HUBBLE
 
 Hubble is a modular, open-source, security & compliance auditing framework which is built on SaltStack. It is an alternate version of Hubblestack which can be run without an existing SaltStack infrastructure. Hubble provides on-demand profile-based auditing, real-time security event notifications, alerting and reporting. It also reports the security information to Splunk. This document describes installation, configuration and general use of Hubble.
@@ -46,9 +67,9 @@ To run the container
 docker run -it --rm
 ```
-### Nova
+## Nova
 Nova is Hubble's auditing system.
-#### Usage
+### Usage
 There are four primary functions for the Nova module:
 - `hubble.sync` : syncs the `hubblestack_nova_profiles/` and `hubblestack_nova/` directories to the host(s).
- `hubble.load` : loads the synced audit hosts and their yaml configuration files. @@ -64,7 +85,7 @@ hubble hubble.top # Run all yaml configs and tags under salt://hubblestack_nova_profiles/foo/ and salt://hubblestack_nova_profiles/bar, but only run audits with tags starting with "CIS" hubble hubble.audit foo,bar tags='CIS*' ``` -#### Configuration +### Configuration For Nova module, configurations can be done via Nova topfiles. Nova topfiles look very similar to saltstack topfiles, except the top-level key is always nova, as nova doesn’t have environments. **hubblestack/hubblestack_data/top.nova** @@ -110,12 +131,12 @@ Note that providing a reason for the control is optional. Any of the three forma Once you have your compensating control config, just target the yaml to the hosts you want to control using your topfile. In this case, all the audits will still run, but if any of the controlled checks fail, they will be removed from Failure and added to Controlled, and will be treated as a Success for the purposes of compliance percentage. -### Nebula +## Nebula Nebula is Hubble’s Insight system, which ties into osquery, allowing you to query your infrastructure as if it were a database. This system can be used to take scheduled snapshots of your systems. Nebula leverages the osquery_nebula execution module which requires the osquery binary to be installed. More information about osquery can be found at `https://osquery.io`. -#### Usage +### Usage Nebula queries have been designed to give detailed insight into system activity. The queries can be found in the following file. @@ -149,7 +170,7 @@ hubble nebula.queries day hubble nebula.queries hour [verbose=True] hubble nebula.queries fifteen-min ``` -#### Configuration +### Configuration For Nebula module, configurations can be done via Nebula topfiles. Nebula topfile functionality is similar to Nova topfiles. 
**top.nebula topfile**
@@ -169,14 +190,14 @@ hubble nebula.top hour
 hubble nebula.top foo/bar/top.nova hour
 hubble nebula.top fifteen_min verbose=True
 ```
-### Pulsar
+## Pulsar
 Pulsar is designed to monitor for file system events, acting as a real-time File Integrity Monitoring (FIM) agent. Pulsar is composed of a custom Salt beacon that watches for these events and hooks into the returner system for alerting and reporting. In other words, you can receive real-time alerts for unscheduled file system modifications anywhere you want to receive them. We’ve designed Pulsar to be lightweight, and it does not affect system performance. It simply watches for events and directly sends them to one of the Pulsar returner destinations.
-#### Usage
+### Usage
 Once Pulsar is configured there isn’t anything you need to do to interact with it. It simply runs quietly in the background and sends you alerts.
-#### Configuration
+### Configuration
 The Pulsar configuration can be found in the `hubblestack_pulsar_config.yaml` file. Every environment will have different needs and requirements, and we understand that, so we’ve designed Pulsar to be flexible.
 
**hubblestack_pulsar_config.yaml**
@@ -221,7 +242,7 @@ slack_pulsar:
   channel: channel
   api_key: xoxb-xxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx
 ```
-##### Excluding Paths
+#### Excluding Paths
 There may be certain paths that you want to exclude from this real-time FIM tool. This can be done using the exclude: keyword beneath any defined path in the `hubblestack_pulsar_config.yaml` file.
 /var:
@@ -235,7 +256,7 @@ There may be certain paths that you want to exclude from this real-time FIM tool
 For Pulsar module, configurations can be done via Pulsar topfiles when teams need to add specific configurations or exclusions as discussed above.
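As a concrete illustration of the `exclude:` keyword described above, a watched path in `hubblestack_pulsar_config.yaml` might carry an exclusion list like the following sketch; the paths here are illustrative, not defaults:

```yaml
# Illustrative sketch only: exclude goes beneath a defined (watched) path,
# listing sub-paths that should not generate FIM events.
/var:
  exclude:
    - /var/log
    - /var/www
```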
-##### Pulsar topfile `top.pulsar`
+#### Pulsar topfile `top.pulsar`
 
 For Pulsar module, configurations can be done via Pulsar topfiles

From 86bb36e1f21676288cca306c303ae97466c225bd Mon Sep 17 00:00:00 2001
From: rashmip29
Date: Mon, 18 Sep 2017 14:42:46 -0600
Subject: [PATCH 08/35] Update README.md

---
 README.md | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/README.md b/README.md
index a2db57bcb..533ae4b8f 100644
--- a/README.md
+++ b/README.md
@@ -37,24 +37,6 @@ A config template has been placed in `/etc/hubble/hubble`. Modify it to your spe
 The first two commands you should run to make sure things are set up correctly are `hubble --version` and `hubble test.ping`.
 
-### Building standalone packages (CentOS)
-```sh
-sudo yum install git -y
-git clone https://github.com/hubblestack/hubble
-cd hubble/pkg
-./build_rpms.sh # note the lack of sudo, that is important
-```
-Packages will be in the `hubble/pkg/dist/` directory. The only difference between the packages is the inclusion of `/etc/init.d/hubble` for el6 and the inclusion of a systemd unit file for el7. There's no guarantee of glibc compatibility.
-
-### Building standalone packages (Debian)
-```sh
-sudo yum install git -y
-git clone https://github.com/hubblestack/hubble
-cd hubble/pkg
-./build_debs.sh # note the lack of sudo, that is important
-```
-Package will be in the `hubble/pkg/dist/` directory. There's no guarantee of glibc compatibility.
-
 ### Building Hubble packages through Dockerfile
 Dockerfile aims to make building Hubble v2 packages easier. Dockerfiles can be found at `hubblestack/hubble/pkg`.
To build an image From 188137ee3d57555e0cc02f7893d4cd25ea37c7b1 Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Mon, 18 Sep 2017 15:25:25 -0600 Subject: [PATCH 09/35] Add first pass at splunk log handler --- hubblestack/splunklogging.py | 319 +++++++++++++++++++++++++++++++++++ 1 file changed, 319 insertions(+) create mode 100644 hubblestack/splunklogging.py diff --git a/hubblestack/splunklogging.py b/hubblestack/splunklogging.py new file mode 100644 index 000000000..aa26616f9 --- /dev/null +++ b/hubblestack/splunklogging.py @@ -0,0 +1,319 @@ +''' +Hubblestack python log handler for splunk + +Uses the same configuration as the rest of the splunk returners, returns to +the same destination but with an alternate sourcetype (``hubble_log`` by +default) + +.. code-block:: yaml + + hubblestack: + returner: + splunk: + - token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX + indexer: splunk-indexer.domain.tld + index: hubble + sourcetype_log: hubble_log + +You can also add an `custom_fields` argument which is a list of keys to add to events +with using the results of config.get(). These new keys will be prefixed +with 'custom_' to prevent conflicts. The values of these keys should be +strings or lists (will be sent as CSV string), do not choose grains or pillar values with complex values or they will +be skipped: + +.. 
code-block:: yaml + + hubblestack: + returner: + splunk: + - token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX + indexer: splunk-indexer.domain.tld + index: hubble + sourcetype_log: hubble_log + custom_fields: + - site + - product_group +''' +import socket +# Import cloud details +from hubblestack.extmods.returners.cloud_details import get_cloud_details + +# Imports for http event forwarder +import requests +import json +import time +from datetime import datetime + +import copy + +import logging + +_max_content_bytes = 100000 +http_event_collector_SSL_verify = False +http_event_collector_debug = False + +hec = None + + +class SplunkHandler(logging.Handler): + ''' + Log handler for splunk + ''' + def __init__(self): + try: + super(SplunkHandler, self).__init__() + + self.opts_list = _get_options() + self.clouds = get_cloud_details() + self.endpoint_list = [] + + for opts in self.opts_list: + http_event_collector_key = opts['token'] + http_event_collector_host = opts['indexer'] + http_event_collector_port = opts['port'] + hec_ssl = opts['http_event_server_ssl'] + proxy = opts['proxy'] + timeout = opts['timeout'] + custom_fields = opts['custom_fields'] + + # Set up the fields to be extracted at index time. The field values must be strings. + # Note that these fields will also still be available in the event data + index_extracted_fields = ['aws_instance_id', 'aws_account_id', 'azure_vmId'] + try: + index_extracted_fields.extend(opts['index_extracted_fields']) + except TypeError: + pass + + # Set up the collector + hec = http_event_collector(http_event_collector_key, http_event_collector_host, http_event_port=http_event_collector_port, http_event_server_ssl=hec_ssl, proxy=proxy, timeout=timeout) + + minion_id = __grains__['id'] + master = __grains__['master'] + fqdn = __grains__['fqdn'] + # Sometimes fqdn is blank. 
If it is, replace it with minion_id + fqdn = fqdn if fqdn else minion_id + try: + fqdn_ip4 = __grains__['fqdn_ip4'][0] + except IndexError: + fqdn_ip4 = __grains__['ipv4'][0] + if fqdn_ip4.startswith('127.'): + for ip4_addr in __grains__['ipv4']: + if ip4_addr and not ip4_addr.startswith('127.'): + fqdn_ip4 = ip4_addr + break + + event = {} + event.update({'master': master}) + event.update({'minion_id': minion_id}) + event.update({'dest_host': fqdn}) + event.update({'dest_ip': fqdn_ip4}) + + for cloud in self.clouds: + event.update(cloud) + + for custom_field in custom_fields: + custom_field_name = 'custom_' + custom_field + custom_field_value = __salt__['config.get'](custom_field, '') + if isinstance(custom_field_value, str): + event.update({custom_field_name: custom_field_value}) + elif isinstance(custom_field_value, list): + custom_field_value = ','.join(custom_field_value) + event.update({custom_field_name: custom_field_value}) + + payload = {} + payload.update({'host': fqdn}) + payload.update({'index': opts['index']}) + payload.update({'sourcetype': opts['sourcetype']}) + + # Potentially add metadata fields: + fields = {} + for item in index_extracted_fields: + if item in payload['event'] and not isinstance(payload['event'][item], (list, dict, tuple)): + fields[item] = str(payload['event'][item]) + if fields: + payload.update({'fields': fields}) + + self.endpoint_list.append((hec, event, payload)) + except Exception as exc: + print('Exception was thrown in splunk loghandler setup! 
{0}'
+                  .format(exc))
+
+    def emit(self, record):
+        '''
+        Emit a single record using the hec/event template/payload template
+        generated in __init__()
+        '''
+        log_entry = self.format_record(record)
+        for hec, event, payload in self.endpoint_list:
+            event = copy.deepcopy(event)
+            payload = copy.deepcopy(payload)
+            event.update(log_entry)
+            payload['event'] = event
+            hec.batchEvent(payload)
+            hec.flushBatch()
+        return True
+
+    def format_record(self, record):
+        '''
+        Format the log record into a dictionary for easy insertion into a
+        splunk event dictionary
+        '''
+        log_entry = {'message': record.message,
+                     'level': record.levelname,
+                     'timestamp': record.asctime,
+                     'loggername': record.name,
+                     }
+        return log_entry
+
+
+def _get_options():
+    if __salt__['config.get']('hubblestack:returner:splunk'):
+        splunk_opts = []
+        returner_opts = __salt__['config.get']('hubblestack:returner:splunk')
+        if not isinstance(returner_opts, list):
+            returner_opts = [returner_opts]
+        for opt in returner_opts:
+            processed = {}
+            processed['token'] = opt.get('token')
+            processed['indexer'] = opt.get('indexer')
+            processed['port'] = str(opt.get('port', '8088'))
+            processed['index'] = opt.get('index')
+            processed['custom_fields'] = opt.get('custom_fields', [])
+            processed['sourcetype'] = opt.get('sourcetype_log', 'hubble_log')
+            processed['http_event_server_ssl'] = opt.get('hec_ssl', True)
+            processed['proxy'] = opt.get('proxy', {})
+            processed['timeout'] = opt.get('timeout', 9.05)
+            processed['index_extracted_fields'] = opt.get('index_extracted_fields', [])
+            splunk_opts.append(processed)
+        return splunk_opts
+    else:
+        splunk_opts = {}
+        splunk_opts['token'] = __salt__['config.get']('hubblestack:nova:returner:splunk:token').strip()
+        splunk_opts['indexer'] = __salt__['config.get']('hubblestack:nova:returner:splunk:indexer')
+        splunk_opts['port'] = __salt__['config.get']('hubblestack:nova:returner:splunk:port', '8088')
+        splunk_opts['index'] = __salt__['config.get']('hubblestack:nova:returner:splunk:index')
+        splunk_opts['custom_fields'] = __salt__['config.get']('hubblestack:nova:returner:splunk:custom_fields', [])
+        splunk_opts['sourcetype'] = __salt__['config.get']('hubblestack:nova:returner:splunk:sourcetype')
+        splunk_opts['http_event_server_ssl'] = __salt__['config.get']('hubblestack:nova:returner:splunk:hec_ssl', True)
+        splunk_opts['proxy'] = __salt__['config.get']('hubblestack:nova:returner:splunk:proxy', {})
+        splunk_opts['timeout'] = __salt__['config.get']('hubblestack:nova:returner:splunk:timeout', 9.05)
+        splunk_opts['index_extracted_fields'] = __salt__['config.get']('hubblestack:nova:returner:splunk:index_extracted_fields', [])
+
+        return [splunk_opts]
+
+
+# Thanks to George Starcher for the http_event_collector class (https://github.com/georgestarcher/)
+# Default batch max size to match splunk's default limits for max byte
+# See http_input stanza in limits.conf; note in testing I had to limit to 100,000 to avoid http event collector breaking connection
+# Auto flush will occur if next event payload will exceed limit
+
+class http_event_collector:
+
+    def __init__(self, token, http_event_server, host='', http_event_port='8088', http_event_server_ssl=True, max_bytes=_max_content_bytes, proxy=None, timeout=9.05):
+        self.timeout = timeout
+        self.token = token
+        self.batchEvents = []
+        self.maxByteLength = max_bytes
+        self.currentByteLength = 0
+        self.server_uri = []
+        if proxy and http_event_server_ssl:
+            self.proxy = {'https': 'https://{0}'.format(proxy)}
+        elif proxy:
+            self.proxy = {'http': 'http://{0}'.format(proxy)}
+        else:
+            self.proxy = {}
+
+        # Set host to specified value or default to localhostname if no value provided
+        if host:
+            self.host = host
+        else:
+            self.host = socket.gethostname()
+
+        # Build and set server_uri for http event collector
+        # Defaults to SSL if flag not passed
+        # Defaults to port 8088 if port not passed
+
+        servers = http_event_server
+        if not isinstance(servers, list):
+            servers = [servers]
+        for server in servers:
+            if http_event_server_ssl:
+                self.server_uri.append(['https://%s:%s/services/collector/event' % (server, http_event_port), True])
+            else:
+                self.server_uri.append(['http://%s:%s/services/collector/event' % (server, http_event_port), True])
+
+        if http_event_collector_debug:
+            print self.token
+            print self.server_uri
+
+    def sendEvent(self, payload, eventtime=''):
+        # Method to immediately send an event to the http event collector
+
+        headers = {'Authorization': 'Splunk ' + self.token}
+
+        # If eventtime in epoch not passed as optional argument use current system time in epoch
+        if not eventtime:
+            eventtime = str(int(time.time()))
+
+        # Fill in local hostname if not manually populated
+        if 'host' not in payload:
+            payload.update({'host': self.host})
+
+        # Update time value on payload if need to use system time
+        data = {'time': eventtime}
+        data.update(payload)
+
+        # send event to http event collector
+        r = requests.post(self.server_uri, data=json.dumps(data), headers=headers, verify=http_event_collector_SSL_verify, proxies=self.proxy)
+
+        # Print debug info if flag set
+        if http_event_collector_debug:
+            logger.debug(r.text)
+            logger.debug(data)
+
+    def batchEvent(self, payload, eventtime=''):
+        # Method to store the event in a batch to flush later
+
+        # Fill in local hostname if not manually populated
+        if 'host' not in payload:
+            payload.update({'host': self.host})
+
+        payloadLength = len(json.dumps(payload))
+
+        if (self.currentByteLength + payloadLength) > self.maxByteLength:
+            self.flushBatch()
+            # Print debug info if flag set
+            if http_event_collector_debug:
+                print 'auto flushing'
+        else:
+            self.currentByteLength = self.currentByteLength + payloadLength
+
+        # If eventtime in epoch not passed as optional argument use current system time in epoch
+        if not eventtime:
+            eventtime = str(int(time.time()))
+
+        # Update time value on payload if need to use system time
+        data = {'time': eventtime}
+        data.update(payload)
+
+        self.batchEvents.append(json.dumps(data))
+
+    def flushBatch(self):
+        # Method to flush the batch list of events
+
+        if len(self.batchEvents) > 0:
+            headers = {'Authorization': 'Splunk ' + self.token}
+            self.server_uri = [x for x in self.server_uri if x[1] is not False]
+            for server in self.server_uri:
+                try:
+                    r = requests.post(server[0], data=' '.join(self.batchEvents), headers=headers, verify=http_event_collector_SSL_verify, proxies=self.proxy, timeout=self.timeout)
+                    r.raise_for_status()
+                    server[1] = True
+                    break
+                except requests.exceptions.RequestException:
+                    log.info('Request to splunk server "%s" failed. Marking as bad.' % server[0])
+                    server[1] = False
+                except Exception as e:
+                    log.error('Request to splunk threw an error: {0}'.format(e))
+            self.batchEvents = []
+            self.currentByteLength = 0

From 1c65a1374d96730f67eb24f52a3e2e2b936b063f Mon Sep 17 00:00:00 2001
From: Colton Myers
Date: Mon, 18 Sep 2017 15:52:33 -0600
Subject: [PATCH 10/35] Add SplunkHandler to daemon.py

---
 hubblestack/daemon.py        |   9 +++
 hubblestack/splunklogging.py | 150 +++++++++++++++++------------------
 2 files changed, 82 insertions(+), 77 deletions(-)

diff --git a/hubblestack/daemon.py b/hubblestack/daemon.py
index cbe2a1399..d0cb5931b 100644
--- a/hubblestack/daemon.py
+++ b/hubblestack/daemon.py
@@ -18,6 +18,7 @@
 import salt.utils
 import salt.utils.jid
 import salt.log.setup
+import hubblestack.splunklogging
 from hubblestack import __version__
 
 log = logging.getLogger(__name__)
@@ -376,6 +377,14 @@ def load_config():
     __salt__ = salt.loader.minion_mods(__opts__, utils=__utils__)
     __returners__ = salt.loader.returners(__opts__, __salt__)
 
+    if __salt__['config.get']('hubblestack:splunklogging', False):
+        hubblestack.splunklogging.__grains__ = __grains__
+        hubblestack.splunklogging.__salt__ = __salt__
+        root_logger = logging.getLogger()
+        handler = hubblestack.splunklogging.SplunkHandler()
+        handler.setLevel(logging.ERROR)
+
root_logger.addHandler(handler) + def parse_args(): ''' diff --git a/hubblestack/splunklogging.py b/hubblestack/splunklogging.py index aa26616f9..a52246f70 100644 --- a/hubblestack/splunklogging.py +++ b/hubblestack/splunklogging.py @@ -60,83 +60,79 @@ class SplunkHandler(logging.Handler): Log handler for splunk ''' def __init__(self): - try: - super(SplunkHandler, self).__init__() - - self.opts_list = _get_options() - self.clouds = get_cloud_details() - self.endpoint_list = [] - - for opts in self.opts_list: - http_event_collector_key = opts['token'] - http_event_collector_host = opts['indexer'] - http_event_collector_port = opts['port'] - hec_ssl = opts['http_event_server_ssl'] - proxy = opts['proxy'] - timeout = opts['timeout'] - custom_fields = opts['custom_fields'] - - # Set up the fields to be extracted at index time. The field values must be strings. - # Note that these fields will also still be available in the event data - index_extracted_fields = ['aws_instance_id', 'aws_account_id', 'azure_vmId'] - try: - index_extracted_fields.extend(opts['index_extracted_fields']) - except TypeError: - pass - - # Set up the collector - hec = http_event_collector(http_event_collector_key, http_event_collector_host, http_event_port=http_event_collector_port, http_event_server_ssl=hec_ssl, proxy=proxy, timeout=timeout) - - minion_id = __grains__['id'] - master = __grains__['master'] - fqdn = __grains__['fqdn'] - # Sometimes fqdn is blank. 
If it is, replace it with minion_id - fqdn = fqdn if fqdn else minion_id - try: - fqdn_ip4 = __grains__['fqdn_ip4'][0] - except IndexError: - fqdn_ip4 = __grains__['ipv4'][0] - if fqdn_ip4.startswith('127.'): - for ip4_addr in __grains__['ipv4']: - if ip4_addr and not ip4_addr.startswith('127.'): - fqdn_ip4 = ip4_addr - break - - event = {} - event.update({'master': master}) - event.update({'minion_id': minion_id}) - event.update({'dest_host': fqdn}) - event.update({'dest_ip': fqdn_ip4}) - - for cloud in self.clouds: - event.update(cloud) - - for custom_field in custom_fields: - custom_field_name = 'custom_' + custom_field - custom_field_value = __salt__['config.get'](custom_field, '') - if isinstance(custom_field_value, str): - event.update({custom_field_name: custom_field_value}) - elif isinstance(custom_field_value, list): - custom_field_value = ','.join(custom_field_value) - event.update({custom_field_name: custom_field_value}) - - payload = {} - payload.update({'host': fqdn}) - payload.update({'index': opts['index']}) - payload.update({'sourcetype': opts['sourcetype']}) - - # Potentially add metadata fields: - fields = {} - for item in index_extracted_fields: - if item in payload['event'] and not isinstance(payload['event'][item], (list, dict, tuple)): - fields[item] = str(payload['event'][item]) - if fields: - payload.update({'fields': fields}) - - self.endpoint_list.append((hec, event, payload)) - except Exception as exc: - print('Exception was thrown in splunk loghandler setup! 
{0}' - .format(exc)) + super(SplunkHandler, self).__init__() + + self.opts_list = _get_options() + self.clouds = get_cloud_details() + self.endpoint_list = [] + + for opts in self.opts_list: + http_event_collector_key = opts['token'] + http_event_collector_host = opts['indexer'] + http_event_collector_port = opts['port'] + hec_ssl = opts['http_event_server_ssl'] + proxy = opts['proxy'] + timeout = opts['timeout'] + custom_fields = opts['custom_fields'] + + # Set up the fields to be extracted at index time. The field values must be strings. + # Note that these fields will also still be available in the event data + index_extracted_fields = ['aws_instance_id', 'aws_account_id', 'azure_vmId'] + try: + index_extracted_fields.extend(opts['index_extracted_fields']) + except TypeError: + pass + + # Set up the collector + hec = http_event_collector(http_event_collector_key, http_event_collector_host, http_event_port=http_event_collector_port, http_event_server_ssl=hec_ssl, proxy=proxy, timeout=timeout) + + minion_id = __grains__['id'] + master = __grains__['master'] + fqdn = __grains__['fqdn'] + # Sometimes fqdn is blank. 
If it is, replace it with minion_id + fqdn = fqdn if fqdn else minion_id + try: + fqdn_ip4 = __grains__['fqdn_ip4'][0] + except IndexError: + fqdn_ip4 = __grains__['ipv4'][0] + if fqdn_ip4.startswith('127.'): + for ip4_addr in __grains__['ipv4']: + if ip4_addr and not ip4_addr.startswith('127.'): + fqdn_ip4 = ip4_addr + break + + event = {} + event.update({'master': master}) + event.update({'minion_id': minion_id}) + event.update({'dest_host': fqdn}) + event.update({'dest_ip': fqdn_ip4}) + + for cloud in self.clouds: + event.update(cloud) + + for custom_field in custom_fields: + custom_field_name = 'custom_' + custom_field + custom_field_value = __salt__['config.get'](custom_field, '') + if isinstance(custom_field_value, str): + event.update({custom_field_name: custom_field_value}) + elif isinstance(custom_field_value, list): + custom_field_value = ','.join(custom_field_value) + event.update({custom_field_name: custom_field_value}) + + payload = {} + payload.update({'host': fqdn}) + payload.update({'index': opts['index']}) + payload.update({'sourcetype': opts['sourcetype']}) + + # Potentially add metadata fields: + fields = {} + for item in index_extracted_fields: + if item in payload['event'] and not isinstance(payload['event'][item], (list, dict, tuple)): + fields[item] = str(payload['event'][item]) + if fields: + payload.update({'fields': fields}) + + self.endpoint_list.append((hec, event, payload)) def emit(self, record): ''' From b20895e7b93750aaa8ebf559c7bd6230484d321f Mon Sep 17 00:00:00 2001 From: rashmip29 Date: Mon, 18 Sep 2017 15:54:56 -0600 Subject: [PATCH 11/35] changes added in the dockerfile build section --- README.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 533ae4b8f..c7c17f04a 100644 --- a/README.md +++ b/README.md @@ -38,15 +38,14 @@ A config template has been placed in `/etc/hubble/hubble`. 
Modify it to your spe
 The first two commands you should run to make sure things are set up correctly are `hubble --version` and `hubble test.ping`.
 
 ### Buidling Hubble packages through Dockerfile
-Dockerfile aims to make building Hubble v2 packages easier. Dockerfiles can be found at `hubblestack/hubble/pkg`.
+Dockerfile aims to make building Hubble v2 packages easier. Dockerfiles for the distribution you want to build can be found at the path `hubblestack/hubble/pkg`. For e.g. for centos6 distribution the dockerfile is at the path 'hubblestack/hubble/pkg/centos6/'
 To build an image
 ```sh
-1. copy pkg/scripts/pyinstaller-requirements.txt to directory with this Dockerfile
-2. docker build -t
+docker build -t
 ```
 To run the container
 ```sh
-docker run -it --rm
+docker run -it --rm -v `pwd`:/data
 ```
 
 ## Nova

From 0f4e481887d59138a1c8c83c9b5e48fbc9a4463c Mon Sep 17 00:00:00 2001
From: rashmip29
Date: Mon, 18 Sep 2017 15:55:43 -0600
Subject: [PATCH 12/35] Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index c7c17f04a..f15ae8185 100644
--- a/README.md
+++ b/README.md
@@ -39,6 +39,7 @@ The first two commands you should run to make sure things are set up correctly a
 ### Buidling Hubble packages through Dockerfile
 Dockerfile aims to make building Hubble v2 packages easier. Dockerfiles for the distribution you want to build can be found at the path `hubblestack/hubble/pkg`. For e.g. for centos6 distribution the dockerfile is at the path 'hubblestack/hubble/pkg/centos6/'
+
 To build an image
 ```sh
 docker build -t

From 5690b1a0887ceecf81b5605e262c924b9fa90e23 Mon Sep 17 00:00:00 2001
From: rashmip29
Date: Mon, 18 Sep 2017 15:57:39 -0600
Subject: [PATCH 13/35] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f15ae8185..ac0c14241 100644
--- a/README.md
+++ b/README.md
@@ -38,7 +38,7 @@ A config template has been placed in `/etc/hubble/hubble`. Modify it to your spe
 The first two commands you should run to make sure things are set up correctly are `hubble --version` and `hubble test.ping`.
 
 ### Buidling Hubble packages through Dockerfile
-Dockerfile aims to make building Hubble v2 packages easier. Dockerfiles for the distribution you want to build can be found at the path `hubblestack/hubble/pkg`. For e.g. for centos6 distribution the dockerfile is at the path 'hubblestack/hubble/pkg/centos6/'
+Dockerfile aims to build the Hubble v2 packages easier. Dockerfiles for the distribution you want to build can be found at the path `hubblestack/hubble/pkg`. For e.g. for centos6 distribution the dockerfile is at the path 'hubblestack/hubble/pkg/centos6/'
 
 To build an image
 ```sh

From e2155a56b7e1e842f7003f200ddf238cc9f9e196 Mon Sep 17 00:00:00 2001
From: Colton Myers
Date: Mon, 18 Sep 2017 15:58:24 -0600
Subject: [PATCH 14/35] Fix some copy pasta

---
 conf/hubble                  | 2 ++
 hubblestack/splunklogging.py | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/conf/hubble b/conf/hubble
index 1df67616b..ce6e21b09 100644
--- a/conf/hubble
+++ b/conf/hubble
@@ -89,6 +89,8 @@ fileserver_backend:
 # sourcetype_nova: hubble_audit
 # sourcetype_nebula: hubble_osquery
 # sourcetype_pulsar: hubble_fim
+# sourcetype_log: hubble_log
+# splunklogging: True
 
 ## If you are instead using the slack returner, you'll need a block similar to
 ## this:

diff --git a/hubblestack/splunklogging.py b/hubblestack/splunklogging.py
index a52246f70..33d1fd8a9 100644
--- a/hubblestack/splunklogging.py
+++ b/hubblestack/splunklogging.py
@@ -127,8 +127,8 @@ def __init__(self):
             # Potentially add metadata fields:
             fields = {}
             for item in index_extracted_fields:
-                if item in payload['event'] and not isinstance(payload['event'][item], (list, dict, tuple)):
-                    fields[item] = str(payload['event'][item])
+                if item in event and not isinstance(event[item], (list, dict, tuple)):
+                    fields[item] = str(event[item])
             if fields:
                 payload.update({'fields': fields})
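The copy-paste fix in [PATCH 14/35] above changes the index-time field extraction to read from `event` rather than `payload['event']`, which is only attached later in `emit()`. Here is a minimal, self-contained sketch of that extraction logic (the event values and field names below are made up for illustration, not taken from a real Hubble deployment); Splunk index-time `fields` values must be strings, so list/dict/tuple values are skipped:

```python
import json

# Hypothetical stand-ins for the grains-derived event and configured opts.
event = {'minion_id': 'web01', 'aws_instance_id': 'i-0abc123', 'tags': ['a', 'b']}
index_extracted_fields = ['aws_instance_id', 'aws_account_id', 'tags']

payload = {'host': 'web01', 'index': 'hubble', 'sourcetype': 'hubble_log'}

# Mirror of the patched logic: filter on `event` (not payload['event']),
# skip keys that are absent, and skip non-scalar values, since Splunk
# index-time fields must be strings.
fields = {item: str(event[item])
          for item in index_extracted_fields
          if item in event and not isinstance(event[item], (list, dict, tuple))}
if fields:
    payload['fields'] = fields

print(json.dumps(payload, sort_keys=True))
```

Only `aws_instance_id` survives the filter here: `aws_account_id` is missing from the event, and `tags` is a list, which Splunk would reject as an index-time field value.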
From c7159879ece4176f1e766696337d7ce2e48cd5b1 Mon Sep 17 00:00:00 2001
From: rashmip29
Date: Mon, 18 Sep 2017 16:01:51 -0600
Subject: [PATCH 15/35] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index ac0c14241..06cdf5eea 100644
--- a/README.md
+++ b/README.md
@@ -38,7 +38,7 @@ A config template has been placed in `/etc/hubble/hubble`. Modify it to your spe
 The first two commands you should run to make sure things are set up correctly are `hubble --version` and `hubble test.ping`.
 
 ### Buidling Hubble packages through Dockerfile
-Dockerfile aims to build the Hubble v2 packages easier. Dockerfiles for the distribution you want to build can be found at the path `hubblestack/hubble/pkg`. For e.g. for centos6 distribution the dockerfile is at the path 'hubblestack/hubble/pkg/centos6/'
+Dockerfile aims to build the Hubble v2 packages easier. Dockerfiles for the distribution you want to build can be found at the path `hubblestack/hubble/pkg`. For e.g. for centos6 distribution the dockerfile is at the path `hubblestack/hubble/pkg/centos6/`
 
 To build an image
 ```sh

From 2b0c39f6ab3204e34c7838f224c3b9d34f32049f Mon Sep 17 00:00:00 2001
From: Colton Myers
Date: Mon, 18 Sep 2017 16:26:53 -0600
Subject: [PATCH 16/35] Add eventtime

---
 hubblestack/splunklogging.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hubblestack/splunklogging.py b/hubblestack/splunklogging.py
index 33d1fd8a9..49025a228 100644
--- a/hubblestack/splunklogging.py
+++ b/hubblestack/splunklogging.py
@@ -145,7 +145,7 @@ def emit(self, record):
             payload = copy.deepcopy(payload)
             event.update(log_entry)
             payload['event'] = event
-            hec.batchEvent(payload)
+            hec.batchEvent(payload, eventtime=time.time())
             hec.flushBatch()
         return True

From cbae040f370ae5c31233399a201af013482d47e7 Mon Sep 17 00:00:00 2001
From: rashmip29
Date: Mon, 18 Sep 2017 16:27:46 -0600
Subject: [PATCH 17/35] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 06cdf5eea..4d955ec9a 100644
--- a/README.md
+++ b/README.md
@@ -38,13 +38,13 @@ A config template has been placed in `/etc/hubble/hubble`. Modify it to your spe
 The first two commands you should run to make sure things are set up correctly are `hubble --version` and `hubble test.ping`.
 
 ### Buidling Hubble packages through Dockerfile
-Dockerfile aims to build the Hubble v2 packages easier. Dockerfiles for the distribution you want to build can be found at the path `hubblestack/hubble/pkg`. For e.g. for centos6 distribution the dockerfile is at the path `hubblestack/hubble/pkg/centos6/`
+Dockerfile aims to build the Hubble v2 packages easier. Dockerfiles for the distribution you want to build can be found at the path `/pkg`. For example, dockerfile for centos6 distribution is at the path `/pkg/centos6/`
 
 To build an image
 ```sh
 docker build -t
 ```
-To run the container
+To run the container (which will output the package file in your current directory)
 ```sh
 docker run -it --rm -v `pwd`:/data
 ```

From 3c443fae002af009efd77667c063b2400df6b50d Mon Sep 17 00:00:00 2001
From: Colton Myers
Date: Mon, 18 Sep 2017 16:58:41 -0600
Subject: [PATCH 18/35] Remove old style config

---
 hubblestack/splunklogging.py | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/hubblestack/splunklogging.py b/hubblestack/splunklogging.py
index 49025a228..604df24c3 100644
--- a/hubblestack/splunklogging.py
+++ b/hubblestack/splunklogging.py
@@ -183,19 +183,7 @@ def _get_options():
             splunk_opts.append(processed)
         return splunk_opts
     else:
-        splunk_opts = {}
-        splunk_opts['token'] = __salt__['config.get']('hubblestack:nova:returner:splunk:token').strip()
-        splunk_opts['indexer'] = __salt__['config.get']('hubblestack:nova:returner:splunk:indexer')
-        splunk_opts['port'] = __salt__['config.get']('hubblestack:nova:returner:splunk:port', '8088')
-        splunk_opts['index'] = __salt__['config.get']('hubblestack:nova:returner:splunk:index')
-        splunk_opts['custom_fields'] = __salt__['config.get']('hubblestack:nova:returner:splunk:custom_fields', [])
-        splunk_opts['sourcetype'] = __salt__['config.get']('hubblestack:nova:returner:splunk:sourcetype')
-        splunk_opts['http_event_server_ssl'] = __salt__['config.get']('hubblestack:nova:returner:splunk:hec_ssl', True)
-        splunk_opts['proxy'] = __salt__['config.get']('hubblestack:nova:returner:splunk:proxy', {})
-        splunk_opts['timeout'] = __salt__['config.get']('hubblestack:nova:returner:splunk:timeout', 9.05)
-        splunk_opts['index_extracted_fields'] = __salt__['config.get']('hubblestack:nova:returner:splunk:index_extracted_fields', [])
-
-        return [splunk_opts]
+        raise Exception('Cannot find splunk config at `hubblestack:returner:splunk`!')
 
 
 # Thanks to George
Starcher for the http_event_collector class (https://github.com/georgestarcher/) From 26ea500eca2682ab6f1e095f0ce2e07916eb2b99 Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Mon, 18 Sep 2017 17:22:53 -0600 Subject: [PATCH 19/35] Target v2.2.4 not develop --- pkg/amazonlinux2016.09/Dockerfile | 2 +- pkg/amazonlinux2017.03/Dockerfile | 2 +- pkg/centos6/Dockerfile | 2 +- pkg/centos7/Dockerfile | 2 +- pkg/debian8/Dockerfile | 2 +- pkg/debian9/Dockerfile | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/pkg/amazonlinux2016.09/Dockerfile b/pkg/amazonlinux2016.09/Dockerfile index 2659b040c..4fc65f8c1 100644 --- a/pkg/amazonlinux2016.09/Dockerfile +++ b/pkg/amazonlinux2016.09/Dockerfile @@ -91,7 +91,7 @@ RUN yum install -y ruby ruby-devel rpmbuild rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop +ENV HUBBLE_CHECKOUT=v2.2.4 ENV HUBBLE_VERSION=2.2.4 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src diff --git a/pkg/amazonlinux2017.03/Dockerfile b/pkg/amazonlinux2017.03/Dockerfile index f27a163da..a083fc0c5 100644 --- a/pkg/amazonlinux2017.03/Dockerfile +++ b/pkg/amazonlinux2017.03/Dockerfile @@ -91,7 +91,7 @@ RUN yum install -y ruby ruby-devel rpmbuild rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop +ENV HUBBLE_CHECKOUT=v2.2.4 ENV HUBBLE_VERSION=2.2.4 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src diff --git a/pkg/centos6/Dockerfile b/pkg/centos6/Dockerfile index ecad4e7aa..a84f9ce0e 100644 --- a/pkg/centos6/Dockerfile +++ b/pkg/centos6/Dockerfile @@ -93,7 +93,7 @@ RUN yum install -y 
rpmbuild gcc make rh-ruby23 rh-ruby23-ruby-devel \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop +ENV HUBBLE_CHECKOUT=v2.2.4 ENV HUBBLE_VERSION=2.2.4 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src diff --git a/pkg/centos7/Dockerfile b/pkg/centos7/Dockerfile index b10fceb79..6f60bbcdc 100644 --- a/pkg/centos7/Dockerfile +++ b/pkg/centos7/Dockerfile @@ -90,7 +90,7 @@ RUN yum install -y ruby ruby-devel rpmbuild rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop +ENV HUBBLE_CHECKOUT=v2.2.4 ENV HUBBLE_VERSION=2.2.4 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src diff --git a/pkg/debian8/Dockerfile b/pkg/debian8/Dockerfile index 42a1992a6..3c649996e 100644 --- a/pkg/debian8/Dockerfile +++ b/pkg/debian8/Dockerfile @@ -96,7 +96,7 @@ RUN apt-get install -y ruby ruby-dev rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop +ENV HUBBLE_CHECKOUT=v2.2.4 ENV HUBBLE_VERSION=2.2.4 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src diff --git a/pkg/debian9/Dockerfile b/pkg/debian9/Dockerfile index d20553c0e..1deaf9360 100644 --- a/pkg/debian9/Dockerfile +++ b/pkg/debian9/Dockerfile @@ -92,7 +92,7 @@ RUN apt-get install -y ruby ruby-dev rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables 
to choose the version of hubble -ENV HUBBLE_CHECKOUT=develop +ENV HUBBLE_CHECKOUT=v2.2.4 ENV HUBBLE_VERSION=2.2.4 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src From 69c024e3d0162ea78b24922deb4e8b179da6d4da Mon Sep 17 00:00:00 2001 From: yassingh Date: Tue, 19 Sep 2017 17:54:39 +0530 Subject: [PATCH 20/35] Updating stat_nova module and added some new misc functions * Updating stat_nova module and added some new misc functions --- hubblestack/files/hubblestack_nova/misc.py | 460 ++++++++++++------ .../files/hubblestack_nova/stat_nova.py | 120 ++++- hubblestack/files/hubblestack_nova/sysctl.py | 2 + 3 files changed, 427 insertions(+), 155 deletions(-) diff --git a/hubblestack/files/hubblestack_nova/misc.py b/hubblestack/files/hubblestack_nova/misc.py index e093b93b7..975f95777 100644 --- a/hubblestack/files/hubblestack_nova/misc.py +++ b/hubblestack/files/hubblestack_nova/misc.py @@ -167,6 +167,14 @@ def _get_tags(data): # Begin function definitions ############################ + +def _execute_shell_command(cmd): + ''' + This function will execute passed command in /bin/shell + ''' + return __salt__['cmd.run'](cmd, python_shell=True, shell='/bin/bash', ignore_retcode=True) + + def _is_valid_home_directory(directory_path, check_slash_home=False): directory_path = None if directory_path is None else directory_path.strip() if directory_path is not None and directory_path != "" and os.path.isdir(directory_path): @@ -177,11 +185,49 @@ def _is_valid_home_directory(directory_path, check_slash_home=False): return False -def _execute_shell_command(cmd): + +def _is_permission_in_limit(max_permission,given_permission): ''' - This function will execute passed command in /bin/shell + Return true only if given_permission is not more linient that max_permission. In other words, if + r or w or x is present in given_permission but absent in max_permission, it should return False + Takes input two integer values from 0 to 7. 
''' - return __salt__['cmd.run'](cmd, python_shell=True, shell='/bin/bash', ignore_retcode=True) + max_permission = int(max_permission) + given_permission = int(given_permission) + allowed_r = False + allowed_w = False + allowed_x = False + given_r = False + given_w = False + given_x = False + + if max_permission >= 4: + allowed_r = True + max_permission = max_permission - 4 + if max_permission >= 2: + allowed_w = True + max_permission = max_permission - 2 + if max_permission >= 1: + allowed_x = True + + if given_permission >= 4: + given_r = True + given_permission = given_permission - 4 + if given_permission >= 2: + given_w = True + given_permission = given_permission - 2 + if given_permission >= 1: + given_x = True + + if given_r and ( not allowed_r ): + return False + if given_w and ( not allowed_w ): + return False + if given_x and ( not allowed_x ): + return False + + return True + def check_all_ports_firewall_rules(reason=''): ''' @@ -199,63 +245,56 @@ def check_all_ports_firewall_rules(reason=''): if open_port not in firewall_ports: no_firewall_ports.append(open_port) - if len(no_firewall_ports) == 0: - return True - return str(no_firewall_ports) + return True if len(no_firewall_ports) == 0 else str(no_firewall_ports) + def check_password_fields_not_empty(reason=''): ''' Ensure password fields are not empty ''' result = _execute_shell_command('cat /etc/shadow | awk -F: \'($2 == "" ) { print $1 " does not have a password "}\'') - if result == '': - return True - return result + return True if result == '' else result + def ungrouped_files_or_dir(reason=''): ''' Ensure no ungrouped files or directories exist ''' result = _execute_shell_command('df --local -P | awk {\'if (NR!=1) print $6\'} | xargs -I \'{}\' find \'{}\' -xdev -nogroup') - if result == '': - return True - return result + return True if result == '' else result + def unowned_files_or_dir(reason=''): ''' Ensure no unowned files or directories exist ''' result = _execute_shell_command('df --local 
-P | awk {\'if (NR!=1) print $6\'} | xargs -I \'{}\' find \'{}\' -xdev -nouser') - if result == '': - return True - return result + return True if result == '' else result + def world_writable_file(reason=''): ''' Ensure no world writable files exist ''' result = _execute_shell_command('df --local -P | awk {\'if (NR!=1) print $6\'} | xargs -I \'{}\' find \'{}\' -xdev -type f -perm -0002') - if result == '': - return True - return result + return True if result == '' else result + def system_account_non_login(reason=''): ''' Ensure system accounts are non-login ''' result = _execute_shell_command('egrep -v "^\+" /etc/passwd | awk -F: \'($1!="root" && $1!="sync" && $1!="shutdown" && $1!="halt" && $3<500 && $7!="/sbin/nologin" && $7!="/bin/false") {print}\'') - if result == '': - return True - return result + return True if result == '' else result + def sticky_bit_on_world_writable_dirs(reason=''): ''' Ensure sticky bit is set on all world-writable directories ''' result = _execute_shell_command('df --local -P | awk {\'if (NR!=1) print $6\'} | xargs -I \'{}\' find \'{}\' -xdev -type d \( -perm -0002 -a ! 
-perm -1000 \) 2>/dev/null') - if result == '': - return True - return "There are failures" + return True if result == '' else "There are failures" + def default_group_for_root(reason=''): ''' @@ -263,38 +302,16 @@ def default_group_for_root(reason=''): ''' result = _execute_shell_command('grep "^root:" /etc/passwd | cut -f4 -d:') result = result.strip() - if result == '0': - return True - return False + return True if result == '0' else False + def root_is_only_uid_0_account(reason=''): ''' Ensure root is the only UID 0 account ''' result = _execute_shell_command('cat /etc/passwd | awk -F: \'($3 == 0) { print $1 }\'') - if result.strip() == 'root': - return True - return result + return True if result.strip() == 'root' else result -def test_success(): - ''' - Automatically returns success - ''' - return True - - -def test_failure(): - ''' - Automatically returns failure, no reason - ''' - return False - - -def test_failure_reason(reason): - ''' - Automatically returns failure, with a reason (first arg) - ''' - return reason def test_mount_attrs(mount_name,attribute,check_type='hard'): ''' @@ -302,13 +319,13 @@ def test_mount_attrs(mount_name,attribute,check_type='hard'): If check_type is soft, then in absence of volume, True will be returned If check_type is hard, then in absence of volume, False will be returned ''' - #check that the path exists on system + # check that the path exists on system command = 'test -e ' + mount_name + ' ; echo $?' 
output = _execute_shell_command( command) if output.strip() == '1': return True if check_type == "soft" else (mount_name + " folder does not exist") - #if the path exits, proceed with following code + # if the path exits, proceed with following code output = _execute_shell_command('mount | grep ' + mount_name) if output.strip() == '': return True if check_type == "soft" else (mount_name + " is not mounted") @@ -317,6 +334,7 @@ def test_mount_attrs(mount_name,attribute,check_type='hard'): else: return True + def check_time_synchronization(reason=''): ''' Ensure that some service is running to synchronize the system clock @@ -342,49 +360,6 @@ def restrict_permissions(path,permission): return given_permission -def _is_permission_in_limit(max_permission,given_permission): - ''' - Return true only if given_permission is not more linient that max_permission. In other words, if - r or w or x is present in given_permission but absent in max_permission, it should return False - Takes input two integer values from 0 to 7. - ''' - max_permission = int(max_permission) - given_permission = int(given_permission) - allowed_r = False - allowed_w = False - allowed_x = False - given_r = False - given_w = False - given_x = False - - if max_permission >= 4: - allowed_r = True - max_permission = max_permission - 4 - if max_permission >= 2: - allowed_w = True - max_permission = max_permission - 2 - if max_permission >= 1: - allowed_x = True - - if given_permission >= 4: - given_r = True - given_permission = given_permission - 4 - if given_permission >= 2: - given_w = True - given_permission = given_permission - 2 - if given_permission >= 1: - given_x = True - - if given_r and ( not allowed_r ): - return False - if given_w and ( not allowed_w ): - return False - if given_x and ( not allowed_x ): - return False - - return True - - def check_path_integrity(reason=''): ''' Ensure that system PATH variable is not malformed. 
@@ -428,10 +403,7 @@ def check_path_integrity(reason=''): """ output = _execute_shell_command(script) - if output.strip() == '': - return True - else: - return output + return True if output.strip() == '' else output def check_duplicate_uids(reason=''): @@ -443,7 +415,6 @@ def check_duplicate_uids(reason=''): duplicate_uids = [k for k,v in Counter(uids).items() if v>1] if duplicate_uids is None or duplicate_uids == []: return True - return str(duplicate_uids) @@ -456,7 +427,6 @@ def check_duplicate_gids(reason=''): duplicate_gids = [k for k,v in Counter(gids).items() if v>1] if duplicate_gids is None or duplicate_gids == []: return True - return str(duplicate_gids) @@ -469,7 +439,6 @@ def check_duplicate_unames(reason=''): duplicate_unames = [k for k,v in Counter(unames).items() if v>1] if duplicate_unames is None or duplicate_unames == []: return True - return str(duplicate_unames) @@ -482,7 +451,6 @@ def check_duplicate_gnames(reason=''): duplicate_gnames = [k for k,v in Counter(gnames).items() if v>1] if duplicate_gnames is None or duplicate_gnames == []: return True - return str(duplicate_gnames) @@ -497,10 +465,7 @@ def check_directory_files_permission(path,permission): per = restrict_permissions(file_in_directory, permission) if per is not True: bad_permission_files += [file_in_directory + ": Bad Permission - " + per + ":"] - if bad_permission_files == []: - return True - - return str(bad_permission_files) + return True if bad_permission_files == [] else str(bad_permission_files) def check_core_dumps(reason=''): @@ -553,15 +518,13 @@ def check_unowned_files(reason=''): unowned_files = _execute_shell_command("df --local -P | awk 'NR!=1 {print $6}' | xargs -I '{}' find '{}' -xdev -nouser 2>/dev/null").strip() unowned_files = unowned_files.split('\n') if unowned_files != "" else [] - # The command above only searches local filesystems, there may still be compromised items on network mounted partitions. 
+ # The command above only searches local filesystems, there may still be compromised items on network + # mounted partitions. # Following command will check each partition for unowned files unowned_partition_files = _execute_shell_command("mount | awk '{print $3}' | xargs -I '{}' find '{}' -xdev -nouser 2>/dev/null").strip() unowned_partition_files = unowned_partition_files.split('\n') if unowned_partition_files != "" else [] unowned_files = unowned_files + unowned_partition_files - if unowned_files == []: - return True - - return str(list(set(unowned_files))) + return True if unowned_files == [] else str(list(set(unowned_files))) def check_ungrouped_files(reason=''): @@ -571,15 +534,13 @@ def check_ungrouped_files(reason=''): ungrouped_files = _execute_shell_command("df --local -P | awk 'NR!=1 {print $6}' | xargs -I '{}' find '{}' -xdev -nogroup 2>/dev/null").strip() ungrouped_files = ungrouped_files.split('\n') if ungrouped_files != "" else [] - # The command above only searches local filesystems, there may still be compromised items on network mounted partitions. + # The command above only searches local filesystems, there may still be compromised items on network + # mounted partitions. 
# Following command will check each partition for unowned files ungrouped_partition_files = _execute_shell_command("mount | awk '{print $3}' | xargs -I '{}' find '{}' -xdev -nogroup 2>/dev/null").strip() ungrouped_partition_files = ungrouped_partition_files.split('\n') if ungrouped_partition_files != "" else [] ungrouped_files = ungrouped_files + ungrouped_partition_files - if ungrouped_files == []: - return True - - return str(list(set(ungrouped_files))) + return True if ungrouped_files == [] else str(list(set(ungrouped_files))) def check_all_users_home_directory(max_system_uid): @@ -596,14 +557,11 @@ def check_all_users_home_directory(max_system_uid): if len(user_uid_dir) < 3: user_uid_dir = user_uid_dir + ['']*(3-len(user_uid_dir)) if user_uid_dir[1].isdigit(): - if not _is_valid_home_directory(user_uid_dir[2], True) and int(user_uid_dir[1]) >= max_system_uid and user_uid_dir[0] is not "nfsnobody": + if not _is_valid_home_directory(user_uid_dir[2], True) and int(user_uid_dir[1]) >= max_system_uid and user_uid_dir[0] != "nfsnobody": error += ["Either home directory " + user_uid_dir[2] + " of user " + user_uid_dir[0] + " is invalid or does not exist."] else: error += ["User " + user_uid_dir[0] + " has invalid uid " + user_uid_dir[1]] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_home_directory_permissions(reason=''): @@ -623,10 +581,7 @@ def check_users_home_directory_permissions(reason=''): if result is not True: error += ["permission on home directory " + user_dir[1] + " of user " + user_dir[0] + " is wrong: " + result] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_own_their_home(max_system_uid): @@ -647,17 +602,14 @@ def check_users_own_their_home(max_system_uid): if not _is_valid_home_directory(user_uid_dir[2]): if int(user_uid_dir[1]) >= max_system_uid: error += ["Either home directory " + user_uid_dir[2] + " of user " + 
user_uid_dir[0] + " is invalid or does not exist."] - elif int(user_uid_dir[1]) >= max_system_uid and user_uid_dir[0] is not "nfsnobody": + elif int(user_uid_dir[1]) >= max_system_uid and user_uid_dir[0] != "nfsnobody": owner = _execute_shell_command("stat -L -c \"%U\" \"" + user_uid_dir[2] + "\"") if owner != user_uid_dir[0]: error += ["The home directory " + user_uid_dir[2] + " of user " + user_uid_dir[0] + " is owned by " + owner] else: error += ["User " + user_uid_dir[0] + " has invalid uid " + user_uid_dir[1]] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_dot_files(reason=''): @@ -685,10 +637,7 @@ def check_users_dot_files(reason=''): if file_permission[2] in ["2", "3", "6", "7"]: error += ["Other Write permission set on file " + dot_file + " for user " + user_dir[0]] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_forward_files(reason=''): @@ -708,10 +657,7 @@ def check_users_forward_files(reason=''): if forward_file is not None and os.path.isfile(forward_file): error += ["Home directory: " + user_dir[1] + ", for user: " + user_dir[0] + " has " + forward_file + " file"] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_users_netrc_files(reason=''): @@ -731,10 +677,7 @@ def check_users_netrc_files(reason=''): if netrc_file is not None and os.path.isfile(netrc_file): error += ["Home directory: " + user_dir[1] + ", for user: " + user_dir[0] + " has .netrc file"] - if error == []: - return True - - return str(error) + return True if error == [] else str(error) def check_groups_validity(reason=''): @@ -751,10 +694,8 @@ def check_groups_validity(reason=''): if str(group_presence_validity) != "0": invalid_groups += ["Invalid groupid: " + group_id + " in /etc/passwd file"] - if invalid_groups == []: - return True + return True if invalid_groups == [] else str(invalid_groups) 
- return str(invalid_groups) def ensure_reverse_path_filtering(reason=''): ''' @@ -767,7 +708,7 @@ def ensure_reverse_path_filtering(reason=''): error_list.append( "net.ipv4.conf.all.rp_filter not found") search_results = re.findall("rp_filter = (\d+)",output) result = int(search_results[0]) - if( result < 1): + if result < 1: error_list.append( "net.ipv4.conf.all.rp_filter value set to " + str(result)) command = "sysctl net.ipv4.conf.default.rp_filter 2> /dev/null" output = _execute_shell_command(command) @@ -775,7 +716,7 @@ def ensure_reverse_path_filtering(reason=''): error_list.append( "net.ipv4.conf.default.rp_filter not found") search_results = re.findall("rp_filter = (\d+)",output) result = int(search_results[0]) - if( result < 1): + if result < 1: error_list.append( "net.ipv4.conf.default.rp_filter value set to " + str(result)) if len(error_list) > 0 : return str(error_list) @@ -783,6 +724,225 @@ def ensure_reverse_path_filtering(reason=''): return True +def check_users_rhosts_files(reason=''): + ''' + Ensure no users have .rhosts files + ''' + + users_dirs = _execute_shell_command("cat /etc/passwd | egrep -v '(root|halt|sync|shutdown)' | awk -F: '($7 != \"/sbin/nologin\") {print $1\" \"$6}'").strip() + users_dirs = users_dirs.split('\n') if users_dirs != "" else [] + error = [] + for user_dir in users_dirs: + user_dir = user_dir.split() + if len(user_dir) < 2: + user_dir = user_dir + ['']*(2-len(user_dir)) + if _is_valid_home_directory(user_dir[1]): + rhosts_file = _execute_shell_command("find " + user_dir[1] + " -maxdepth 1 -name \".rhosts\"").strip() + if rhosts_file is not None and os.path.isfile(rhosts_file): + error += ["Home directory: " + user_dir[1] + ", for user: " + user_dir[0] + " has .rhosts file"] + return True if error == [] else str(error) + + +def check_netrc_files_accessibility(reason=''): + ''' + Ensure users' .netrc Files are not group or world accessible + ''' + + script = """ + for dir in `cat /etc/passwd | egrep -v 
'(root|sync|halt|shutdown)' | awk -F: '($7 != "/sbin/nologin") { print $6 }'`; do + for file in $dir/.netrc; do + if [ ! -h "$file" -a -f "$file" ]; then + fileperm=`ls -ld $file | cut -f1 -d" "` + if [ `echo $fileperm | cut -c5` != "-" ]; then + echo "Group Read set on $file" + fi + if [ `echo $fileperm | cut -c6` != "-" ]; then + echo "Group Write set on $file" + fi + if [ `echo $fileperm | cut -c7` != "-" ]; then + echo "Group Execute set on $file" + fi + if [ `echo $fileperm | cut -c8` != "-" ]; then + echo "Other Read set on $file" + fi + if [ `echo $fileperm | cut -c9` != "-" ]; then + echo "Other Write set on $file" + fi + if [ `echo $fileperm | cut -c10` != "-" ]; then + echo "Other Execute set on $file" + fi + fi + done + done + + """ + output = _execute_shell_command(script) + return True if output.strip() == '' else output + + +def _grep(path, + pattern, + *args): + ''' + Grep for a string in the specified file + + .. note:: + This function's return value is slated for refinement in future + versions of Salt + + path + Path to the file to be searched + + .. note:: + Globbing is supported (i.e. ``/var/log/foo/*.log``, but if globbing + is being used then the path should be quoted to keep the shell from + attempting to expand the glob expression. + + pattern + Pattern to match. For example: ``test``, or ``a[0-5]`` + + opts + Additional command-line flags to pass to the grep command. For example: + ``-v``, or ``-i -B2`` + + .. note:: + The options should come after a double-dash (as shown in the + examples below) to keep Salt's own argument parser from + interpreting them. + + CLI Example: + + .. 
code-block:: bash

+        salt '*' file.grep /etc/passwd nobody
+        salt '*' file.grep /etc/sysconfig/network-scripts/ifcfg-eth0 ipaddr -- -i
+        salt '*' file.grep /etc/sysconfig/network-scripts/ifcfg-eth0 ipaddr -- -i -B2
+        salt '*' file.grep "/etc/sysconfig/network-scripts/*" ipaddr -- -i -l
+    '''
+    path = os.path.expanduser(path)
+
+    if args:
+        options = ' '.join(args)
+    else:
+        options = ''
+    cmd = (
+        r'''grep {options} {pattern} {path}'''
+        .format(
+            options=options,
+            pattern=pattern,
+            path=path,
+        )
+    )
+
+    try:
+        ret = __salt__['cmd.run_all'](cmd, python_shell=False, ignore_retcode=True)
+    except (IOError, OSError) as exc:
+        raise CommandExecutionError(exc.strerror)
+
+    return ret
+
+
+def check_list_values(file_path, match_pattern, value_pattern, grep_arg, white_list, black_list, value_delimter):
+    '''
+    This function will first get the line matching the given match_pattern.
+    After this, the value pattern will be extracted from that line.
+    The extracted value will be split by value_delimter to get the list of values.
+    match_pattern will be a regex pattern for the grep command.
+    value_pattern will be a regex for python's re module to get the matched values.
+    Only one of white_list and black_list is allowed.
+    white_list and black_list should have comma(,) separated values.
+
+    Example for CIS-2.2.1.2
+    ensure_ntp_configured:
+      data:
+        CentOS Linux-7:
+         tag: 2.2.1.2
+         function: check_list_values
+         args:
+           - /etc/ntp.conf
+           - '^restrict.*default'
+           - '^restrict.*default(.*)$'
+           - null
+           - kod,nomodify,notrap,nopeer,noquery
+           - null
+           - ' '
+         description: Ensure ntp is configured
+    '''
+
+    list_delimter = ","
+
+    if black_list is not None and white_list is not None:
+        return "Only one of black_list and white_list is allowed."
+ + grep_args = [] if grep_arg is None else [grep_arg] + matched_lines = _grep(file_path, match_pattern, *grep_args).get('stdout') + if not matched_lines: + return "No match found for the given pattern: " + str(match_pattern) + + matched_lines = matched_lines.split('\n') if matched_lines is not None else [] + error = [] + for matched_line in matched_lines: + regexp = re.compile(value_pattern) + matched_values = regexp.search(matched_line).group(1) + matched_values = matched_values.strip().split(value_delimter) if matched_values is not None else [] + if white_list is not None: + values_not_in_white_list = list(set(matched_values) - set(white_list.strip().split(list_delimter))) + if values_not_in_white_list != []: + error += ["values not in whitelist: " + str(values_not_in_white_list)] + else: + values_in_black_list = list(set(matched_values).intersection(set(black_list.strip().split(list_delimter)))) + if values_in_black_list != []: + error += ["values in blacklist: " + str(values_in_black_list)] + + return True if error == [] else str(error) + + +def mail_conf_check(reason=''): + ''' + Ensure mail transfer agent is configured for local-only mode + ''' + valid_addresses = ["localhost", "127.0.0.1", "::1"] + mail_addresses = _execute_shell_command("grep '^[[:blank:]]*inet_interfaces' /etc/postfix/main.cf | awk -F'=' '{print $2}'").strip() + mail_addresses = mail_addresses.split(',') if mail_addresses != "" else [] + mail_addresses = map(str.strip, mail_addresses) + invalid_addresses = list(set(mail_addresses) - set(valid_addresses)) + + return str(invalid_addresses) if invalid_addresses != [] else True + +def check_if_any_pkg_installed(args): + ''' + :param args: Comma separated list of packages those needs to be verified + :return: True if any of the input package is installed else False + ''' + result = False + for pkg in args.split(','): + if __salt__['pkg.version'](pkg): + result = True + break + return result + + +def test_success(): + ''' + Automatically 
returns success + ''' + return True + + +def test_failure(): + ''' + Automatically returns failure, no reason + ''' + return False + + +def test_failure_reason(reason): + ''' + Automatically returns failure, with a reason (first arg) + ''' + return reason + + FUNCTION_MAP = { 'check_all_ports_firewall_rules': check_all_ports_firewall_rules, 'check_password_fields_not_empty': check_password_fields_not_empty, @@ -818,4 +978,10 @@ def ensure_reverse_path_filtering(reason=''): 'check_users_netrc_files': check_users_netrc_files, 'check_groups_validity': check_groups_validity, 'ensure_reverse_path_filtering': ensure_reverse_path_filtering, + 'check_users_rhosts_files': check_users_rhosts_files, + 'check_netrc_files_accessibility': check_netrc_files_accessibility, + 'check_list_values': check_list_values, + 'mail_conf_check': mail_conf_check, + 'check_if_any_pkg_installed':check_if_any_pkg_installed, + } diff --git a/hubblestack/files/hubblestack_nova/stat_nova.py b/hubblestack/files/hubblestack_nova/stat_nova.py index 995d792af..cbd382489 100644 --- a/hubblestack/files/hubblestack_nova/stat_nova.py +++ b/hubblestack/files/hubblestack_nova/stat_nova.py @@ -28,6 +28,8 @@ - '/etc/grub2/grub.cfg': tag: 'CIS-1.5.1' user: 'root' + mode: 644 + allow_more_strict: True # file permissions can be 644 or more strict [default = False ] uid: 0 group: 'root' gid: 0 @@ -83,10 +85,18 @@ def audit(data_list, tags, debug=False, **kwargs): continue name = tag_data['name'] expected = {} - for e in ['mode', 'user', 'uid', 'group', 'gid']: + for e in ['mode', 'user', 'uid', 'group', 'gid','allow_more_strict']: if e in tag_data: expected[e] = tag_data[e] + if 'allow_more_strict' in expected.keys() and 'mode' not in expected.keys(): + reason_dict = {} + reason = "'allow_more_strict' tag can't be specified without mode 'tag'" + reason_dict['allow_more_strict'] = reason + tag_data['reason'] = reason_dict + ret['Failure'].append(tag_data) + continue + #getting the stats using salt salt_ret = 
__salt__['file.stats'](name) if not salt_ret: @@ -99,14 +109,36 @@ def audit(data_list, tags, debug=False, **kwargs): passed = True reason_dict = {} for e in expected.keys(): + if e == 'allow_more_strict': + continue r = salt_ret[e] - if e == 'mode' and r != '0': - r = r[1:] - if str(expected[e]) != str(r): - passed = False - reason = { 'expected': str(expected[e]), - 'current': str(r) } - reason_dict[e] = reason + + if e == 'mode': + if r != '0': + r = r[1:] + allow_more_strict = False + if 'allow_more_strict' in expected.keys(): + allow_more_strict = expected['allow_more_strict'] + if not isinstance(allow_more_strict, bool ): + passed = False + reason = (str(allow_more_strict) + ' is not a valid boolean') + reason_dict[e] = reason + + else: + subcheck_passed = _check_mode(str(expected[e]), str(r), allow_more_strict) + if not subcheck_passed: + passed = False + reason = { 'expected': str(expected[e]), + 'allow_more_strict': str(allow_more_strict), + 'current': str(r) } + reason_dict[e] = reason + else: + subcheck_passed = (str(expected[e]) == str(r)) + if not subcheck_passed: + passed = False + reason = { 'expected': str(expected[e]), + 'current': str(r) } + reason_dict[e] = reason if reason_dict: tag_data['reason'] = reason_dict @@ -177,3 +209,75 @@ def _get_tags(data): formatted_data.pop('data') ret[tag].append(formatted_data) return ret + +def _check_mode( max_permission , given_permission, allow_more_strict): + ''' + Checks whether a file's permission are equal to a given permission or more restrictive. + Permission is a string of 3 digits [0-7]. 'given_permission' is the actual permission on file, + 'max_permission' is the expected permission on this file. Set 'allow_more_strict' to True, + to allow more restrictive permissions as well. 
Example: + + _check_mode('644', '644', False) returns True + _check_mode('644', '600', False) returns False + _check_mode('644', '644', True) returns True + _check_mode('644', '600', True) returns True + _check_mode('644', '655', True) returns False + + ''' + + if given_permission == '0': + return True + + if not allow_more_strict: + return (max_permission == given_permission) + + if (_is_permission_in_limit(max_permission[0],given_permission[0]) and _is_permission_in_limit(max_permission[1],given_permission[1]) and _is_permission_in_limit(max_permission[2],given_permission[2])): + return True + + return False + + +def _is_permission_in_limit(max_permission,given_permission): + ''' + Return true only if given_permission is not more linient that max_permission. In other words, if + r or w or x is present in given_permission but absent in max_permission, it should return False + Takes input two integer values from 0 to 7. + ''' + max_permission = int(max_permission) + given_permission = int(given_permission) + allowed_r = False + allowed_w = False + allowed_x = False + given_r = False + given_w = False + given_x = False + + if max_permission >= 4: + allowed_r = True + max_permission = max_permission - 4 + if max_permission >= 2: + allowed_w = True + max_permission = max_permission - 2 + if max_permission >= 1: + allowed_x = True + + if given_permission >= 4: + given_r = True + given_permission = given_permission - 4 + if given_permission >= 2: + given_w = True + given_permission = given_permission - 2 + if given_permission >= 1: + given_x = True + + if given_r and ( not allowed_r ): + return False + if given_w and ( not allowed_w ): + return False + if given_x and ( not allowed_x ): + return False + + return True + + + diff --git a/hubblestack/files/hubblestack_nova/sysctl.py b/hubblestack/files/hubblestack_nova/sysctl.py index 03b35bfcc..8cd89ee91 100644 --- a/hubblestack/files/hubblestack_nova/sysctl.py +++ b/hubblestack/files/hubblestack_nova/sysctl.py @@ -82,6 
+82,7 @@ def audit(data_list, tags, debug=False, **kwargs): if str(salt_ret).startswith('error'): passed = False if str(salt_ret) != str(match_output): + tag_data['failure_reason'] = str(salt_ret) passed = False if passed: ret['Success'].append(tag_data) @@ -148,3 +149,4 @@ def _get_tags(data): formatted_data.pop('data') ret[tag].append(formatted_data) return ret + From b1b6541e0a2853a32e93838d3f889a94ec347cd1 Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Tue, 19 Sep 2017 16:53:59 -0600 Subject: [PATCH 21/35] Remove tabs and trailing whitespace --- .../files/hubblestack_nova/systemctl.py | 31 +++++++++---------- 1 file changed, 15 insertions(+), 16 deletions(-) diff --git a/hubblestack/files/hubblestack_nova/systemctl.py b/hubblestack/files/hubblestack_nova/systemctl.py index db67250d4..ade5e7486 100644 --- a/hubblestack/files/hubblestack_nova/systemctl.py +++ b/hubblestack/files/hubblestack_nova/systemctl.py @@ -61,11 +61,11 @@ def audit(data_list, tags, debug=False, **kwargs): __tags__ = _get_tags(__data__) if debug: - log.debug('systemctl audit __data__:') + log.debug('systemctl audit __data__:') log.debug(__data__) log.debug('systemctl audit __tags__:') log.debug(__tags__) - + ret = {'Success': [], 'Failure': [], 'Controlled': []} for tag in __tags__: if fnmatch.fnmatch(tag, tags): @@ -75,23 +75,23 @@ def audit(data_list, tags, debug=False, **kwargs): continue name = tag_data['name'] audittype = tag_data['type'] - disabled_states = ["disabled", "not_found", "indirect"] + disabled_states = ["disabled", "not_found", "indirect"] status_code, status = _systemctl(name) # Blacklisted service (must not be running or not found) if audittype == 'blacklist': - if status_code == "1" or status in disabled_states: - ret['Success'].append(tag_data) - else: - tag_data["failure_reason"] = "Service Status: " + status + ", return code: " + status_code - ret['Failure'].append(tag_data) + if status_code == "1" or status in disabled_states: + ret['Success'].append(tag_data) + 
else: + tag_data["failure_reason"] = "Service Status: " + status + ", return code: " + status_code + ret['Failure'].append(tag_data) # Whitelisted pattern (must be found and running) elif audittype == 'whitelist': - if status_code == "0": - ret['Success'].append(tag_data) - else: - tag_data["failure_reason"] = "Service Status: " + status + ", return code: " + status_code - ret['Failure'].append(tag_data) + if status_code == "0": + ret['Success'].append(tag_data) + else: + tag_data["failure_reason"] = "Service Status: " + status + ", return code: " + status_code + ret['Failure'].append(tag_data) return ret @@ -162,7 +162,7 @@ def _get_tags(data): formatted_data.update(audit_data) formatted_data.pop('data') ret[tag].append(formatted_data) - + return ret def _execute_shell_command(cmd): @@ -180,6 +180,5 @@ def _systemctl(service_name): output = _execute_shell_command('systemctl is-enabled ' + service_name + ' 2>/dev/null; echo $?').strip() output = output.split('\n') if output != "" else [] if output == []: - return False + return False return (output[1], output[0]) if len(output) == 2 else (output[0], "not_found") - From 64ec69fb07b01b43c12263d61ce34863d0d5a6d0 Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Tue, 19 Sep 2017 16:57:10 -0600 Subject: [PATCH 22/35] Use service.enabled instead of shelling out --- .../files/hubblestack_nova/systemctl.py | 27 +++---------------- 1 file changed, 3 insertions(+), 24 deletions(-) diff --git a/hubblestack/files/hubblestack_nova/systemctl.py b/hubblestack/files/hubblestack_nova/systemctl.py index ade5e7486..7fb48a9f6 100644 --- a/hubblestack/files/hubblestack_nova/systemctl.py +++ b/hubblestack/files/hubblestack_nova/systemctl.py @@ -75,22 +75,19 @@ def audit(data_list, tags, debug=False, **kwargs): continue name = tag_data['name'] audittype = tag_data['type'] - disabled_states = ["disabled", "not_found", "indirect"] - status_code, status = _systemctl(name) + enabled = __salt__['service.enabled'](name) # Blacklisted service 
(must not be running or not found) if audittype == 'blacklist': - if status_code == "1" or status in disabled_states: + if not enabled: ret['Success'].append(tag_data) else: - tag_data["failure_reason"] = "Service Status: " + status + ", return code: " + status_code ret['Failure'].append(tag_data) # Whitelisted pattern (must be found and running) elif audittype == 'whitelist': - if status_code == "0": + if enabled: ret['Success'].append(tag_data) else: - tag_data["failure_reason"] = "Service Status: " + status + ", return code: " + status_code ret['Failure'].append(tag_data) return ret @@ -164,21 +161,3 @@ def _get_tags(data): ret[tag].append(formatted_data) return ret - -def _execute_shell_command(cmd): - ''' - This function will execute passed command in /bin/shell - ''' - return __salt__['cmd.run'](cmd, python_shell=True, shell='/bin/bash', ignore_retcode=True) - - -def _systemctl(service_name): - ''' - Return service status. - Return object will be like ('status_code', 'status') if {service_is_present} else ('status_code',) - ''' - output = _execute_shell_command('systemctl is-enabled ' + service_name + ' 2>/dev/null; echo $?').strip() - output = output.split('\n') if output != "" else [] - if output == []: - return False - return (output[1], output[0]) if len(output) == 2 else (output[0], "not_found") From 818d87dea58923e10c87cdf253e14bffdaeea3ab Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Tue, 19 Sep 2017 17:00:48 -0600 Subject: [PATCH 23/35] Remove trailing whitespace in mount.py --- hubblestack/files/hubblestack_nova/mount.py | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/hubblestack/files/hubblestack_nova/mount.py b/hubblestack/files/hubblestack_nova/mount.py index eb33489f5..b12c9fc9a 100644 --- a/hubblestack/files/hubblestack_nova/mount.py +++ b/hubblestack/files/hubblestack_nova/mount.py @@ -76,17 +76,17 @@ def audit(data_list, tags, debug=False, **kwargs): if 'control' in tag_data: 
ret['Controlled'].append(tag_data) continue - + name = tag_data.get('name') audittype = tag_data.get('type') - + if 'attribute' not in tag_data: log.error('No attribute found for mount audit {0}, file {1}' .format(tag,name)) tag_data = copy.deepcopy(tag_data) - tag_data['error'] = 'No pattern found'.format(mod) + tag_data['error'] = 'No pattern found'.format(mod) ret['Failure'].append(tag_data) continue @@ -195,16 +195,16 @@ def _get_tags(data): def _check_mount_attribute(path,attribute, check_type): ''' - This function checks if the partition at a given path is mounted with a particular attribute or not. - If 'check_type' is 'hard', the function returns False if he specified path does not exist, or if it - is not a mounted partition. If 'check_type' is 'soft', the functions returns True in such cases. + This function checks if the partition at a given path is mounted with a particular attribute or not. + If 'check_type' is 'hard', the function returns False if he specified path does not exist, or if it + is not a mounted partition. If 'check_type' is 'soft', the functions returns True in such cases. 
''' if not os.path.exists(path): if check_type == 'hard': return False else: - return True + return True mount_object = __salt__['mount.active']() @@ -221,5 +221,5 @@ def _check_mount_attribute(path,attribute, check_type): if check_type == 'hard': return False else: - return True + return True From 31e9324ea6d6064c0275f5a418907a2ad4d729c4 Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Tue, 19 Sep 2017 17:02:19 -0600 Subject: [PATCH 24/35] Fix docs --- hubblestack/files/hubblestack_nova/systemctl.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/hubblestack/files/hubblestack_nova/systemctl.py b/hubblestack/files/hubblestack_nova/systemctl.py index 7fb48a9f6..2e1ff6711 100644 --- a/hubblestack/files/hubblestack_nova/systemctl.py +++ b/hubblestack/files/hubblestack_nova/systemctl.py @@ -3,7 +3,7 @@ HubbleStack Nova plugin for using systemctl to verify status of a given service. Supports both blacklisting and whitelisting patterns. Blacklisted services must -not be running. Whitelisted services must be running. +not be enabled. Whitelisted services must be enabled. 
:maintainer: HubbleStack / basepi :maturity: 2017.8.29 From 1a3b1e0490949762c3aebfc4a6a29b035b6a4280 Mon Sep 17 00:00:00 2001 From: Chandler Newby Date: Tue, 19 Sep 2017 17:36:37 -0600 Subject: [PATCH 25/35] Add splunkindex grain --- hubblestack/extmods/grains/splunkindex.py | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) create mode 100644 hubblestack/extmods/grains/splunkindex.py diff --git a/hubblestack/extmods/grains/splunkindex.py b/hubblestack/extmods/grains/splunkindex.py new file mode 100644 index 000000000..83620db21 --- /dev/null +++ b/hubblestack/extmods/grains/splunkindex.py @@ -0,0 +1,17 @@ +# -*- coding: utf-8 -*- + +import salt.modules.config + +__salt__ = {'config.get': salt.modules.config.get} + + +def splunkindex(): + ''' + Return splunk index from config file in grain + ''' + # Provides: + # splunkindex + grains = {} + index = __salt__['config.get']('hubblestack:returner:splunk:index', default='unknown') + grains['splunkindex'] = index + return grains From cc970310d57add7553276727eeb0a36668baf870 Mon Sep 17 00:00:00 2001 From: Anurag Paliwal Date: Wed, 20 Sep 2017 16:55:20 +0530 Subject: [PATCH 26/35] Making changes as per review --- hubblestack/files/hubblestack_nova/misc.py | 2 +- hubblestack/files/hubblestack_nova/stat_nova.py | 14 ++++++-------- hubblestack/files/hubblestack_nova/sysctl.py | 3 +-- 3 files changed, 8 insertions(+), 11 deletions(-) diff --git a/hubblestack/files/hubblestack_nova/misc.py b/hubblestack/files/hubblestack_nova/misc.py index 975f95777..dffe9db1e 100644 --- a/hubblestack/files/hubblestack_nova/misc.py +++ b/hubblestack/files/hubblestack_nova/misc.py @@ -188,7 +188,7 @@ def _is_valid_home_directory(directory_path, check_slash_home=False): def _is_permission_in_limit(max_permission,given_permission): ''' - Return true only if given_permission is not more linient that max_permission. In other words, if + Return true only if given_permission is not more lenient that max_permission. 
In other words, if
     r or w or x is present in given_permission but absent in max_permission, it should return False
     Takes input two integer values from 0 to 7.
     '''
diff --git a/hubblestack/files/hubblestack_nova/stat_nova.py b/hubblestack/files/hubblestack_nova/stat_nova.py
index cbd382489..7ea2da4ba 100644
--- a/hubblestack/files/hubblestack_nova/stat_nova.py
+++ b/hubblestack/files/hubblestack_nova/stat_nova.py
@@ -85,13 +85,13 @@ def audit(data_list, tags, debug=False, **kwargs):
             continue
         name = tag_data['name']
         expected = {}
-        for e in ['mode', 'user', 'uid', 'group', 'gid','allow_more_strict']:
+        for e in ['mode', 'user', 'uid', 'group', 'gid', 'allow_more_strict']:
             if e in tag_data:
                 expected[e] = tag_data[e]

         if 'allow_more_strict' in expected.keys() and 'mode' not in expected.keys():
             reason_dict = {}
-            reason = "'allow_more_strict' tag can't be specified without mode 'tag'"
+            reason = "'allow_more_strict' tag can't be specified without 'mode' tag"
             reason_dict['allow_more_strict'] = reason
             tag_data['reason'] = reason_dict
             ret['Failure'].append(tag_data)
@@ -210,7 +210,8 @@ def _get_tags(data):
         ret[tag].append(formatted_data)
     return ret

-def _check_mode( max_permission , given_permission, allow_more_strict):
+
+def _check_mode(max_permission, given_permission, allow_more_strict):
     '''
     Checks whether a file's permission are equal to a given permission or more restrictive.
     Permission is a string of 3 digits [0-7]. 'given_permission' is the actual permission on file,
@@ -239,7 +240,7 @@ def _check_mode( max_permission , given_permission, allow_more_strict):

 def _is_permission_in_limit(max_permission,given_permission):
     '''
-    Return true only if given_permission is not more linient that max_permission. In other words, if
+    Return true only if given_permission is not more lenient than max_permission. In other words, if
     r or w or x is present in given_permission but absent in max_permission, it should return False
     Takes input two integer values from 0 to 7.
''' @@ -277,7 +278,4 @@ def _is_permission_in_limit(max_permission,given_permission): if given_x and ( not allowed_x ): return False - return True - - - + return True \ No newline at end of file diff --git a/hubblestack/files/hubblestack_nova/sysctl.py b/hubblestack/files/hubblestack_nova/sysctl.py index 8cd89ee91..84f608ac0 100644 --- a/hubblestack/files/hubblestack_nova/sysctl.py +++ b/hubblestack/files/hubblestack_nova/sysctl.py @@ -148,5 +148,4 @@ def _get_tags(data): formatted_data.update(audit_data) formatted_data.pop('data') ret[tag].append(formatted_data) - return ret - + return ret \ No newline at end of file From 8800c949a8565bc96f23892c66121e5cb3609ca8 Mon Sep 17 00:00:00 2001 From: Chandler Newby Date: Wed, 20 Sep 2017 18:22:15 -0600 Subject: [PATCH 27/35] Update splunkindex grain --- hubblestack/extmods/grains/splunkindex.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/hubblestack/extmods/grains/splunkindex.py b/hubblestack/extmods/grains/splunkindex.py index 83620db21..7f60e0923 100644 --- a/hubblestack/extmods/grains/splunkindex.py +++ b/hubblestack/extmods/grains/splunkindex.py @@ -2,6 +2,9 @@ import salt.modules.config +salt.modules.config.__pillar__ = {} +salt.modules.config.__grains__ = {} + __salt__ = {'config.get': salt.modules.config.get} @@ -11,6 +14,7 @@ def splunkindex(): ''' # Provides: # splunkindex + salt.modules.config.__opts__ = __opts__ grains = {} index = __salt__['config.get']('hubblestack:returner:splunk:index', default='unknown') grains['splunkindex'] = index From 9ba688b6bccd0fe35f81f3f3c779a2a403c76ee3 Mon Sep 17 00:00:00 2001 From: Chandler Newby Date: Fri, 22 Sep 2017 14:28:40 -0600 Subject: [PATCH 28/35] Change to config-defined grains --- hubblestack/extmods/grains/configgrains.py | 25 ++++++++++++++++++++++ hubblestack/extmods/grains/splunkindex.py | 21 ------------------ 2 files changed, 25 insertions(+), 21 deletions(-) create mode 100644 hubblestack/extmods/grains/configgrains.py delete mode 100644 
hubblestack/extmods/grains/splunkindex.py diff --git a/hubblestack/extmods/grains/configgrains.py b/hubblestack/extmods/grains/configgrains.py new file mode 100644 index 000000000..16f3c1a73 --- /dev/null +++ b/hubblestack/extmods/grains/configgrains.py @@ -0,0 +1,25 @@ +# -*- coding: utf-8 -*- + +import salt.modules.config + +salt.modules.config.__pillar__ = {} +salt.modules.config.__grains__ = {} + +__salt__ = {'config.get': salt.modules.config.get} + + +def splunkindex(): + ''' + Given a list of config values, create custom grains with custom names. + The list comes from config. + ''' + grains = {} + salt.modules.config.__opts__ = __opts__ + + grains_to_make = __salt__['config.get']('hubblestack:grains', default=[]) + for grain in grains_to_make: + for k, v in grain.iteritems(): + grain_value = __salt__['config.get'](v, default=None) + if grain_value: + grains[k] = grain_value + return grains diff --git a/hubblestack/extmods/grains/splunkindex.py b/hubblestack/extmods/grains/splunkindex.py deleted file mode 100644 index 7f60e0923..000000000 --- a/hubblestack/extmods/grains/splunkindex.py +++ /dev/null @@ -1,21 +0,0 @@ -# -*- coding: utf-8 -*- - -import salt.modules.config - -salt.modules.config.__pillar__ = {} -salt.modules.config.__grains__ = {} - -__salt__ = {'config.get': salt.modules.config.get} - - -def splunkindex(): - ''' - Return splunk index from config file in grain - ''' - # Provides: - # splunkindex - salt.modules.config.__opts__ = __opts__ - grains = {} - index = __salt__['config.get']('hubblestack:returner:splunk:index', default='unknown') - grains['splunkindex'] = index - return grains From 7511fc239452277334da14dee806fb4f6dd50805 Mon Sep 17 00:00:00 2001 From: Chandler Newby Date: Fri, 22 Sep 2017 14:30:00 -0600 Subject: [PATCH 29/35] Rename function --- hubblestack/extmods/grains/configgrains.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/hubblestack/extmods/grains/configgrains.py 
b/hubblestack/extmods/grains/configgrains.py index 16f3c1a73..7e2fc37d6 100644 --- a/hubblestack/extmods/grains/configgrains.py +++ b/hubblestack/extmods/grains/configgrains.py @@ -8,7 +8,7 @@ __salt__ = {'config.get': salt.modules.config.get} -def splunkindex(): +def configgrains(): ''' Given a list of config values, create custom grains with custom names. The list comes from config. @@ -23,3 +23,4 @@ def splunkindex(): if grain_value: grains[k] = grain_value return grains + \ No newline at end of file From 6949ccc393f86b09c45b7674ecda16f3b3921f6b Mon Sep 17 00:00:00 2001 From: Chandler Newby Date: Fri, 22 Sep 2017 14:30:38 -0600 Subject: [PATCH 30/35] Extra space --- hubblestack/extmods/grains/configgrains.py | 1 - 1 file changed, 1 deletion(-) diff --git a/hubblestack/extmods/grains/configgrains.py b/hubblestack/extmods/grains/configgrains.py index 7e2fc37d6..90b480f23 100644 --- a/hubblestack/extmods/grains/configgrains.py +++ b/hubblestack/extmods/grains/configgrains.py @@ -23,4 +23,3 @@ def configgrains(): if grain_value: grains[k] = grain_value return grains - \ No newline at end of file From 0c026de54820e22179c36e663fece664c305745e Mon Sep 17 00:00:00 2001 From: Chandler Newby Date: Mon, 25 Sep 2017 16:19:54 -0600 Subject: [PATCH 31/35] Add docstring --- hubblestack/extmods/grains/configgrains.py | 29 ++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/hubblestack/extmods/grains/configgrains.py b/hubblestack/extmods/grains/configgrains.py index 90b480f23..48cba419d 100644 --- a/hubblestack/extmods/grains/configgrains.py +++ b/hubblestack/extmods/grains/configgrains.py @@ -1,4 +1,28 @@ # -*- coding: utf-8 -*- +''' +Custom config-defined grains module + +:maintainer: HubbleStack +:platform: All +:requires: SaltStack + +Allow users to collect a list of config directives and set them as custom grains. +The list should be defined under the `hubblestack` key. + +The `grains` value should be a list of dictionaries. 
Each dictionary should have +a single key which will be set as the grain name. The dictionary's value is the +config path whose value will be fetched and set as the grain's value. + +hubblestack: + grains: + - splunkindex: "hubblestack:returner:splunk:index" + returner: + splunk: + - token: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX + indexer: splunk-indexer.domain.tld + index: hubble + sourcetype_nova: hubble_audit +''' import salt.modules.config @@ -12,6 +36,11 @@ def configgrains(): ''' Given a list of config values, create custom grains with custom names. The list comes from config. + + Example: + hubblestack: + grains: + - splunkindex: "hubblestack:returner:splunk:index" ''' grains = {} salt.modules.config.__opts__ = __opts__ From 4036280d97c7b41583bb5cb2a4a402292ce96762 Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Mon, 25 Sep 2017 16:12:36 -0600 Subject: [PATCH 32/35] Remove trailing whitespace --- hubblestack/files/hubblestack_nova/misc.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/hubblestack/files/hubblestack_nova/misc.py b/hubblestack/files/hubblestack_nova/misc.py index dffe9db1e..9cdc36314 100644 --- a/hubblestack/files/hubblestack_nova/misc.py +++ b/hubblestack/files/hubblestack_nova/misc.py @@ -906,7 +906,7 @@ def mail_conf_check(reason=''): mail_addresses = mail_addresses.split(',') if mail_addresses != "" else [] mail_addresses = map(str.strip, mail_addresses) invalid_addresses = list(set(mail_addresses) - set(valid_addresses)) - + return str(invalid_addresses) if invalid_addresses != [] else True def check_if_any_pkg_installed(args): From cb500bb6f04ab211fec319d6178de7123573b5b9 Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Mon, 25 Sep 2017 16:39:30 -0600 Subject: [PATCH 33/35] Lint + remove command injection vulnerabilities --- hubblestack/files/hubblestack_nova/misc.py | 31 ++++++++++++---------- 1 file changed, 17 insertions(+), 14 deletions(-) diff --git a/hubblestack/files/hubblestack_nova/misc.py b/hubblestack/files/hubblestack_nova/misc.py index
9cdc36314..6e11b8b27 100644 --- a/hubblestack/files/hubblestack_nova/misc.py +++ b/hubblestack/files/hubblestack_nova/misc.py @@ -313,26 +313,29 @@ def root_is_only_uid_0_account(reason=''): return True if result.strip() == 'root' else result -def test_mount_attrs(mount_name,attribute,check_type='hard'): +def test_mount_attrs(mount_name, attribute, check_type='hard'): ''' Ensure that a given directory is mounted with appropriate attributes If check_type is soft, then in absence of volume, True will be returned If check_type is hard, then in absence of volume, False will be returned ''' # check that the path exists on system - command = 'test -e ' + mount_name + ' ; echo $?' - output = _execute_shell_command( command) - if output.strip() == '1': + command = 'test -e ' + mount_name + results = __salt__['cmd.run_all'](command) + output = results['stdout'] + retcode = results['retcode'] + if str(retcode) == '1': return True if check_type == "soft" else (mount_name + " folder does not exist") # if the path exists, proceed with following code - output = _execute_shell_command('mount | grep ' + mount_name) - if output.strip() == '': + output = __salt__['cmd.run']('mount') + if mount_name not in output: return True if check_type == "soft" else (mount_name + " is not mounted") - elif attribute not in output: - return str(output) else: - return True + for line in output.splitlines(): + if mount_name in line and attribute not in line: + return str(line) + return True def check_time_synchronization(reason=''): @@ -454,7 +457,7 @@ def check_duplicate_gnames(reason=''): return str(duplicate_gnames) -def check_directory_files_permission(path,permission): +def check_directory_files_permission(path, permission): ''' Check all files permission inside a directory ''' @@ -489,11 +492,11 @@ def check_service_status(service_name, state): Return True otherwise state can be enabled or disabled.
''' - output = _execute_shell_command('systemctl is-enabled ' + service_name + ' >/dev/null 2>&1; echo $?') - if (state == "disabled" and output.strip() == "1") or (state == "enabled" and output.strip() == "0"): + output = __salt__['cmd.retcode']('systemctl is-enabled ' + service_name) + if (state == "disabled" and str(output) == "1") or (state == "enabled" and str(output) == "0"): return True else: - return _execute_shell_command('systemctl is-enabled ' + service_name + ' 2>/dev/null') + return __salt__['cmd.run_stdout']('systemctl is-enabled ' + service_name) def check_ssh_timeout_config(reason=''): ''' @@ -603,7 +606,7 @@ def check_users_own_their_home(max_system_uid): if int(user_uid_dir[1]) >= max_system_uid: error += ["Either home directory " + user_uid_dir[2] + " of user " + user_uid_dir[0] + " is invalid or does not exist."] elif int(user_uid_dir[1]) >= max_system_uid and user_uid_dir[0] != "nfsnobody": - owner = _execute_shell_command("stat -L -c \"%U\" \"" + user_uid_dir[2] + "\"") + owner = __salt__['cmd.run']("stat -L -c \"%U\" \"" + user_uid_dir[2] + "\"") if owner != user_uid_dir[0]: error += ["The home directory " + user_uid_dir[2] + " of user " + user_uid_dir[0] + " is owned by " + owner] else: From f2b94f99b202fc6f2309e114bf2fcc5fcad31400 Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Mon, 25 Sep 2017 17:20:11 -0600 Subject: [PATCH 34/35] Fix pulsar.top --- hubblestack/extmods/modules/pulsar.py | 2 +- hubblestack/extmods/modules/win_pulsar.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/hubblestack/extmods/modules/pulsar.py b/hubblestack/extmods/modules/pulsar.py index 7f2f6cf76..72d6d5bab 100644 --- a/hubblestack/extmods/modules/pulsar.py +++ b/hubblestack/extmods/modules/pulsar.py @@ -393,7 +393,7 @@ def _dict_update(dest, upd, recursive_update=True, merge_lists=False): dest_subkey = None if isinstance(dest_subkey, collections.Mapping) \ and isinstance(val, collections.Mapping): - ret = update(dest_subkey, val, 
merge_lists=merge_lists) + ret = _dict_update(dest_subkey, val, merge_lists=merge_lists) dest[key] = ret elif isinstance(dest_subkey, list) \ and isinstance(val, list): diff --git a/hubblestack/extmods/modules/win_pulsar.py b/hubblestack/extmods/modules/win_pulsar.py index 6c1c2f78a..6d447bfa1 100644 --- a/hubblestack/extmods/modules/win_pulsar.py +++ b/hubblestack/extmods/modules/win_pulsar.py @@ -545,7 +545,7 @@ def _dict_update(dest, upd, recursive_update=True, merge_lists=False): dest_subkey = None if isinstance(dest_subkey, collections.Mapping) \ and isinstance(val, collections.Mapping): - ret = update(dest_subkey, val, merge_lists=merge_lists) + ret = _dict_update(dest_subkey, val, merge_lists=merge_lists) dest[key] = ret elif isinstance(dest_subkey, list) \ and isinstance(val, list): From 598fbad20a73e4b96f4c12dde8f48c2d9ddf6f93 Mon Sep 17 00:00:00 2001 From: Colton Myers Date: Tue, 26 Sep 2017 14:08:58 -0600 Subject: [PATCH 35/35] Rev to v2.2.5 (and get rid of old build processes) --- hubblestack/__init__.py | 2 +- pkg/amazonlinux2016.09/Dockerfile | 4 +- pkg/amazonlinux2017.03/Dockerfile | 4 +- pkg/build_debs.sh | 47 ---------------- pkg/build_rpms.sh | 44 --------------- pkg/centos6/Dockerfile | 4 +- pkg/centos7/Dockerfile | 4 +- pkg/coreos/Dockerfile | 4 +- pkg/debian7/Dockerfile | 4 +- pkg/debian8/Dockerfile | 4 +- pkg/debian9/Dockerfile | 4 +- pkg/specs/hubblestack-el6.spec | 92 ------------------------------- pkg/specs/hubblestack-el7.spec | 92 ------------------------------- 13 files changed, 17 insertions(+), 292 deletions(-) delete mode 100755 pkg/build_debs.sh delete mode 100755 pkg/build_rpms.sh delete mode 100644 pkg/specs/hubblestack-el6.spec delete mode 100644 pkg/specs/hubblestack-el7.spec diff --git a/hubblestack/__init__.py b/hubblestack/__init__.py index 83b6ab028..3db1b9fed 100644 --- a/hubblestack/__init__.py +++ b/hubblestack/__init__.py @@ -1 +1 @@ -__version__ = '2.2.4' +__version__ = '2.2.5' diff --git 
a/pkg/amazonlinux2016.09/Dockerfile b/pkg/amazonlinux2016.09/Dockerfile index 4fc65f8c1..c4a4b73bf 100644 --- a/pkg/amazonlinux2016.09/Dockerfile +++ b/pkg/amazonlinux2016.09/Dockerfile @@ -91,8 +91,8 @@ RUN yum install -y ruby ruby-devel rpmbuild rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/amazonlinux2017.03/Dockerfile b/pkg/amazonlinux2017.03/Dockerfile index a083fc0c5..6f9046209 100644 --- a/pkg/amazonlinux2017.03/Dockerfile +++ b/pkg/amazonlinux2017.03/Dockerfile @@ -91,8 +91,8 @@ RUN yum install -y ruby ruby-devel rpmbuild rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/build_debs.sh b/pkg/build_debs.sh deleted file mode 100755 index 4498bd6c9..000000000 --- a/pkg/build_debs.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/bin/bash - -set -x # echo on - -_user=`id -u` - -# Check if the current user is root -if [ "$_user" == "0" ] -then - echo "This script should not be run as root ..." - echo "Please run this script as regular user with sudo privileges ..." - echo "Exiting ..." 
- exit -fi - -rm -rf build -rm -rf dist - -mkdir -p build -mkdir -p dist - -bash ./init_pkg.sh -y -cp ../hubble.tar.gz dist/hubble.tar.gz -mv ../hubble.tar.gz build/hubble.tar.gz -mkdir build/hubblestack-2.2.4 -tar -xzvf build/hubble.tar.gz -C build/hubblestack-2.2.4 -mkdir -p build/hubblestack-2.2.4/etc/init.d -cp ./hubble build/hubblestack-2.2.4/etc/init.d -mkdir -p build/hubblestack-2.2.4/usr/lib/systemd/system -cp ./hubble.service build/hubblestack-2.2.4/usr/lib/systemd/system -cp -f ../conf/hubble build/hubblestack-2.2.4/etc/hubble/hubble -cd build/hubblestack-2.2.4 - -sudo apt-get install -y ruby ruby-dev rubygems gcc make -sudo gem install --no-ri --no-rdoc fpm -mkdir -p usr/bin -ln -s /opt/hubble/hubble usr/bin/hubble -ln -s /opt/osquery/osqueryd usr/bin/osqueryd -ln -s /opt/osquery/osqueryi usr/bin/osqueryi -fpm -s dir -t deb \ - -n hubblestack \ - -v 2.2.4-1 \ - -d 'git' \ - --config-files /etc/hubble/hubble --config-files /etc/osquery/osquery.conf \ - --deb-no-default-config-files \ - etc/hubble etc/osquery etc/init.d opt usr/bin -cp hubblestack_2.2.4-1_amd64.deb ../../dist/ diff --git a/pkg/build_rpms.sh b/pkg/build_rpms.sh deleted file mode 100755 index 3401fb55b..000000000 --- a/pkg/build_rpms.sh +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash - -set -x # echo on - -_user=`id -u` - -# Check if the current user is root -if [ "$_user" == "0" ] -then - echo "This script should not be run as root ..." - echo "Please run this script as regular user with sudo privileges ..." - echo "Exiting ..." 
- exit -fi - -rm -rf build -rm -rf dist - -mkdir -p build -mkdir -p dist - -bash ./init_pkg.sh -y -cp ../hubble.tar.gz dist/hubble.tar.gz -mv ../hubble.tar.gz build/hubble.tar.gz -mkdir build/hubblestack-2.2.4 -tar -xzvf build/hubble.tar.gz -C build/hubblestack-2.2.4 -mkdir -p build/hubblestack-2.2.4/etc/init.d -cp ./hubble build/hubblestack-2.2.4/etc/init.d -mkdir -p build/hubblestack-2.2.4/usr/lib/systemd/system -cp ./hubble.service build/hubblestack-2.2.4/usr/lib/systemd/system -cp -f ../conf/hubble build/hubblestack-2.2.4/etc/hubble/hubble -cd build -tar -czvf hubblestack-2.2.4.tar.gz hubblestack-2.2.4/ -mkdir -p rpmbuild/{RPMS,SRPMS,BUILD,SOURCES,SPECS,tmp} - -cp hubblestack-2.2.4.tar.gz rpmbuild/SOURCES/ -cd rpmbuild - -cp ../../specs/* SPECS/ - -rpmbuild --define "_topdir $(pwd)" --define "_tmppath %{_topdir}/tmp" -ba SPECS/hubblestack-el6.spec -cp RPMS/x86_64/hubblestack-2.2.4-1.x86_64.rpm ../../dist/hubblestack-2.2.4-1.el6.x86_64.rpm -rpmbuild --define "_topdir $(pwd)" --define "_tmppath %{_topdir}/tmp" -ba SPECS/hubblestack-el7.spec -cp RPMS/x86_64/hubblestack-2.2.4-1.x86_64.rpm ../../dist/hubblestack-2.2.4-1.el7.x86_64.rpm diff --git a/pkg/centos6/Dockerfile b/pkg/centos6/Dockerfile index a84f9ce0e..8e212b5f4 100644 --- a/pkg/centos6/Dockerfile +++ b/pkg/centos6/Dockerfile @@ -93,8 +93,8 @@ RUN yum install -y rpmbuild gcc make rh-ruby23 rh-ruby23-ruby-devel \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/centos7/Dockerfile b/pkg/centos7/Dockerfile index 6f60bbcdc..031f5f735 100644 --- a/pkg/centos7/Dockerfile +++ b/pkg/centos7/Dockerfile @@ -90,8 +90,8 @@ RUN yum install -y 
ruby ruby-devel rpmbuild rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/coreos/Dockerfile b/pkg/coreos/Dockerfile index 5a844c356..5406ac1d5 100644 --- a/pkg/coreos/Dockerfile +++ b/pkg/coreos/Dockerfile @@ -88,8 +88,8 @@ RUN pip install --upgrade pip \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/debian7/Dockerfile b/pkg/debian7/Dockerfile index f0b260410..a27fdbdc1 100644 --- a/pkg/debian7/Dockerfile +++ b/pkg/debian7/Dockerfile @@ -115,8 +115,8 @@ RUN apt-get install -y ruby ruby-dev rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/debian8/Dockerfile b/pkg/debian8/Dockerfile index 3c649996e..b3e8286ac 100644 --- a/pkg/debian8/Dockerfile +++ b/pkg/debian8/Dockerfile @@ -96,8 +96,8 @@ RUN apt-get install -y ruby ruby-dev rubygems gcc make \ #pyinstaller start #commands specified for 
ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/debian9/Dockerfile b/pkg/debian9/Dockerfile index 1deaf9360..47a130409 100644 --- a/pkg/debian9/Dockerfile +++ b/pkg/debian9/Dockerfile @@ -92,8 +92,8 @@ RUN apt-get install -y ruby ruby-dev rubygems gcc make \ #pyinstaller start #commands specified for ENTRYPOINT and CMD are executed when the container is run, not when the image is built #use the following variables to choose the version of hubble -ENV HUBBLE_CHECKOUT=v2.2.4 -ENV HUBBLE_VERSION=2.2.4 +ENV HUBBLE_CHECKOUT=v2.2.5 +ENV HUBBLE_VERSION=2.2.5 ENV HUBBLE_GIT_URL=https://github.com/hubblestack/hubble.git ENV HUBBLE_SRC_PATH=/hubble_src ENV _HOOK_DIR="./pkg/" diff --git a/pkg/specs/hubblestack-el6.spec b/pkg/specs/hubblestack-el6.spec deleted file mode 100644 index 3083b2e83..000000000 --- a/pkg/specs/hubblestack-el6.spec +++ /dev/null @@ -1,92 +0,0 @@ -# Don't try fancy stuff like debuginfo, which is useless on binary-only -# packages. 
Don't strip binary too -# Be sure buildpolicy set to do nothing -%define __spec_install_post %{nil} -%define debug_package %{nil} -%define __os_install_post %{_dbpath}/brp-compress -# Don't fail out because we're not packaging the other distro's service files -%define _unpackaged_files_terminate_build 0 - -Summary: Hubblestack is a module, open-source security compliance framework -Name: hubblestack -Version: 2.2.4 -Release: 1 -License: Apache 2.0 -Group: Development/Tools -SOURCE0: %{name}-%{version}.tar.gz -URL: https://hubblestack.io -Autoreq: 0 -Autoprov: 0 -Requires: git -Provides: osquery - -BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root - -%description -%{summary} - -%prep -%setup -q - -%build -# Empty section. - -%install -rm -rf %{buildroot} -mkdir -p %{buildroot} -mkdir -p %{buildroot}/usr/bin -ln -s /opt/hubble/hubble %{buildroot}/usr/bin/hubble -ln -s /opt/osquery/osqueryi %{buildroot}/usr/bin/osqueryi -ln -s /opt/osquery/osqueryd %{buildroot}/usr/bin/osqueryd - -# in builddir -cp -a * %{buildroot} - - -%clean -rm -rf %{buildroot} - - -%files -%config(noreplace) /etc/hubble/hubble -%config /etc/osquery/osquery.conf -%config /etc/osquery/osquery.flags -/etc/init.d/hubble -/opt/* -/usr/bin/* - -%changelog -* Fri Apr 7 2017 Colton Myers 2.1.7-1 -- Force config and logs to 600 permissions to hide tokens -- Splunk returners: Fix for hosts with misconfigured FQDN (no localhost IPs, please!) 
- -* Mon Apr 3 2017 Colton Myers 2.1.6-1 -- Fix pulsar loading -- Fix splay in scheduler - -* Wed Mar 22 2017 Colton Myers 2.1.5-1 -- Reduce fileserver frequency by default -- Fix pidfile management -- Add %config macros -- Multi-endpoint support in splunk returners - -* Tue Mar 7 2017 Colton Myers 2.1.4-1 -- Consolidate pillar and allow for multiple splunk endpoints in splunk returners -- Move output formatting code out of nova modules into hubble.py -- Add error handling for problems in config file - -* Tue Feb 28 2017 Colton Myers 2.1.3-1 -- Add Debian packaging -- Add uptime fallback query -- Fix for blank hosts when fqdn doesn't return anything in returners -- Add AWS information to events from splunk returners -- Turn off http_event_collector_debug everywhere - -* Mon Feb 13 2017 Colton Myers 2.1.2-1 -- Fix the changelog order - -* Mon Feb 13 2017 Colton Myers 2.1.1-1 -- Remove autoreq, add unit files - -* Wed Feb 8 2017 Colton Myers 2.1.0-1 -- First Build diff --git a/pkg/specs/hubblestack-el7.spec b/pkg/specs/hubblestack-el7.spec deleted file mode 100644 index 2fedfec49..000000000 --- a/pkg/specs/hubblestack-el7.spec +++ /dev/null @@ -1,92 +0,0 @@ -# Don't try fancy stuff like debuginfo, which is useless on binary-only -# packages. Don't strip binary too -# Be sure buildpolicy set to do nothing -%define __spec_install_post %{nil} -%define debug_package %{nil} -%define __os_install_post %{_dbpath}/brp-compress -# Don't fail out because we're not packaging the other distro's service files -%define _unpackaged_files_terminate_build 0 - -Summary: Hubblestack is a module, open-source security compliance framework -Name: hubblestack -Version: 2.2.4 -Release: 1 -License: Apache 2.0 -Group: Development/Tools -SOURCE0: %{name}-%{version}.tar.gz -URL: https://hubblestack.io -Autoreq: 0 -Autoprov: 0 -Requires: git -Provides: osquery - -BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root - -%description -%{summary} - -%prep -%setup -q - -%build -# Empty section. 
- -%install -rm -rf %{buildroot} -mkdir -p %{buildroot} -mkdir -p %{buildroot}/usr/bin -ln -s /opt/hubble/hubble %{buildroot}/usr/bin/hubble -ln -s /opt/osquery/osqueryi %{buildroot}/usr/bin/osqueryi -ln -s /opt/osquery/osqueryd %{buildroot}/usr/bin/osqueryd - -# in builddir -cp -a * %{buildroot} - - -%clean -rm -rf %{buildroot} - - -%files -%config(noreplace) /etc/hubble/hubble -%config /etc/osquery/osquery.conf -%config /etc/osquery/osquery.flags -/opt/* -/usr/bin/* -/usr/lib/* - -%changelog -* Fri Apr 7 2017 Colton Myers 2.1.7-1 -- Force config and logs to 600 permissions to hide tokens -- Splunk returners: Fix for hosts with misconfigured FQDN (no localhost IPs, please!) - -* Mon Apr 3 2017 Colton Myers 2.1.6-1 -- Fix pulsar loading -- Fix splay in scheduler - -* Wed Mar 22 2017 Colton Myers 2.1.5-1 -- Reduce fileserver frequency by default -- Fix pidfile management -- Add %config macros -- Multi-endpoint support in splunk returners - -* Tue Mar 7 2017 Colton Myers 2.1.4-1 -- Consolidate pillar and allow for multiple splunk endpoints in splunk returners -- Move output formatting code out of nova modules into hubble.py -- Add error handling for problems in config file - -* Tue Feb 28 2017 Colton Myers 2.1.3-1 -- Add Debian packaging -- Add uptime fallback query -- Fix for blank hosts when fqdn doesn't return anything in returners -- Add AWS information to events from splunk returners -- Turn off http_event_collector_debug everywhere - -* Mon Feb 13 2017 Colton Myers 2.1.2-1 -- Fix the changelog order - -* Mon Feb 13 2017 Colton Myers 2.1.1-1 -- Remove autoreq, add unit files - -* Wed Feb 8 2017 Colton Myers 2.1.0-1 -- First Build
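For reference, the permission comparison that the `_check_mode`/`_is_permission_in_limit` helpers patched in stat_nova.py implement can be sketched with bitwise masking. This is an illustrative standalone sketch, not the actual Hubble code; function names and structure are assumptions:

```python
def is_permission_in_limit(max_permission, given_permission):
    """Return True only if given_permission is not more lenient than
    max_permission, i.e. every bit (r=4, w=2, x=1) set in
    given_permission is also set in max_permission.
    Inputs are single octal digits 0-7."""
    return (int(given_permission) & ~int(max_permission)) == 0


def check_mode(max_permission, given_permission, allow_more_strict):
    """Compare two 3-digit octal mode strings (e.g. '644').
    Without allow_more_strict the modes must match exactly; with it,
    each digit of given_permission may be equal or more restrictive."""
    if not allow_more_strict:
        return max_permission == given_permission
    return all(is_permission_in_limit(m, g)
               for m, g in zip(max_permission, given_permission))
```

For example, a file at mode `600` passes a `644` check when `allow_more_strict` is set, while `755` fails it because the group/other digits gain bits that `644` does not allow.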
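The config-defined grains introduced in configgrains.py above walk a list of single-key dictionaries under `hubblestack:grains`, where each key is the grain name and each value is a config path to look up. A simplified sketch of that loop, using a stand-in `config_get` instead of Salt's `__salt__['config.get']` (the real one also traverses lists; this stand-in handles nested dicts only):

```python
def config_get(config, path, default=None):
    """Stand-in for Salt's config.get: walk a colon-separated path
    through nested dictionaries, returning default if any step is missing."""
    cur = config
    for part in path.split(':'):
        if not isinstance(cur, dict) or part not in cur:
            return default
        cur = cur[part]
    return cur


def config_grains(config):
    """Build custom grains from config['hubblestack']['grains'], a list
    of {grain_name: config_path} dictionaries."""
    grains = {}
    for grain in config_get(config, 'hubblestack:grains', default=[]):
        for name, path in grain.items():
            value = config_get(config, path)
            if value:  # skip grains whose config path resolves to nothing
                grains[name] = value
    return grains
```

With a config containing `hubblestack: grains: [{splunkindex: "hubblestack:returner:splunk:index"}]` and an index of `hubble` at that path, this yields `{'splunkindex': 'hubble'}`.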
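Patch 34 above repairs `_dict_update` in pulsar.py and win_pulsar.py to recurse into itself rather than the undefined name `update`. The recursive merge it restores behaves roughly like this simplified sketch (not the exact pulsar implementation, which carries extra options):

```python
from collections.abc import Mapping


def dict_update(dest, upd, merge_lists=False):
    """Recursively merge upd into dest in place: nested dicts merge,
    lists concatenate (deduplicated) when merge_lists is set, and
    anything else overwrites. Returns dest for convenience."""
    for key, val in upd.items():
        dest_val = dest.get(key)
        if isinstance(dest_val, Mapping) and isinstance(val, Mapping):
            # both sides are mappings: recurse (the bug fixed in patch 34)
            dest[key] = dict_update(dest_val, val, merge_lists=merge_lists)
        elif isinstance(dest_val, list) and isinstance(val, list) and merge_lists:
            dest[key] = dest_val + [x for x in val if x not in dest_val]
        else:
            dest[key] = val
    return dest
```

Before the fix, any merge that reached a nested dictionary raised a `NameError` on `update`, so deeply nested pulsar/top configurations could never be combined.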