This document provides information useful to ComplianceAsCode/content project contributors. It guides you through the structure of the project and explains the directory layout, the formats used, and the build system.
Join the mailing list at https://fedorahosted.org/mailman/listinfo/scap-security-guide.
In May 2014 the SCAP Security Guide project moved its source repository from FedoraHosted to GitHub. To register for a free GitHub account, visit https://github.com/join. Registering for a GitHub account is straightforward.
If you envision committing code and needing direct push access to the repository (as opposed to GitHub's pull request system), send a quick hello to the mailing list introducing yourself. The community may already know you, but this is an opportunity to reintroduce yourself and update the community on the areas you'd like to contribute to. This need not be formal, though don't forget to include your GitHub account name! Pending approval from an existing maintainer, you will be added to the OpenSCAP Content Developers group on GitHub: https://github.com/orgs/OpenSCAP/teams/content-authors.
- Intro on the GitHub pull request system …
Visit SSG’s GitHub webpage at https://github.com/OpenSCAP/scap-security-guide. In the top-right corner, you will see a button that says "Fork." Click it.
If you’re a member of multiple GitHub groups/accounts, you will be asked for the fork destination. For most users it is sufficient to fork into their local account. To do so, click on your username/icon. For example:
Congratulations, you've created your own fork of the repository! Any changes you make are confined to your fork; consider it your own sandbox. At this point you can 'git clone' the source over SSH, HTTPS, or Subversion.
GitHub dynamically generates the appropriate git URLs, and dynamically generates zip-compressed archives should you desire them. In the right-most column, near the bottom, you will see the various git clone options:
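For example, cloning your fork over SSH and adding the upstream repository as a second remote (so you can later pull the latest upstream changes, as done below) might look like this; substitute your own GitHub username:

$ git clone git@github.com:_username_/scap-security-guide.git
$ cd scap-security-guide/
$ git remote add upstream https://github.com/OpenSCAP/scap-security-guide.git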
When you begin to work on your patch, make sure that you are on the right, up-to-date branch. Typically it is the master branch, so you can:

$ git checkout master
$ git pull upstream master

Then, create a new branch for your fix, e.g.:

$ git checkout -b my_new_feature
Proceed with your work.
If your work can be logically divided into multiple parts, try to structure it in a way that you avoid creating huge commits that affect logically unrelated parts of the project.
When you are ready to submit a pull request, push your branch to your forked repository:

$ git push -u origin my_new_feature
The GitHub website supports in-line editing of files. This is extremely convenient when making small changes, such as fixing typos. When you’ve found a file in need of edits, note the "Edit" button within the file’s toolbar:
This will bring you to an in-line editor. Make your changes and scroll to the bottom of the webpage. You will notice a "Commit Changes" form. The first field is a one-line description of the change, while within the second (main body) you are expected to provide a detailed description of any changes. Your entries in this field should be as concise as possible while providing enough description for a community member to properly evaluate your changes, and the logic for making them. For example:
Click on "Commit changes," which will push the change to your local repository.
When you’re ready for your patches to be merged upstream, you must issue a "Pull Request."
1) Return to your forked repository's webpage on GitHub. NOTE: If you've created local branches, ensure you've selected the appropriate branch from which you'd like to submit patches. For most people, this step can be ignored.
2) Click on "Pull Request," located in the top-right of the frame which lists your directory contents:
3) You will be brought to a listing of your commits. Click the green button labelled "Create Pull Request":
4) You will be requested to input a patch title and description. Be concise, but thorough enough for a community member to understand the logic behind your changes. Paste testing evidence into the description field (e.g. the output of running testoval.py on any submitted OVAL, or before/after output for remediation scripts).
If you work on a feature or a bugfix that has an associated issue:
- Assign yourself to the issue (if you have the rights and no one is assigned), or tell the assignee that you are working on the fix (so the issue can be reassigned).
- Mention the issue number in the pull request.
This will improve the odds that multiple people won’t work on a single issue without being aware of each other’s work. After completing the form, select "Send pull request":
5) Don't use git commands that alter the commit history while you work on the pull request. If you squash commits, for example, the pull request page will be broken: if you made some mistakes, got feedback, and corrected those mistakes accordingly, nobody will be able to learn from your pull request, because the commits introducing the mistakes will disappear and the reviewers' comments won't make sense anymore. Squash unnecessary commits when merging the pull request, or close a complicated pull request and create a new one (in another branch) with streamlined commits. Reference the old PR in the new, streamlined pull request so it is possible to backtrack what went on.
6) A community member will review your patch. They will either merge the patch upstream, indicate additional changes/documentation needed, or decline the patch. You’ll automatically be notified via e-mail of any status updates.
The following steps are required to build an official release of the SCAP Security Guide RPM. Please note that exceptionally few people have such access.
1) Update the main scap-security-guide.spec file (scap-security-guide/scap-security-guide.spec) with the new version (the value of the "redhatssgversion" variable). Ensure that the "Release:" field contains 1%{?dist} (1 as the release number). Add the corresponding changelog entry (and check for & fix whitespace noise).
2) Build and test the content (i.e. run 'make', 'make srpm', and 'make rpm') to verify it builds successfully. Also try to scan some systems with selected profiles to see whether the content works.
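For instance, a typical build-and-scan session might look like the following sketch (the profile and file names are illustrative; pick ones that exist for your product):

$ make
$ make srpm
$ make rpm
$ oscap xccdf eval --profile stig-rhel6-server --results results.xml ssg-rhel6-xccdf.xml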
3) If it works, run 'make clean' in the git repository to start with a clean slate. Make the source tarball via "make tarball". Upload the tarball to repos.ssgproject.org.
1) File a new EPEL-6 Bugzilla (Summary = "Upgrade scap-security-guide package to scap-security-guide-X.Y.Z"). NOTE: That Bugzilla is required later when creating the Bodhi update request. See below. NOTE: It would eventually be created automagically once the "latest upstream source tarball checking" Red Hat Bugzilla functionality realized there is a new source tarball available. But since we want an immediate upgrade, we create that bug manually.
2) Take that Bugzilla (state change NEW ⇒ ASSIGNED).
3) Clone the scap-security-guide git repository via the fedpkg tool (as documented in https://fedoraproject.org/wiki/Join_the_package_collection_maintainers#Import.2C_Commit.2Cand_Build_Your_Package, section "Check out the module" and later ones). Broken down to our case, it means:
$ fedpkg clone scap-security-guide
$ cd scap-security-guide/
4) Ensure you change the git branch from master to el6 via fedpkg's 'switch-branch' option (this ensures the changes will actually be committed into the EPEL-6 branch, and not into master, IOW the F-21 branch, which we don't want). To see the list of available branches, issue the following:
$ fedpkg switch-branch -l
Locals:
* master
Remotes:
  origin/el6
  origin/epel7
  origin/f18
  origin/f19
  origin/f20
  origin/master
To switch to the el6 branch, issue:
$ fedpkg switch-branch el6
Branch el6 set up to track remote branch el6 from origin

Now it's possible to see the actual content of the EPEL-6 branch:
$ ls
scap-security-guide.spec  sources
scap-security-guide.spec is the spec file used to build the EPEL-6 RPM; the sources text file contains the md5sum of the scap-security-guide tarball, which will be used during the SRPM / RPM build.
5) To refresh both of them (the *.spec file & the content of sources) at once, it's possible to create a source RPM package & import it into fedpkg. Two important notes to mention here:
- The spec file needs to be the updated one ⇒ it's necessary to update the actual epel-6 spec with changes from upstream, or to replace the epel-6 spec with the upstream one (the latter is still possible because, as of right now, there aren't any epel-6 downstream-specific patches that wouldn't already be present upstream. But should there be changes in the future, the epel-6 spec would have to be updated to include the changes from the upstream spec while simultaneously keeping the epel-6 custom patches. IOW just replacing the epel-6 spec with the upstream one wouldn't work; manual changes would be necessary).
- The new source tarball needs to be the latest one uploaded to repos.ssgproject.org (so the md5sum will match during the package build).
This means (see the sketch below):

- start with a clean ~/rpmbuild directory structure
- download the latest tarball from repos.ssgproject.org into ~/rpmbuild/SOURCES
- place the modified epel-6 spec file into ~/rpmbuild/SPECS
- build the source RPM (the result will be in ~/rpmbuild/SRPMS)
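A shell sketch of those four steps (this assumes the rpmdevtools package provides rpmdev-setuptree; the tarball URL path and version are illustrative):

$ rm -rf ~/rpmbuild
$ rpmdev-setuptree
$ wget -P ~/rpmbuild/SOURCES http://repos.ssgproject.org/sources/scap-security-guide-X.Y.Z.tar.gz
$ cp scap-security-guide.spec ~/rpmbuild/SPECS/
$ rpmbuild -bs ~/rpmbuild/SPECS/scap-security-guide.spec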
Next, return to fedpkg & import the SRPM created in the previous step:
$ fedpkg import path_to_rpm
This will change the content of the 'sources' file (to include the new md5sum) & update scap-security-guide.spec.
$ git status   [to see what will get committed]
$ git commit   [to confirm the changes]

The commit message should contain the string "Resolves: rhbz#id_of_epel_bug_we_created_before".
Make a scratch build to verify that the uploaded content (spec + tarball) would actually build in the Koji build system:
$ fedpkg scratch-build --srpm path_to_srpm_created_locally_before
NOTE: For scratch-build to work with the actually committed git repository content, the new content would already have to be "git push-ed" to the repository. But since we want to verify that the content builds even before pushing the changes into the EPEL-6 repository, we provide the --srpm option pointing fedpkg to the local source RPM package we created one step before.
Once the scratch build passes (visible in Koji web interface, or also on command line), we can push the changes to the git repository via:
$ git push origin el6
After a successful push, the latest commit should be visible (in the el6 branch) at http://pkgs.fedoraproject.org/cgit/scap-security-guide.git/
Now it's safe (the scratch build succeeded & we pushed the changes to Fedora's git) to build the real new package via:
$ fedpkg build
This again generates a clickable link where it's possible to watch the progress / result of the build. Once the new package build in Koji finishes successfully, flip the previously created EPEL-6 bug to MODIFIED (ASSIGNED ⇒ MODIFIED) and mention the new package name-version-release in the "Fixed in Version:" field of that bug.
6) Having the new build available, it's necessary to schedule a new Bodhi update (something like an advisory to be tied with the new package). I am using the web UI, but there's a command-line interface too (see [1] for further details).
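For the record, with newer bodhi-client releases a roughly equivalent command-line invocation could look like the sketch below; the exact flags differ between bodhi-client versions, so treat this as an assumption rather than a recipe:

$ bodhi updates new --type bugfix --request testing --bugs 1234567 \
    --notes "Upgrade to the latest upstream release" scap-security-guide-0.1-16.el6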
The "Add New Update" screen is shown, containing the following fields / items:
Add New Update
Package: name-version-release of Koji build goes here (e.g. scap-security-guide-0.1-16.el6)
Type: select one of the following options:

- bugfix (intended for updates fixing bugs)
- enhancement (intended for adding new features)
- security (intended for fixing security flaws)
- newpackage (intended for updates introducing new RPM packages)
Request: select the "testing" option, one of:

- testing (intended for updates that should reach the -testing repo first, before -stable)
- stable (updates going directly into -stable (maybe for critical fixes))
- none (don't use this)
Bugs: Provide previously created EPEL-6 RH BZ#, ensure the "Close bugs when update is stable" option is checked!
Note: Describe the changes in this text field (i.e. which bugs got fixed, which new functionality was added, etc.). The content of this field appears in the advisory (sent to the fedora-package-announce mailing list) when the build is pushed to -stable.
Suggest Reboot: [] (generally leave unchecked)
Enable karma automatism [v] (Whether to use the karma thresholds that the update push system uses to decide if the build should be pushed to the -stable channel or not)
Threshold for pushing to -stable [3] (Minimum level of karma the build needs to obtain from package testers to be able to be pushed into the -stable channel)
Threshold for unpushing [-3] (Lower bound for negative karma, which is a sign for the push system to remove the package from the -testing repository. IOW the build has received so much negative karma/experience that it's not usable even for the -testing repository and should be rebuilt)
Once all the information is filled in, click "Save Update." This will generate an automated e-mail about the build being pushed to -testing. Some time later the same day (depending on the TZ) the build is pushed to the -testing repository.
The maintainer should check the Bodhi page of that update for positive/negative karma/comments. If the build has reached positive karma >= 3, it can be pushed to -stable. If it hasn't reached positive karma >= 3 within 7 days, it can be pushed to -stable anyway; 7 days is considered a sufficient period. If there are signs of negative karma, the build should be either unpushed or deleted & a new one made.
After 7 days the build can be pushed to -stable (under the assumption it didn't reach positive karma >= 3 sooner), meaning that in the next day or two it is reachable directly via yum from the epel-6 repository.
Under the top level directory, there are directories and/or files for different products, shared content, documentation, READMEs, Licenses, build files/configuration, etc.
For example:
$ ls -1 scap-security-guide
build
build_config.yml.in
BUILD.md
build-scripts
chromium
cmake
CMakeLists.txt
Contributors.md
Contributors.xml
debian8
DISCLAIMER
Dockerfile
docs
eap6
fedora
firefox
fuse6
jre
LICENSE
linux_os
ocp3
ol7
opensuse
README.md
rhel6
rhel7
rhel-osp7
shared
sle11
sle12
tests
ubuntu1404
ubuntu1604
utils
wrlinux
Directory | Description |
---|---|
build | Can be used to build the content using CMake. |
build-scripts | Scripts used by the build system. |
cmake | Contains the CMake build configuration files. |
docs | Contains the Markdown manuals, man pages, etc. |
shared | Contains content, tools and utilities, scripts, etc. that can be used for multiple products. |
tests | Contains the test suite for content validation and testing. |
utils | Miscellaneous scripts used for development but not used by the build system. |
The remaining directories, such as fedora, rhel7, etc., are product directories.
File | Description |
---|---|
BUILD.md | Contains the content build instructions |
CMakeLists.txt | Top-level CMake build configuration file |
Contributors.md | DO NOT MANUALLY EDIT script-generated file |
Contributors.xml | DO NOT MANUALLY EDIT script-generated file |
DISCLAIMER | Disclaimer for usage of content |
Dockerfile | CentOS7 Docker build file |
LICENSE | Content license |
README.md | Project README file |
When creating a new product, use the guidelines below for the directory layout:

- Do not use capital letters.
- If product versions are required, use major versions only. For example: rhel7, ubuntu16, etc.
- If the content produced does not depend on versions, do not add version numbers. For example: fedora, firefox, etc.
- In addition, use a maximum depth of 3 directories.
- See the README inside the scap-security-guide/example folder for more information about the changes needed.

Following these guidelines helps with the usability and browsability of the content.
For example:
$ tree -d rhel7
rhel7
├── checks
│ └── oval
├── cpe
├── fixes
│ ├── ansible
│ └── bash
├── kickstart
├── overlays
├── profiles
├── templates
│   └── csv
├── transforms
└── utils
13 directories
Directory | Description |
---|---|
checks | Product-specific check content; OVAL checks live in checks/oval |
cpe | Product-specific CPE content |
fixes | Product-specific remediations, split by language (fixes/ansible, fixes/bash) |
kickstart | Kickstart files for the product |
overlays | Overlay files, such as stig_overlay.xml |
profiles | Profile definitions for the product |
templates | Product templating content; templates/csv holds the CSV inputs |
transforms | Transforms used when building the product content |
utils | Product-specific utility scripts |
stig_overlay.xml maps an official product/version STIG release to an SSG product/version STIG release. stig_overlay.xml should never be manually created or updated; it should always be generated using create-stig-overlay.py.
To create stig_overlay.xml, two things are required: an official non-draft STIG release from DISA containing an XCCDF file (e.g. U_Red_Hat_Enterprise_Linux_7_STIG_V1R1_Manual-xccdf.xml) and an SSG-generated XCCDF file (e.g. ssg-rhel7-xccdf.xml).
Example using create-stig-overlay.py:
$ PYTHONPATH=`./.pyenv.sh` utils/create-stig-overlay.py --disa-xccdf=disa-stig-rhel7-v1r12-xccdf-manual.xml --ssg-xccdf=ssg-rhel7-xccdf.xml -o rhel7/overlays/stig_overlay.xml
To update stig_overlay.xml, use the create-stig-overlay.py script as mentioned above. Then, submit a pull request replacing the stig_overlay.xml file that needs to be updated. Please note that as a part of this update, rules that have been removed from the official STIG will be removed here as well.
To run the Python utilities (those ending in .py), you will need to have the PYTHONPATH environment variable set. This can be accomplished in one of two ways: by prefixing all commands with a local variable (PYTHONPATH=/path/to/scap-security-guide), or by exporting PYTHONPATH in your shell environment. We provide a script to make this easier: .pyenv.sh. To set PYTHONPATH correctly for the current shell, simply call source .pyenv.sh. For more information on how to use this script, please see the comments at the top of the file.
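For example, both of the following invocations work (the OVAL file name is taken from the testoval.py example in the next section):

$ PYTHONPATH=/path/to/scap-security-guide ./utils/testoval.py install_hid.xml

or, setting the variable once for the current shell:

$ source .pyenv.sh
$ ./utils/testoval.py install_hid.xml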
Located in the utils directory, the testoval.py script allows easy testing of OVAL definitions. It wraps a definition and makes up an OVAL file ready for scanning; this is very useful for testing new OVAL content or modifying existing content.
Example usage:
$ PYTHONPATH=`./.pyenv.sh` ./utils/testoval.py install_hid.xml
Create or add an alias to the script so that you don't have to type out the full path every time you would like to use the testoval.py script.
$ alias testoval='/home/_username_/scap-security-guide/utils/testoval.py'
An alternative is adding the directory where testoval.py resides to your PATH.
$ export PATH=$PATH:/home/_username_/scap-security-guide/utils/
There are three main types of content in SSG: rules, defined using the XCCDF standard; checks, usually written in the OVAL format; and remediations, which can be executed via Ansible, Bash, the Anaconda installer or Puppet. SSG also has its own templating mechanism, allowing content writers to create models and use them to generate a number of checks and remediations.
The SSG content is primarily divided by platform, as can be seen in its directory structure:
scap-security-guide/
├── _build_
├── _build-scripts_
├── chromium
├── debian8
├── _docs_
├── fedora
├── firefox
├── jboss_eap6
├── jboss_fuse6
├── jre
├── opensuse
├── rhel6
├── rhel7
├── rhosp7
├── shared
├── sle11
├── sle12
├── ubuntu14
├── ubuntu16
├── _utils_
└── wrlinux
Except for build and docs, each directory contains checks and remediations that are useful and make sense to be used on that platform. The shared directory contains checks and remediations that can be used by more than one platform.
Contributions can be made for rules, checks, remediations or even utilities. There are different sets of guidelines for each type; for this reason, each of them has its own topic below.
Rules are input described in YAML which mirrors the XCCDF format (an XML container). Rules are translated to become members of a Group in an XML file.
All existing rules for Linux products can be found in the linux_os/guide directory. For non-Linux products (e.g., jre), this content can be found in <product>/guide.
The exact location depends on the group (or category) that a rule belongs to. For an example of a rule group, see linux_os/guide/system/software/disk_partitioning/partition_for_tmp.rule. The id of this rule is partition_for_tmp; this rule belongs to the disk_partitioning group, which in turn belongs to the software group (which in turn belongs to the system group). Because this rule is in linux_os/guide, it can be shared by all Linux products.
Rules describe the desired state of the system and may contain references if they are parts of higher-level standards. All rules should reflect only a single configuration change for compliance purposes.
Structurally, a rule is a YAML file (which can contain Jinja macros) that represents a dictionary.
A rule YAML file has one implied attribute:

- id: The primary identifier for the rule, to be referenced from profiles. This is inferred from the file name and links the rule to checks and fixes with the same file name.
A rule itself contains these attributes:

- severity: Can be unknown, low, medium, or high.
A rule must contain these attributes:

- title: Human-readable title of the rule.
- rationale: Human-readable HTML description of the reason why the rule exists and why it is important from the technical point of view. For example, the rationale of the partition_for_tmp rule states that: The <tt>/tmp</tt> partition is used as temporary storage by many programs. Placing <tt>/tmp</tt> in its own partition enables the setting of more restrictive mount options, which can help protect programs which use it.
- description: Human-readable HTML description, which provides broader context for non-experts than the rationale. For example, the description of the partition_for_tmp rule states that: The <tt>/var/tmp</tt> directory is a world-writable directory used for temporary file storage. Ensure it has its own partition or logical volume at installation time, or migrate it using LVM.
- ocil: Defines asserting statements to check whether or not the rule is valid.
- ocil_clause: This attribute contains the statement which describes how to determine whether the statement is true or false. Check out encrypt_partitions.rule in linux_os/guide/system/software/disk_partitioning/: it contains a "partitions do not have a type of crypto_LUKS" value for ocil_clause. This clause is prefixed with the phrase "It is the case that".
A rule may contain these reference-type attributes:

- identifiers: This is related to products/CCEs that the rule applies to; this is a dictionary whose keys should be cce, each with a value. If cce is modified with a product (e.g., cce@rhel6), it restricts which products the identifier applies to.
- references: This is related to the compliance document line items that the rule applies to. These can be attributes such as stig, srg, nist, etc., whose keys may be modified with a product (e.g., stig@rhel6) to restrict which products the reference applies to.

When the rule is related to RHEL, it should have a CCE. Available CCEs that can be assigned to new rules are listed in the shared/references/cce-rhel-avail.txt file. See linux_os/guide/system/software/disk_partitioning/encrypt_partitions.rule for an example of reference-type attributes.
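Putting the attributes above together, a minimal rule file might look like the following sketch. It reuses the partition_for_tmp texts quoted above; the identifier and reference values are purely illustrative, so look them up properly before reusing them:

$ cat linux_os/guide/system/software/disk_partitioning/partition_for_tmp.rule
title: 'Ensure /tmp Located On Separate Partition'

description: |-
    The <tt>/var/tmp</tt> directory is a world-writable directory used
    for temporary file storage. Ensure it has its own partition or
    logical volume at installation time, or migrate it using LVM.

rationale: |-
    The <tt>/tmp</tt> partition is used as temporary storage by many
    programs. Placing <tt>/tmp</tt> in its own partition enables the
    setting of more restrictive mount options, which can help protect
    programs which use it.

severity: low

identifiers:
    cce@rhel7: CCE-27173-4

references:
    nist: SC-32
    stig@rhel7: RHEL-07-021340

ocil_clause: '"/tmp" is not mounted on its own partition or logical volume'

ocil: |-
    Verify that a separate partition or logical volume exists for
    <tt>/tmp</tt> with the following command:
    <pre>$ mount | grep '/tmp'</pre>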
Some existing rule definitions contain attributes that use macros. There are two implementations of macros:

- Jinja macros, which are defined in shared/macros.jinja and shared/macros-highlevel.jinja.
- Legacy XSLT macros, which are defined in shared/transforms/*.xslt.
For example, the ocil attribute of service_ntpd_enabled uses the ocil_service_enabled Jinja macro.
Due to the need to support Ansible output, which also uses Jinja, we had to modify the control sequences, so macro operations require one more curly brace. For example, an invocation of the partition macro looks like {{{ complete_ocil_entry_separate_partition(part="/tmp") }}} - there are three opening and closing curly braces instead of the two that are documented in the Jinja guide.
shared/macros.jinja contains specific low-level macros such as systemd_ocil_service_enabled, whereas shared/macros-highlevel.jinja contains general macros such as ocil_service_enabled, which decide which of the specialized macros to call based on the actual product being used. The macros that are likely to be used in descriptions begin with describe_, whereas macros likely to be used in OCIL entries begin with ocil_.
Sometimes, a rule requires ocil and ocil_clause to be specified, and they depend on each other. Macros that begin with complete_ocil_entry_ were designed for exactly this purpose, as they make sure that OCIL and OCIL clauses are defined and consistent. Macros that begin with underscores are not meant to be used in descriptions.
To parametrize rules and remediations, as well as Jinja macros, you can use product-specific variables defined in product.yml in the product root directory. Moreover, you can define implied properties, which are variables inferred from them. For example, you can define a condition that checks if the system uses yum or dnf as the package manager, and based on that populate a variable containing the correct path to the configuration file. The inferring logic is implemented in _get_implied_properties in ssg/yaml.py. Constants and mappings used in implied properties should be defined in ssg/constants.py.
Rules are unselected by default: even if the scanner reads a rule definition, it is effectively ignored during the scan or remediation. A rule may be selected by any number of profiles, so when the scanner is scanning using a profile the rule is included in, the rule is taken into account. For example, the rule identified by partition_for_tmp defined in shared/xccdf/system/software/disk_partitioning.xml is included in the RHEL7 C2S profile in rhel7/profiles/C2S.xml.
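Inside such a profile file, the selection is an ordinary XCCDF select element referencing the rule id, along the lines of this illustrative excerpt:

$ grep 'partition_for_tmp' rhel7/profiles/C2S.xml
<select idref="partition_for_tmp" selected="true"/>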
Checks are connected to rules by the oval element and the filename in which they are found. Remediations (i.e. fixes) are assigned to rules based on their basename. Therefore, the rule sshd_print_last_log has a bash fix associated, as there is a bash script shared/fixes/bash/sshd_print_last_log.sh. As there is also an Ansible playbook shared/fixes/ansible/sshd_print_last_log.yml, the rule has an Ansible fix associated as well.
The rule directory simplifies the structure of a rule and all of its associated content by placing it all under a common directory. The structure of a rule directory looks like the following example:
linux_os/guide/system/group/rule_id/rule.yml
linux_os/guide/system/group/rule_id/bash/ol7.sh
linux_os/guide/system/group/rule_id/bash/shared.sh
linux_os/guide/system/group/rule_id/oval/rhel7.xml
linux_os/guide/system/group/rule_id/oval/shared.xml
To be considered a rule directory, it must be a directory contained in a benchmark pointed to by some product. The directory must have a name that is the id of the rule, and must contain a file called rule.yml, which is a YAML rule description as described above. This directory can then contain the following subdirectories:
- anaconda — for Anaconda remediation content, ending in .anaconda
- ansible — for Ansible remediation content, ending in .yml
- bash — for Bash remediation content, ending in .sh
- oval — for OVAL check content, ending in .xml
- puppet — for Puppet remediation content, ending in .pp
In each of these subdirectories, a file named shared.ext will apply to all products and be included in all builds, but {{{ product }}}.ext will only get included in the build for {{{ product }}} (e.g., rhel7.xml above will only be included in the build of the rhel7 guide content and not in the ol7 content). Note that .ext must be substituted for the correct extension for content of that type (e.g., .sh for bash content). Further, all of these directories are optional and will only be searched for content if present. Lastly, the product naming of content will not override the contents of the platform or prodtype fields in the content itself (e.g., if rhel7 is not present in the rhel7.xml OVAL check platform specifier, it will be included in the build artifacts but later removed because it doesn't match the platform).
Currently the build system supports both rule files (discussed above) and rule directories. For example content in this format, please see the rules in linux_os/guide.
To interact with rule directories, the ssg.rules and ssg.rule_dir_stats modules have been created, as well as three utilities:

- utils/rule_dir_json.py — to generate a JSON tree describing the current content of all guides
- utils/rule_dir_stats.py — for analyzing the JSON tree and finding information about specific rules, products, or summary statistics
- utils/rule_dir_diff.py — for diffing two JSON trees (e.g., before and after a major change), using the same interface as rule_dir_stats.py

For more information about these utilities, please see their help text.
To interact with rule.yml files and the OVALs inside a rule directory, the following utilities are provided:
This utility modifies the prodtype field of rules. It supports several commands:
- mod_prodtype.py <rule_id> list - list the computed and actual prodtype of the rule specified by rule_id.
- mod_prodtype.py <rule_id> add <product> [<product> …] - add additional products to the prodtype of the rule specified by rule_id.
- mod_prodtype.py <rule_id> remove <product> [<product> …] - remove products from the prodtype of the rule specified by rule_id.
- mod_prodtype.py <rule_id> replace <replacement> [<replacement> …] - do the specified replacement transformations. A replacement transformation is of the form match~replace, where match and replace are comma-separated lists of products. If all of the products in match exist in the original prodtype of the rule, they are removed and the products in replace are added.
This utility requires an up-to-date JSON tree created by rule_dir_json.py.
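For example, to refresh the JSON tree and then inspect a rule (the rule id is illustrative; see the scripts' help text for additional options):

$ PYTHONPATH=`./.pyenv.sh` ./utils/rule_dir_json.py
$ PYTHONPATH=`./.pyenv.sh` ./utils/mod_prodtype.py partition_for_tmp list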
This utility modifies the <affected> element of an OVAL check. It supports several commands on a given rule:
- mod_checks.py <rule_id> list - list all OVALs, their computed products, and their actual platforms.
- mod_checks.py <rule_id> delete <product> - delete the OVAL for the specified product.
- mod_checks.py <rule_id> make_shared <product> - move the product OVAL to the shared OVAL (e.g., rhel7.xml to shared.xml).
- mod_checks.py <rule_id> diff <product> <product> - perform a diff between two OVALs (product can be shared to diff against the shared OVAL).
In addition, the mod_checks.py utility supports modifying the shared OVAL with the following commands:
- mod_checks.py <rule_id> add <platform> [<platform> …] - add the specified platforms to the shared OVAL for the rule specified by rule_id.
- mod_checks.py <rule_id> remove <platform> [<platform> …] - remove the specified platforms from the shared OVAL.
- mod_checks.py <rule_id> replace <replacement> [<replacement> …] - do the specified replacement against the platforms in the shared OVAL. See the description of replace under mod_prodtype.py for more information about the format of a replacement.
This utility requires an up-to-date JSON tree created by rule_dir_json.py.
This utility modifies the <affected> element of a remediation. It supports several commands on a given rule and for the specified remediation language:
- mod_fixes.py <rule_id> <lang> list - list all fixes, their computed products, and their actual platforms.
- mod_fixes.py <rule_id> <lang> delete <product> - delete the fix for the specified product.
- mod_fixes.py <rule_id> <lang> make_shared <product> - move the product fix to the shared fix (e.g., rhel7.sh to shared.sh).
- mod_fixes.py <rule_id> <lang> diff <product> <product> - perform a diff between two fixes (product can be shared to diff against the shared fix).
In addition, the mod_fixes.py utility supports modifying the shared fixes with the following commands:
- mod_fixes.py <rule_id> <lang> add <platform> [<platform> …] - add the specified platforms to the shared fix for the rule specified by rule_id.
- mod_fixes.py <rule_id> <lang> remove <platform> [<platform> …] - remove the specified platforms from the shared fix.
- mod_fixes.py <rule_id> <lang> replace <replacement> [<replacement> …] - do the specified replacement against the platforms in the shared fix. See the description of replace under mod_prodtype.py for more information about the format of a replacement.
This utility requires an up-to-date JSON tree created by rule_dir_json.py.
Checks are used to evaluate a rule. They are written using a custom OVAL syntax and are stored as XML files inside the checks/oval directory for the desired platform. During the build process, SSG transforms the checks into OVAL-compliant checks.
In order to create a new check, you must create a file in the appropriate directory and name it the same as the rule id. This id will also be used as the OVAL id attribute. The content of the file should follow the OVAL specification, with these exceptions:
- The root tag must be <def-group>.
- If the OVAL check has to be a certain OVAL version, you can add oval_version="oval_version_number" as an attribute to the root tag. Otherwise, if oval_version does not exist in <def-group>, it is assumed that the OVAL file applies to any OVAL version.
- Don't use the tags <definitions>, <tests>, <objects>, <states>; instead, put the tags <definition>, <*_test>, <*_object>, <*_state> directly inside the <def-group> tag.
- TODO Namespaces
This is an example of a check, written using the custom OVAL syntax, that verifies that the group owning the file /etc/cron.allow is root:
<def-group oval_version="5.11">
<definition class="compliance" id="file_groupowner_cron_allow" version="1">
<metadata>
<title>Verify group who owns 'cron.allow' file</title>
<affected family="unix">
<platform>Red Hat Enterprise Linux 7</platform>
</affected>
<description>The /etc/cron.allow file should be owned by the appropriate
group.</description>
</metadata>
<criteria>
<criterion test_ref="test_groupowner_etc_cron_allow" />
</criteria>
</definition>
<unix:file_test check="all" check_existence="any_exist"
comment="Testing group ownership /etc/cron.allow" id="test_groupowner_etc_cron_allow"
version="1">
<unix:object object_ref="object_groupowner_cron_allow_file" />
<unix:state state_ref="state_groupowner_cron_allow_file" />
</unix:file_test>
<unix:file_state id="state_groupowner_cron_allow_file" version="1">
<unix:group_id datatype="int">0</unix:group_id>
</unix:file_state>
<unix:file_object comment="/etc/cron.allow"
id="object_groupowner_cron_allow_file" version="1">
<unix:filepath>/etc/cron.allow</unix:filepath>
</unix:file_object>
</def-group>
Remediations, also called fixes, are used to change the state of the machine so that previously non-passing rules can pass. There can be multiple versions of the same remediation, meant to be executed by different applications; more specifically Ansible, Bash, Anaconda and Puppet. Remediations also have to be idempotent, meaning that they must be able to be executed multiple times without causing the fixes to accumulate. Ansible's language works in such a way that this behavior is built-in; for the other languages, however, the remediations must implement it explicitly. Remediations also carry metadata that should be present at the beginning of the files. This metadata will be converted into XCCDF tags during the build process. This is what it looks like and what it means:
# platform = multi_platform_all
# reboot = false
# strategy = restrict
# complexity = low
# disruption = low
Field | Description | Accepted values |
---|---|---|
platform | CPE name, CPE applicability language expression or even SSG wildcards declaring which platforms the fix can be applied to | The default CPE dictionary is packaged along with openscap. Custom CPE dictionaries can be used. SSG wildcards are multi_platform_[all, oval, fedora, debian, ubuntu, linux, rhel, openstack, opensuse, rhev, sle]. |
reboot | Whether or not a reboot is necessary after the fix | true, false |
strategy | The method or approach for making the described fix. Only informative for now | unknown, configure, disable, enable, patch, policy, restrict, update |
complexity | The estimated complexity or difficulty of applying the fix to the target. Only informative for now | unknown, low, medium, high |
disruption | An estimate of the potential for disruption or operational degradation that the application of this fix will impose on the target. Only informative for now | unknown, low, medium, high |
Important: The minimum supported Ansible version is 2.3.
Ansible remediations are stored as yml files in the directory /template/static/ansible under the targeted platform. They are meant to be executed by Ansible itself when requested by openscap, so they are written using Ansible's own language, with the following exceptions:
- The remediation content must be only the tasks section of what would be a playbook.
- The tags section must be present in each task, as shown in the example; it'll be replaced during the build process.
- Notifications and handlers are not supported.
Here is an example of an Ansible remediation that ensures SELinux is enabled in grub:
# platform = multi_platform_rhel,multi_platform_fedora
# reboot = false
# strategy = restrict
# complexity = low
# disruption = low

- name: Ensure SELinux Not Disabled in /etc/default/grub
  replace:
    dest: /etc/default/grub
    regexp: selinux=0
  tags:
    @ANSIBLE_TAGS@
Bash remediations are stored as shell script files in the directory /template/static/bash under the targeted platform. You can make use of any available command, but beware of overly specific or complex solutions, as they may lead to a narrow range of supported platforms. There are a number of already written bash remediation functions available in the shared/bash_remediation_functions/ directory; it is possible one of them is exactly what you are looking for.
Below is an example of a bash remediation that sets the maximum number of days a password may be used:
# platform = Red Hat Enterprise Linux 7

. /usr/share/scap-security-guide/remediation_functions
populate var_accounts_maximum_age_login_defs

grep -q ^PASS_MAX_DAYS /etc/login.defs && \
	sed -i "s/PASS_MAX_DAYS.*/PASS_MAX_DAYS $var_accounts_maximum_age_login_defs/g" /etc/login.defs
if [ $? -ne 0 ]; then
	echo "PASS_MAX_DAYS $var_accounts_maximum_age_login_defs" >> /etc/login.defs
fi
When writing new bash remediation content, please follow these guidelines:
- Use tabs for indentation rather than spaces.
- Prefer to use sed rather than awk.
- Try to keep expressions simple: avoid double negations, and use compound lists with moderation and only if you understand them.
- Test your script in "strict mode" with set -e -o pipefail specified at the top of it. Make sure that the script doesn't end prematurely in strict mode.
- Beware of constructs such as [ $x = 1 ] && echo "$x is one", as they violate the previous point. [ $x != 1 ] || echo "$x is one" is OK.
- Use the die function defined in remediation_functions to handle exceptions, such as [ -f "$config_file" ] || die "Couldn't find the configuration file '$config_file'".
- Run shellcheck over your remediation script. Make sure that you fix all warnings that are applicable. If you are not sure, mention those warnings in the pull request description.
- Use POSIX syntax in regular expressions, so prefer grep '^[[:space:]]*something' over grep '^\s*something'.
Often, a set of closely related checks and/or remediations needs to be created. Instead of creating them individually, you can use the templating mechanism provided by SSG. It supports OVAL checks and Ansible, Bash, Anaconda and Puppet remediations. In order to use this mechanism, you have to:
1) Create the template files, one for each type of file. Each one should be named template_<TYPE>_<NAME>.jinja, where <TYPE> is OVAL, ANSIBLE, BASH, ANACONDA or PUPPET, and <NAME> is what we will hereafter call the template name. Use the Jinja syntax we use elsewhere in the project; refer to the earlier section on Jinja macros for more information.
This is an example of an OVAL template file, called template_OVAL_package_installed:
include::{templatesdir}/template_OVAL_package_installed
And here is the Ansible template file called template_ANSIBLE_package_installed:
include::{templatesdir}/template_ANSIBLE_package_installed
2) Create a csv (comma-separated values) file in the PRODUCT/template/csv directory, with the same name as the template followed by the extension .csv. It should contain all the instances you want to generate from the template, one per line, using each line to supply values to the template variables. You can use # to start a comment that extends to the end of the line.
This is the file rhel7/template/csv/packages_installed.csv:
include::{rhel7dir}/template/csv/packages_installed.csv
3) Create a python file containing the generator class. The name of the file should start with create_, followed by the template name and the extension .py. The generator class name should also be the template name, in camel case, followed by Generator.
You have to define the function generate(self, target, argv), where the second argument represents the type of template currently being processed and the third argument is an array containing all the values from a single line of the csv file. This function will therefore be called once for each type of template and each line of the csv file.
Inside the generate function, you must call the function file_from_template, passing as parameters one of the template files you've created, the variables you've defined together with their values, and the name of the output file, which should be named in the same manner as if it had been created manually.
This is the file with the generator class for the installed package template; it's called create_package_installed.py:
include::{templatesdir}/create_package_installed.py
4) Finally, you have to make SSG aware of your template. To do that, edit the file ssg/build_templates.py, import the generator class you've just created, and declare which csv file to use along with it.
This is an example of a patch to add a new template into the templating system:
@@ -21,6 +21,7 @@
from create_sysctl import SysctlGenerator
from create_services_disabled import ServiceDisabledGenerator
from create_services_enabled import ServiceEnabledGenerator
+from create_package_installed import PackageInstalledGenerator
@@ -43,6 +44,7 @@ def __init__(self):
"sysctl_values.csv": SysctlGenerator(),
"services_disabled.csv": ServiceDisabledGenerator(),
"services_disabled.csv": ServiceDisabledGenerator(),
"services_enabled.csv": ServiceEnabledGenerator(),
+ "packages_installed.csv": PackageInstalledGenerator(),
}
self.supported_ovals = ["oval_5.10"]
SCAP Security Guide uses ctest to orchestrate testing upstream. To run the test suite, go to the build folder and execute ctest:

$ cd build/
$ ctest -j 4
Check out the various ctest options to perform specific testing; you can rerun just one test or skip all tests that match a regex (see -R, -E and other options in the ctest man page).
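For example, to rerun only the tests whose names match a given pattern and print the output of failing tests (the pattern is illustrative):

$ ctest -R 'stig' --output-on-failure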
Tests are added using the add_test cmake call. Each test should finish with a 0 exit code if everything went well and a non-zero exit code if something failed. Output (both stdout and stderr) is collected by ctest and stored in logs or displayed. Make sure you never hard-code a path to any tool when doing testing (or anything else, really) in the cmake code. Always use configuration to find all the paths, and then use the respective variable.
See some of the existing testing code in cmake/SSGCommon.cmake.
The SSG build and templating system is mostly written in Python.
- The common pattern is to dynamically add shared/modules to the import path. The ssgcommon module has many useful utility functions and predefined constants. See the scripts in ./build-scripts as an example of this practice.
- Follow the PEP8 standard.
- Try to keep most of your lines under 80 characters long. Although the 99-character limit is within PEP8 requirements, there is no reason for most lines to be that long.
This project was created by renaming the SCAP Security Guide (SSG) project, which provided security policies in the SCAP format. The project outgrew its former name, SCAP Security Guide, and changed it to imply a broader scope than just SCAP. Therefore, SCAP Security Guide was transformed into ComplianceAsCode/content, which better describes the goal of the project.
This git repository was created by simply renaming and moving the SCAP Security Guide (SSG) repository to a different GitHub organization.
Due to this history, the repository contains many mentions of SCAP Security Guide. These legacy mentions are continuously cleaned up as the project develops. Some of them are kept due to backwards compatibility.
For example, the output files produced by our build system still start with the ssg- prefix, and various Linux distributions still ship our files in the scap-security-guide package.