Introduction
This document guides the administrator through the installation of Nemo version 5.
Prerequisites
The following prerequisites must be met for a successful installation.
- At least 1 VM/Server for the Nemo central server must be available
- At least 1 VM/Server for the Nemo probe must be available, if applicable
- On all VMs, RedHat 8 Linux must be installed for compatibility with all the installation types. Nemo 5 is still backward compatible with RedHat 7 and CentOS 7. If the customer has purchased Netaxis support on the Linux system, RedHat 8 is mandatory.
- Nemo probe must have at least 2 network interfaces, one for management and one to receive the mirrored traffic
- The partition layout depends on the amount of data that the administrator wants to retain on Nemo. See the next section for an example.
Servers/VMs requirements
The servers/VMs requirements are highly dependent on the requested performance (for the probes, for example, it depends on packets per second received, bandwidth, packet rate on each interface, and maximum simultaneous calls for signaling/media/statistics). As a general reference, the administrator should consider 4 cores, a 2.4+ GHz clock speed, and 16 GB of RAM.
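A quick sanity check of a candidate host against this reference sizing can be done with standard Linux tools:

```shell
# Compare the host's resources against the suggested sizing (4 cores, 16 GB RAM).
CORES=$(nproc)
MEM_GB=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
echo "Cores: ${CORES}, RAM: ${MEM_GB} GB"
if [ "${CORES}" -ge 4 ] && [ "${MEM_GB}" -ge 15 ]; then
    echo "Host meets the suggested sizing"
else
    echo "Host is below the suggested sizing"
fi
```

The RAM check uses 15 GB rather than 16 because /proc/meminfo reports slightly less than the installed physical memory.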
Partition layout
The following partitions must be created on the Nemo central server and probes. Netaxis advises using LVM, to allow resizing if needed. The amount of disk space depends on the amount of data that the administrator wants to store.
Netaxis can deliver an estimation of the partition sizes if the following information is shared in advance:
- Number of CDRs per day
- CDR retention (in days)
- CDR type (Oracle, Audiocodes, Broadsoft, probes only, ...)
- Stats retention (in days)
- Traffic type (business, residential, mixed)
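As an illustration of how these inputs drive the sizing, a rough estimate of the CDR partition usage can be computed. The 1 KB average CDR record size below is an assumption for illustration only; the real size depends on the CDR type.

```shell
# Back-of-the-envelope estimate of CDR storage needs.
# AVG_CDR_BYTES is an assumed average record size, not a Nemo constant.
CDRS_PER_DAY=500000
RETENTION_DAYS=30
AVG_CDR_BYTES=1024
TOTAL_GB=$(( CDRS_PER_DAY * RETENTION_DAYS * AVG_CDR_BYTES / 1024 / 1024 / 1024 ))
echo "Estimated CDR storage: ${TOTAL_GB} GB"   # prints "Estimated CDR storage: 14 GB"
```

Netaxis should always validate the final figures, since compression and per-type overhead change the result.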
The sizes shown in the table below are the minimum suggested.
Mount point | FS type | Host type | Minimum size | Description |
---|---|---|---|---|
/opt/nemo | ext4 | all | 5 GB | Nemo sw |
/var/log/nemo | ext4 | all | 5 GB | Nemo logs |
/data/db | ext4 | Central server | 50 GB | Mongodb database |
/data/cdr | ext4 | Central server | 20 GB | CDR collection (if applicable) |
/data/backup | ext4 | Central server | 50 GB | Backups of database and CDRs (if applicable) |
/data/traces | ext4 | probes | 50 GB | pcap traces |
/ | ext4 | all | 10 GB | Root filesystem |
TIP
/data/db, /data/cdr, and /data/backup are not necessary on the probe server(s). /data/traces is necessary only on probe server(s). The amount of disk space for the traces depends on the amount of traces captured with RTP, the length of the calls, and the codec (when RTP is captured). A larger partition is recommended.
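As an example of the LVM-based layout recommended above, a probe's /data/traces partition could be provisioned as follows. The device /dev/sdb and the volume group name vg_data are assumptions; adapt them to the actual system.

```shell
# Provision an LVM-backed /data/traces on a probe (example device: /dev/sdb).
pvcreate /dev/sdb                                # register the disk as a physical volume
vgcreate vg_data /dev/sdb                        # create a volume group on it
lvcreate -L 50G -n lv_traces vg_data             # carve out the 50 GB minimum
mkfs.ext4 /dev/vg_data/lv_traces                 # format with the required FS type
mkdir -p /data/traces
echo '/dev/vg_data/lv_traces /data/traces ext4 defaults 0 2' >> /etc/fstab
mount /data/traces
```

With LVM in place, the partition can later be grown with lvextend and resize2fs without reinstalling.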
Firewall configuration
The ports shown in the table below are used by Nemo software.
Source | Destination | Transport | Port | Protocol | Notes |
---|---|---|---|---|---|
Probe | Central Server | TCP | 11000 | Nemo | |
Probe | Central Server | TCP | 11001 | Nemo | |
Central Server | Probe | TCP | 8081 | Nemo | |
Central Server | Probe | TCP | 443 | https | |
SBC | Central Server | UDP | 514 | syslog | optional, Audiocodes CDRs only |
SBC | Central Server | TCP | 22 | sftp | optional, Oracle CDRs only |
SBC | Central Server | UDP | 1813 | radius | optional, Oracle CDRs only |
Broadsoft | Central Server | TCP | 21 | ftp | optional, Oracle and Broadsoft only |
Remote Access | Central Server | TCP | 22 | ssh/sftp | |
Remote Access | Central Server | TCP | 443 | https | http or https should be chosen; port can be customized |
Remote Access | Central Server | TCP | 8080 | http | http or https should be chosen; port can be customized |
Remote Access | Probe | TCP | 22 | ssh/sftp | |
IT system | Central Server | TCP | 8081 | http | optional, REST API |
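On systems protected by firewalld, the central server ports from the table above can be opened as follows. This is a sketch: the zone and the exact port list must be adapted to the deployment and to the customer's security policy.

```shell
# Open the Nemo central-server ports listed in the table above (firewalld).
for port in 11000/tcp 11001/tcp 443/tcp 8080/tcp 8081/tcp 22/tcp; do
    firewall-cmd --permanent --add-port="${port}"
done
firewall-cmd --reload
firewall-cmd --list-ports   # verify the ports are open
```

Optional collector ports (514/udp, 1813/udp, 21/tcp) should be added only when the corresponding CDR source is used.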
RPM Packages
The following RPM packages are required to perform the Nemo installation. Only the base packages are listed in this table; the dependencies will be installed through the normal RHEL installation tools.
Package | Certified version RHEL 7 | Certified version RHEL 8 | Notes |
---|---|---|---|
mongodb-org | 5.0.x | 5.0.x | |
mongodb-org-shell | 5.0.x | 5.0.x | |
mongodb-org-mongos | 5.0.x | 5.0.x | |
mongodb-org-database-tools-extra | 5.0.x | 5.0.x | |
mongodb-org-tools | 5.0.x | 5.0.x | |
mongodb-org-server | 5.0.x | 5.0.x | |
mongodb-mongosh | 5.0.x | 5.0.x | |
mongodb-org-database | 5.0.x | 5.0.x | |
mongodb-database-tools | 5.0.x | 5.0.x | |
pfring | 8.6.1 | 8.6.1 | required only in case of probes installation (standalone or hybrid deployment) |
pfring-dkms | 8.6.1 | 8.6.1 | required only in case of probes installation (standalone or hybrid deployment) |
ndpi | 4.8.0 | 4.8.0 | required only in case of probes installation (standalone or hybrid deployment) |
zeromq | 4.1.4-6 | 4.3.4-3 | required only in case of probes installation (standalone or hybrid deployment) |
TIP
Any version of mongodb 5.0.x is compatible with Nemo: the latest minor version can be installed.
Installation procedure
In the following sections, all the steps to install Nemo are described.
Preliminary step for any type of installation
The administrator must have root access to the Linux machine where the software must be installed.
- Install the following support packages:
RedHat 7:
yum install -y python36 net-tools wget vim screen man tcpdump at ntp rsync hdparm libxslt openssl libpcap pango cairo ansible pciutils glib2
RedHat 8:
dnf install -y python36 net-tools wget vim tmux man tcpdump at chrony rsync hdparm libxslt openssl libpcap pango cairo ansible-core pciutils glib2 pkg-config compat-openssl10
TIP
ntp can be replaced by chrony or any other time sync tool
In case SAML authentication is required, the following extra packages must be installed:
RedHat 7:
yum install -y xmlsec1-openssl xmlsec1 libtool-ltdl
RedHat 8:
dnf install -y xmlsec1-openssl xmlsec1 libtool-ltdl
In case LDAP authentication is required, the following extra packages must be installed:
RedHat 7:
yum install -y openldap openldap-clients
RedHat 8:
dnf install -y openldap openldap-clients
- Enable the autostart of the time synchronization server and the job scheduler
RedHat 7:
chkconfig ntpd on
chkconfig atd on
RedHat 8:
systemctl enable chronyd
systemctl enable atd
- Disable SELINUX by editing the file /etc/selinux/config
Example:
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
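The change above only takes effect at the next boot. If a reboot is not immediately possible, SELinux can additionally be switched to permissive mode for the running session:

```shell
# Apply immediately for the running system; the config file change
# takes care of subsequent boots.
setenforce 0
getenforce   # should now report "Permissive"
```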
- Configure the NTP synchronization
RedHat 7
Adapt the file /etc/ntp.conf. In particular, check the lines starting with server and, if necessary, adapt them with your NTP server IP address or FQDN. If the public servers can be used, no change is needed.
RedHat 8
If the public NTP servers can be used, no change is needed, as NTP pools are already listed in the chrony configuration file. Check that synchronization is active by listing the NTP pool sources in use, or by displaying the system clock performance:
chronyc sources
chronyc tracking
If the use of a public server is not allowed, edit the file /etc/chrony.conf and add a server directive with the IP address or FQDN of each NTP server.
Example:
server ntp.example.com
server 192.0.2.1
- Reboot the server/VM.
Preliminary steps for central server deployment
Files needed to complete this section:
Package | Certified version RHEL 7 | Certified version RHEL 8 | Notes |
---|---|---|---|
mongodb-org | 5.0.x | 5.0.x | |
mongodb-org-shell | 5.0.x | 5.0.x | |
mongodb-org-mongos | 5.0.x | 5.0.x | |
mongodb-org-database-tools-extra | 5.0.x | 5.0.x | |
mongodb-org-tools | 5.0.x | 5.0.x | |
mongodb-org-server | 5.0.x | 5.0.x | |
mongodb-mongosh | 5.0.x | 5.0.x | |
mongodb-org-database | 5.0.x | 5.0.x | |
mongodb-database-tools | 5.0.x | 5.0.x | |
zeromq | 4.1.4-6 | 4.3.4-3 | required only in case of probes installation (standalone or hybrid deployment) |
Transfer the above files to the server/VM.
Enable the EPEL repository so that dependencies can be fetched.
RedHat 7:
yum install epel-release
RedHat 8:
dnf install epel-release
- Install the zeromq packages:
RedHat 7:
yum install -y zeromq-*.rpm
RedHat 8:
dnf install -y zeromq-*.rpm
- Install MongoDB
Create a new file /etc/yum.repos.d/mongodb-org-5.0.repo with this content:
RedHat 7:
[mongodb-org-5.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/5.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://pgp.mongodb.com/server-5.0.asc
RedHat 8:
[mongodb-org-5.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/8/mongodb-org/5.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://pgp.mongodb.com/server-5.0.asc
INFO
Alternatively, the packages can be downloaded manually from https://repo.mongodb.org/yum/redhat/.
Install the packages:
RedHat 7:
yum install mongodb-org
RedHat 8:
dnf install mongodb-org
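After the installation, the installed MongoDB packages and the server version can be verified (per the TIP above, any 5.0.x version is acceptable):

```shell
# List the installed MongoDB packages and check the server version.
rpm -qa 'mongodb*'
mongod --version | head -1   # expect a line such as "db version v5.0.x"
```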
- Modify the /etc/mongod.conf file to use the directory /data/db to store its files (parameter dbPath)
Example:
sed -i 's#dbPath: .*#dbPath: /data/db#g' /etc/mongod.conf
- In the same file, modify the bindIp parameter. By default the DB listens on localhost only and thus can only accept local connections. If the DB should be reachable from other servers (such as probes or other Nemo servers), ensure that the DB listens on all addresses.
Example:
sed -i 's/bindIp: .*/bindIp: 0.0.0.0/g' /etc/mongod.conf
WARNING
As the DB will be reachable from anywhere, ensure that proper firewalling is set up at the customer site, or adapt the configuration of firewalld/iptables on the Linux system hosting the Nemo central server.
- Change the rights of /data/db so that it is accessible by mongodb.
Example:
chown mongod:mongod /data/db
cd /data/db/
chmod 755 ..
- Enable mongodb at startup.
RedHat 7:
chkconfig mongod on
RedHat 8:
systemctl enable mongod
Reboot
Verify that mongodb is running correctly, for example by launching the mongodb client.
Example:
# mongo
MongoDB shell version v5.0.24
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("71a4e893-944a-45d4-a460-38a1831c97ab") }
MongoDB server version: 5.0.24
================
Warning: the "mongo" shell has been superseded by "mongosh",
which delivers improved usability and compatibility. The "mongo" shell has been deprecated and will be removed in
an upcoming release.
For installation instructions, see
https://docs.mongodb.com/mongodb-shell/install/
================
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
https://community.mongodb.com
---
The server generated these startup warnings when booting:
2024-02-07T12:36:55.395+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
2024-02-07T12:36:55.396+00:00: You are running this process as the root user, which is not recommended
2024-02-07T12:36:55.396+00:00: This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning
---
>
INFO
If you encounter an issue, check the MongoDB log (in the directory /var/log/mongodb). It might happen that the locales were not correctly set during OS installation, which prevents MongoDB from starting. In this case, edit the file /etc/environment and set:
LANG=en_US.utf8
LC_CTYPE=en_US.utf8
Preliminary steps for a probe deployment
Files needed to complete this section:
Package | Certified version RHEL 7 | Certified version RHEL 8 | Notes |
---|---|---|---|
pfring | 8.6.1 | 8.6.1 | |
pfring-dkms | 8.6.1 | 8.6.1 | |
ndpi | 4.8.0 | 4.8.0 | |
zeromq | 4.1.4-6 | 4.3.4-3 | |
Transfer the above files to the server/VM.
The EPEL (Extra Packages for Enterprise Linux) repository must also be enabled, to be able to install packages which are not distributed with the base RedHat install (such as dkms, required for the PF_RING installation).
If the Linux system is connected to the internet:
yum install epel-release
- Install the zeromq packages:
RedHat 7:
yum install -y zeromq-*.rpm
RedHat 8:
dnf install -y zeromq-*.rpm
- Update the kernel
Example:
yum update kernel
Reboot
After the reboot, install the kernel-headers package so that the custom version of the pfring drivers for the NICs can be installed.
Example:
yum install kernel-headers
Reboot
Install the kernel-devel package.
Example:
yum install kernel-devel
Reboot
PF_RING is a collection of libraries and drivers which allows Nemo to read packets directly from the NIC, bypassing the kernel network stack. Install the provided pfring packages.
Example:
yum install -y pfring-*.rpm ndpi-*.rpm
WARNING
The installed package version must be the one indicated in this guide. If a wrong version of pfring has been installed, it must be removed and replaced by the one provided by Netaxis.
- Execute the following commands:
mkdir /etc/pf_ring
touch /etc/pf_ring/pf_ring.conf
touch /etc/pf_ring/pf_ring.start
systemctl enable pf_ring
- Reboot
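After the reboot, it is worth verifying that the pf_ring kernel module is loaded, since a wrong pfring installation is a common cause of capture problems later on:

```shell
# Verify that the pf_ring kernel module is loaded and exposed in /proc.
lsmod | grep pf_ring
cat /proc/net/pf_ring/info   # reports the PF_RING version, ring slots, etc.
```

If the module is missing, check that the pfring-dkms build succeeded for the running kernel.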
Nemo software installation
Files needed to complete this section:
Package | Version |
---|---|
nemo | 5.0.x |
Install the provided Nemo package on all the servers/VMs (both central servers and probes).
Example:
RedHat 7:
rpm -ihv nemo-5.0.0-1.el7.x86_64.rpm
RedHat 8:
rpm -ihv nemo-5.0.0-1.el8.x86_64.rpm
Post-installation for central servers
What is needed to complete this section:
Nemo licenses
- Initialize the mongodb.
mongo < /opt/nemo/scripts/mongodb-databases.js
Adapt the file /opt/nemo/etc/global.conf, following the section Global configuration file
Restart nemo service.
Example:
service nemo restart
- Enable nemo autostart.
systemctl enable nemo
Nemo licenses installation
Add licenses with the following command: /opt/nemo/bin/nemo-admin licenses add <license>
At least 2 licenses must be added: one license activates the GUI and is the same for all deployments; the second depends on the deployment. Contact Netaxis to get your licenses.
TIP
After the installation of the licenses, the GUI should be reachable. By default the GUI process listens on http://<ip address>:8080. If the GUI is not reachable, check that the Linux firewall is properly configured (firewalld, iptables, ...) or deactivated.
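Reachability of the GUI can also be checked from a remote host with curl (replace <ip address> with the central server address; an HTTP status code such as 200 or a redirect indicates the GUI process is up):

```shell
# Probe the GUI port from a remote host; prints only the HTTP status code.
curl -s -o /dev/null -w '%{http_code}\n' http://<ip address>:8080/
```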
Mongodb log rotation
Create the text file /etc/cron.d/nemo with the following content:
0 1 * * * mongod echo "db.adminCommand( { logRotate : 1 } )" | mongo
5 1 * * * mongod find /var/log/mongo* -type f -name "*.log.*" -mtime +3 -exec rm -f {} \;
The argument mtime +3 can be adapted to keep more than 3 days of logs.
Post-installation for probes
Adapt the file /opt/nemo/etc/global.conf, following the section Global configuration file. Particularly important are the settings in the section [database], pointing to the central servers, and in the section [CaptureEngine], to set up the capture interface.
Restart Nemo service.
Example:
service nemo restart
- Activate the auto startup of nemo service.
Example:
chkconfig nemo on
- Verify that the capture-probe process is running properly (do it several times and verify that the PID is always the same).
Example:
service nemo status
nemo.service - SYSV: Nemo GUI
Loaded: loaded (/etc/rc.d/init.d/nemo; bad; vendor preset: disabled)
Active: active (running) since Wed 2018-10-17 15:26:15 CEST; 31s ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/nemo.service
├─2231 /opt/nemo/bin/python /opt/nemo/bin/nemo-watchdog
├─2232 /opt/nemo/bin/python /opt/nemo/bin/nemo-health-monitor
├─2233 /opt/nemo/bin/python /opt/nemo/bin/nemo-capture-engine
└─2271 /opt/nemo/bin/nemo-capture-probe -i ens192 -d 78.110.197.121 -o
78.110.197.121 -sip -rtpStats -rtpCapture -rtpNotificationInterval 5m
-sipNotif...
- Check the logfile /var/log/nemo/capture-probe-<ifname>.log: it shows that the capture-probe is listening and reports the amount of UDP/TCP packets received.
Example:
tail -f capture-probe-ens192.log
2018-10-17 15:27:05.460 WARN (1173) TCP streams: 0 (0 sip, 0 denied) - segments: denied: 0, purged: 0, max purged: 0, avg purged: NaN
2018-10-17 15:27:05.461 WARN (1320) stats rates (/sec): ring: 0.0, non-RTP: 0.0, UDP: 0.0, TCP: 0.0 purged: 0.0 === tracked: calls: 0,
media: 0, TCP: 0 === processed: calls: 0, UDP: 0, TCP: 0 === ring: rx: 1, dropped: 0 (0.00 %)
TIP
If this is not the case, i.e. the capture-probe process restarts or logs warnings, the installation may not have been done properly and needs to be checked. An error at this stage often indicates a wrong installation of pfring.
- After having set up the probe(s), the probes' hostnames must be set in the Nemo GUI if the probes are not reachable through their configured hostnames. To do so, log in to the Nemo central server GUI as administrator (ask Netaxis for the default credentials), go to settings→system settings and, in the field hostname mappings, insert the following:
<hostname probe 1>,http://<ip address probe 1>:8081/;<hostname probe 2>,http://<ip address probe 2>:8081/;...;<hostname probe n>,http://<ip address probe n>:8081/
Where:
- <hostname probe x> is the hostname of each probe
- <ip address probe x> is the IP address of each probe
Example:
nemo-probe,http://10.1.0.230:8081/
Global configuration file
The file global.conf is located in the directory /opt/nemo/etc and it contains the configuration of the Nemo central server and probes.
Global.conf section descriptions
The file is divided into sections, each one with a specific function. Some of them should be customized to match the license and the desired setup.
Section gui
It contains the parameters of the GUI.
- run parameter must be set to yes if the GUI must run on the server (typically true for the central servers)
- user parameter must be set to root if the GUI uses a port lower than 1024. It can be set to a different user (for example nemo) in case a higher port is used (for example 8443)
- The following lines must be added if live tracing must be enabled:
liveTracing=True
liveTracing_orchestrators=<db address 1>;<db address 2>
Where <db address 1> and <db address 2> are the IP addresses where the orchestrators are running (one for each Nemo central server).
Section StatsEngine
It contains the configuration for the Stats engine.
- run parameter must be set to yes if the stats engine must run on the server (typically true for the central servers)
- user parameter must be set to nemo
- groups parameter must match the license. Possible values are: netnetsd, mediant, capture, sonus, netmatch, broadsoft, metaswitch, italtel. In case of multi-plugins deployment, the values must be comma-separated.
Section StatsGlobalEngine
It contains the configuration for the Stats global engine.
- run parameter must be set to yes if the stats global engine must run on the server (typically true for the central servers)
- user parameter must be set to nemo
Section ReportingEngine
It contains the configuration for the Reporting engine.
- run parameter must be set to yes if the reporting engine must run on the server (typically true for the central servers)
- user parameter must be set to nemo
Section CDRExportEngine
It contains the configuration for the CDR export engine.
- run parameter must be set to yes if the CDR export engine must run on the server (typically true for the central servers)
- user parameter must be set to nemo
Section StatsExportEngine
It contains the configuration for the Stats export engine.
- run parameter must be set to yes if the stats export engine must run on the server (typically true for the central servers)
- user parameter must be set to nemo
Section AnomaliesEngine
It contains the configuration for the Anomalies engine.
- run parameter must be set to yes if the anomalies engine must run on the server (typically true for the central servers)
- user parameter must be set to nemo
Section HealthMonitor
It contains the configuration for the health monitor.
- run parameter must be set to yes if the health monitor must run on the server (all the servers)
- user parameter must be set to root
- monitorDatabase should be set to yes to enable the purge of the old records in the database. It should be disabled on probes if monitoring is already performed on the central server.
- monitorFilesystems parameter is used to monitor and keep the filesystems clean. It is recommended to configure it both on the central server and the probes. Each filesystem to monitor is a set of 3 comma-separated parameters: path, warning threshold (in %) and danger threshold (in %). Suggested values: monitorFilesystems=/data/db,75,90;/data/cdr,75,90
- purgeTraces parameter is typically used on the probes, to clean up the directory which contains the traces. Automatic purging of traces is composed of 2 comma-separated parameters: path and target size in GB. The health monitor will delete the oldest directories under path to stay below the target size, as reported by the df Linux utility. Example: purgeTraces=/data/traces/,2
- purgeTracesSchedule parameter is typically used on the probes, to schedule the execution of the purge described at the previous point. It defines the hours at which traces are purged, as a set of comma-separated intervals (i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for daily from 05:00 to 05:59). On systems where capture traffic is high, the purge should be scheduled outside office hours to minimize the impact on real-time capture processing. Example: purgeTracesSchedule=2-4,12
Sections from QueueRunner to NetmatchSLECDRCollector
These sections must be enabled (run=yes) if the system must elaborate CDRs, depending on the license. By default, the collectors will not start: these configuration sections can be omitted.
- QueueRunner: Oracle CDRs via Radius
- SMXRCSCDRCollector: Oracle SMX CDRs
- SDCDRCSVCollector: Oracle CDRs via SFTP
- MediantCDRSyslogCollector: Audiocodes Mediant via syslog
- SonusCDRCSVCollector: Sonus SBC CSV CDRs
- BWCDRXMLCollector: Broadworks XML CDRs
- BWCDRCSVCollector: Broadworks CSV CDRs
- MetaswitchCDRXMLCollector: Metaswitch XML CDRs
- NetmatchSLECDRCollector: Italtel Netmatch CDRs
Section CaptureOrchestrator
It defines whether the server/VM acts as an orchestrator. Typically this is the central server when probes are also configured: in this case the run parameter must be set to run=yes. If the Nemo server is used only with CDRs, the run parameter should be set to run=no.
Section database
The parameter host defines the IP address(es) of the databases. Typically this is a single address, but multiple comma-separated addresses are possible when redundancy is activated. In that case, an extra parameter replicaSet must be added and set to nemo. Redundancy configuration is described in the section Database Replication.
Example with redundancy with 2 sites:
host=10.1.0.215,10.1.18.2
replicaSet=nemo
Example without redundancy:
host=127.0.0.1
WARNING
For Probes it is mandatory to put the IP address(es) of the nodes which contain the database(s). 127.0.0.1 is not valid.
Section plugins
The parameter groups defines which plugins must be activated, and it depends on the Nemo license installed. Possible values are: netnetsd, mediant, capture, sonus, netmatch, broadsoft, metaswitch. In case of multi-plugins deployment, the values must be comma-separated.
Section CaptureEngine
DANGER
Change these values only if you know what you are doing.
It defines from which interface and with which options the probe should capture the traffic. It must be configured for the probes only. The following options are possible:
- -autoRestartDropRateThreshold int: auto-restart drop rate threshold (%), 0 to disable (default 10)
- -c int: capture ring cluster id
- -containerINVITE: group SIP INVITE traces in container files (reduces the amount of capture files)
- -cpuProfile: CPU profiling
- -debugTCP: log TCP stream reassembly information
- -filterTCPPorts string: TCP ports ranges to process (i.e. 5060-5070, 5060-5060&5070-5070)
- -filterUDPPorts string: UDP ports ranges to process (i.e. 1024-65535, 5060-5060&10000-10499)
- -historyLevel string: history level (critical, error, warning, info, debug) (default info)
- -i string: interface to read packets from (default en1)
- -liveMaxCalls int: maximum calls for live capture (default 1000)
- -liveMaxDuration string: maximum time duration for live capture (default 1m)
- -logLevel string: log level (critical, error, warning, info, debug) (default warning)
- -longDurationCalls int: long duration calls (secs) (default 7200)
- -longDurationCallsDumpInterval int: long duration calls dump interval (secs) (default 3600)
- -maxSipPerCall int: max SIP messages per call (loop detection) (default 100)
- -memProfile: memory profiling
- -memStatsInterval int: memory stats logging interval (default 300)
- -o string: capture orchestrator (multiple orchestrators can be &-separated) (default 127.0.0.1)
- -oci int: capture orchestrators check interval for alive status and elect (default 30)
- -optionsCapture: capture OPTIONS
- -oto int: capture orchestrator allowed time (seconds) between alive messages (default 7)
- -preferredCallingHeader string: preferred calling header (pai or from) (default pai)
- -resolveNAT: resolve hosted NAT traversal RTP streams
- -rtpCapture: capture RTP
- -rtpNotificationInterval string: RTP notification to orchestrator interval (i.e. 30s, 5m, 1h) (default 5m)
- -rtpProcessingOrchestration: enable RTP processing orchestration among different probes
- -rtpSiblings string: comma-separated list of sibling probes to use for RTP capture synchronization
- -rtpSiblingsBind string: address:port to bind to in order to receive messages from other probes (default 127.0.0.1:11003)
- -rtpStats: compute RTP stats
- -rtpVLANMatch: SIP/RTP VLAN matching (default true)
- -sip: capture SIP
- -sipINVITETimeout string: SIP INVITE session timeout (i.e. 30s, 5m, 1h) (default 24h)
- -sipNotificationInterval string: SIP notification to orchestrator interval (i.e. 30s, 5m, 1h) (default 5m)
- -sipUDPPorts string: UDP ports ranges to process for SIP signaling (i.e. 5060-5070, 5060-5061&5070-5070) (default 5060-5070)
- -tcpCloseConnections string: close pending TCP connections duration (default 5m)
- -tcpFlushPackets string: flush orphan TCP packets duration (default 1m)
TIP
The options and their values should be comma-separated. If more interfaces must be configured, each interface configuration must be separated by a semicolon.
Example with 4 capture interfaces on the probe, 2 orchestrators and 2 central servers:
probes=-i,p3p1,-o,145.219.189.110&145.219.189.111,-sip,-rtpStats,-rtpCapture;-i,p3p2,-o,145.219.189.110&145.219.189.111,-sip,-rtpStats,-rtpCapture,-logLevel,info;-i,p3p3,-o,145.219.189.110&145.219.189.111,-sip,-rtpStats,-rtpCapture;-i,p3p4,-o,145.219.189.110&145.219.189.111,-sip,-rtpStats,-rtpCapture
Example with 1 capture interface on the probe and a single orchestrator:
probes=-i,p3p1,-o,145.219.189.110,-sip,-rtpStats,-rtpCapture
Section watchdog
Leave the default value.
global.conf examples
In the following sections some examples of global.conf files are shown.
Redundant DB and two orchestrators (probes) and LDAP auth
[GUI]
run=yes
#user=nemo
liveTracing=True
liveTracing_orchestrators=10.1.0.215;10.1.18.2
[GUI.LDAP]
urlLDAP=ldaps://10.1.0.217
usernameMatch=(.*)
usernameSubstitution=uid=\1,ou=People,dc=netaxis,dc=nl
usernameFilterBaseDN=ou=People,dc=netaxis,dc=nl
usernameFilterMatch=(.*)
usernameFilterSubstitution=(&(uid=\1)(memberOf=cn=Oracle-Admin,ou=Groups,dc=netaxis,dc=nl))
[StatsEngine]
run=yes
user=nemo
groups=capture
[StatsGlobalEngine]
run=yes
user=nemo
[ReportingEngine]
run=yes
user=nemo
[CDRExportEngine]
run=yes
[StatsExportEngine]
run=yes
[AnomaliesEngine]
run=yes
user=nemo
[HealthMonitor]
run=yes
user=root
#Monitoring of database to purge old records. It should be disabled on
#probes if monitoring is already performed on central server.
monitorDatabase=yes
#Monitoring of different filesystems, semicolon-separated. Each
#filesystem to monitor is a set of 3 parameters comma-separated: path,
#warning threshold (in %) and danger threshold (in %).
monitorFilesystems=/data/db,75,90;/data/cdr,75,90
#Automatic purging of traces is composed of 2 parameters,
#comma-separated: path and target size in GB. The health monitor will
#delete oldest directories under path to stay below the target size, as
#reported by df Linux utility.
#purgeTraces=/data/traces/,2
#Hours schedule to purge traces as a set of comma-separated intervals
#(i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for
#daily from 05:00 to 05:59). On systems where capture traffic is high, the
#purge should be scheduled during out of office hours to minimize impact
#on real-time capture processing.
#purgeTracesSchedule=2-4,12
[QueueRunner]
run=no
[SMXRCSCDRCollector]
run=no
[SDCDRCSVCollector]
run=no
[MediantCDRSyslogCollector]
run=no
[NetmatchSLECDRCollector]
run=no
[CaptureOrchestrator]
run=yes
[CaptureEngine]
run=no
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the
#different options.
#probes=-i,eth1,-d,10.1.0.215,-o,10.1.0.215,-sip,-rtpStats,-rtpCapture,-rtpVLANMatch,false,-resolveNAT
[database]
host=10.1.0.215,10.1.18.2
replicaSet=nemo
[plugins]
groups=capture
[watchdog]
logLevel=10
Single central server with CDRs only (Oracle via sftp, Audiocodes via syslog)
[GUI]
run=yes
#user=nemo
[StatsEngine]
run=yes
user=nemo
groups=netnetsd,mediant
[StatsGlobalEngine]
run=yes
[ReportingEngine]
run=yes
user=nemo
[CDRExportEngine]
run=yes
[StatsExportEngine]
run=yes
[AnomaliesEngine]
run=yes
user=nemo
[HealthMonitor]
run=yes
user=root
#Monitoring of database to purge old records. It should be disabled on
#probes if monitoring is already performed on central server.
#monitorDatabase=yes
#Monitoring of different filesystems, semicolon-separated. Each
#filesystem to monitor is a set of 3 parameters comma-separated: path,
#warning threshold (in %) and danger threshold (in %).
monitorFilesystems=/data/db,75,90;/data/cdr,75,90
#Automatic purging of traces is composed of 2 parameters,
#comma-separated: path and target size in GB. The health monitor will
#delete oldest directories under path to stay below the target size, as
#reported by df Linux utility.
#purgeTraces=/data/traces/,500
#Hours schedule to purge traces as a set of comma-separated intervals
#(i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for
#daily from 05:00 to 05:59). On systems where capture traffic is high, the
#purge should be scheduled during out of office hours to minimize impact
#on real-time capture processing.
#purgeTracesSchedule=2-4,12
[QueueRunner]
run=no
[SMXRCSCDRCollector]
run=no
[SDCDRCSVCollector]
run=yes
[MediantCDRSyslogCollector]
run=yes
[NetmatchSLECDRCollector]
run=no
[CaptureOrchestrator]
run=no
[CaptureEngine]
run=no
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the
#different options.
#probes=-i,eth2,-d,10.100.0.9,-o,10.100.0.9,-sip,-rtpStats,-rtpCapture
[database]
host=127.0.0.1
[plugins]
groups=netnetsd,mediant
[watchdog]
logLevel=10
Hybrid deployment (two orchestrators/databases and Oracle CDRs via SFTP)
[GUI]
run=yes
user=nemo
liveTracing=True
liveTracing_orchestrators=145.219.189.110;145.219.189.111
[StatsEngine]
run=yes
user=nemo
groups=netnetsd
[ReportingEngine]
run=yes
user=nemo
[CDRExportEngine]
run=yes
[StatsExportEngine]
run=yes
[AnomaliesEngine]
run=yes
user=nemo
[HealthMonitor]
run=yes
user=root
#Monitoring of database to purge old records. It should be disabled on
#probes if monitoring is already performed on central server.
monitorDatabase=yes
#Monitoring of different filesystems, semicolon-separated. Each
#filesystem to monitor is a set of 3 parameters comma-separated: path,
#warning threshold (in %) and danger threshold (in %).
monitorFilesystems=/data/db,75,90;/data/cdr,75,90
#Automatic purging of traces is composed of 2 parameters,
#comma-separated: path and target size in GB. The health monitor will
#delete oldest directories under path to stay below the target size, as
#reported by df Linux utility.
#purgeTraces=/data/traces/,500
#Hours schedule to purge traces as a set of comma-separated intervals
#(i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for
#daily from 05:00 to 05:59). On systems where capture traffic is high, the
#purge should be scheduled during out of office hours to minimize impact
#on real-time capture processing.
#purgeTracesSchedule=1-5
[QueueRunner]
run=no
[SMXRCSCDRCollector]
run=no
[SDCDRCSVCollector]
run=yes
[MediantCDRSyslogCollector]
run=no
[NetmatchSLECDRCollector]
run=no
[CaptureOrchestrator]
run=yes
[CaptureEngine]
run=no
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the different options
#probes=-i,eth2,-d,10.100.0.9,-o,10.100.0.9,-sip,-rtpStats,-rtpCapture
[database]
host=145.219.189.110,145.219.189.111
[plugins]
groups=netnetsd
[watchdog]
logLevel=10
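The HealthMonitor list values used in the configuration above (monitorFilesystems, purgeTraces) follow a simple semicolon/comma layout. The sketch below only illustrates how such a value decomposes; it is not Nemo's actual parser:

```shell
# Illustrative only (not Nemo's parser): entries are separated by ";",
# and each entry is "path,warning%,danger%".
value='/data/db,75,90;/data/cdr,75,90'
echo "$value" | tr ';' '\n' | while IFS=',' read -r path warn danger; do
  echo "$path: warn at ${warn}%, danger at ${danger}%"
done
```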
Probe global.conf configuration (with 2 orchestrators, 1 capture interface)
[GUI]
run=no
user=nemo
[StatsEngine]
run=no
user=nemo
[ReportingEngine]
run=no
user=nemo
[CDRExportEngine]
run=no
[StatsExportEngine]
run=no
[AnomaliesEngine]
run=no
user=nemo
[HealthMonitor]
run=yes
user=root
#Monitoring of the database to purge old records. It should be disabled on probes if monitoring is already performed on the central server.
#monitorDatabase=yes
#Monitoring of different filesystems, semicolon-separated. Each filesystem to monitor is a set of 3 comma-separated parameters: path, warning threshold (in %) and danger threshold (in %).
#monitorFilesystems=/data/db,75,90;/data/cdr,75,90
#Automatic purging of traces is composed of 2 comma-separated parameters: path and target size in GB. The health monitor will delete the oldest directories under path to stay below the target size, as reported by the df Linux utility.
purgeTraces=/data/traces/,10
#Hours schedule to purge traces as a set of comma-separated intervals (i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for daily from 05:00 to 05:59). On systems where capture traffic is high, the purge should be scheduled during out-of-office hours to minimize the impact on real-time capture processing.
#purgeTracesSchedule=1-6
[QueueRunner]
run=no
[SMXRCSCDRCollector]
run=no
[SDCDRCSVCollector]
run=no
[MediantCDRSyslogCollector]
run=no
[NetmatchSLECDRCollector]
run=no
[CaptureOrchestrator]
run=no
[CaptureEngine]
run=yes
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the different options
probes=-i,eth1,-o,10.1.0.215&10.1.18.2,-sip,-rtpStats,-rtpCapture,-rtpVLANMatch=false,-resolveNAT,-logLevel,debug,-filterTCPPorts,5060-5090,-optionsCapture
[database]
host=10.1.0.215,10.1.18.2
#host=2607:fc30:100:1000::4
replicaSet=nemo
[plugins]
groups=capture
[watchdog]
logLevel=10
Optional configurations
Depending on the deployment, the administrator should proceed with some of the optional configurations mentioned in the next sections. Contact Netaxis if you are not sure which configurations apply.
Radius
If Nemo receives CDRs from the Oracle SBC via RADIUS, execute the following steps.
Files needed to complete this section:
Package | Certified version RHEL 7 | Certified version RHEL 8 |
---|---|---|
freeradius | 3.0.13-6 | 3.0.20-14 |
freeradius-nemo | 3.0.13-6 | 3.0.20-14 |
- Install freeradius on Nemo central server(s).
Example:
yum install -y freeradius-3.0.13-6.el7.centos.x86_64.rpm
yum install -y freeradius-nemo-3.0.13-6.el7.centos.x86_64.rpm
- Create the directory /data/cdr/active and change the owner to radiusd.
Example:
mkdir /data/cdr/active
chown radiusd:radiusd /data/cdr/active
- Set the QueueRunner in global.conf as described in section Global configuration file.
- Modify the file /etc/raddb/radiusd.conf, setting the parameter radacctdir to the directory previously created.
Example:
radacctdir = /data/cdr/active
- In the same file, increase the parameter max_attributes to 500.
Example:
max_attributes = 500
- Modify the file /etc/raddb/sites-available/default, commenting out the line detail and adding the line nemo just before it.
Example:
# Create a 'detail'ed log of the packets.
# Note that accounting requests which are proxied
# are also logged in the detail file.
#detail
nemo
# daily
# Update the wtmp file
#
# If you do
- Create the file /etc/raddb/mods-enabled/nemo and add the following content:
nemo {
maxFileSize = 1000000
}
- Edit the content of the file /etc/raddb/clients.conf with the information of the SBC which will send the data to Nemo.
client <NAS-ID> {
ipaddr = 10.0.65.19
secret = <secret>
shortname = ACME
nastype = other
}
Where <NAS-ID> and <secret> are the corresponding data configured in the account-config of the Oracle SBC.
- Restart the radius daemon.
Example:
service radiusd restart
- Configure the autostart of radius.
Example:
chkconfig radiusd on
- Restart Nemo.
Example:
service nemo restart
Enable CDR and/or metadata backup
To back up the CDRs (depending on the chosen deployment), execute the following steps.
- Edit the file /opt/nemo/etc/backup.conf, setting the parameter BACKUP_CDR to yes, ARCHIVED_CDR_PATH to /data/backup/cdr/ (or any other desired directory), and ARCHIVED_CDR_RETENTION to the number of days the administrator wants to keep the CDRs.
Example:
#backup CDRs (no|yes)
BACKUP_CDR=yes
#path to active CDR files
ACTIVE_CDR_PATH=/data/cdr/active/
#ignore all CDRs newer than this number of days
ACTIVE_CDR_RETENTION=0
#path to store backups
ARCHIVED_CDR_PATH=/data/backup/cdr/
#number of days of archived CDRs to keep
ARCHIVED_CDR_RETENTION=14
- If not yet done, create the directory where the backed-up CDRs will be stored.
Example:
mkdir -p /data/backup/cdr
- Add the following line to the file /etc/cron.d/nemo:
0 2 * * * radiusd /opt/nemo/scripts/nemo-backup-cdr
The user radiusd is needed only when the RADIUS protocol is used; otherwise the root user can be used.
TIP
Customize the cron schedule with the desired values.
To back up database metadata, set the parameters of the metadata section in the same file:
#backup metadata (no|yes)
BACKUP_METADATA=yes
#path to store backups
METADATA_BACKUP_PATH=/data/backup/db/metadata/
#backups to keep
METADATA_BACKUPS_COUNT=14
Then create the appropriate directory. There is no need to add another line to the /etc/cron.d/nemo file.
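The retention parameters above boil down to age-based deletion. The sketch below illustrates the effect of a 14-day retention on a scratch directory; the actual cleanup is performed by /opt/nemo/scripts/nemo-backup-cdr, not by this command:

```shell
# Scratch directory standing in for ARCHIVED_CDR_PATH.
mkdir -p /tmp/backup/cdr
touch -d '20 days ago' /tmp/backup/cdr/old.cdr   # older than the retention
touch /tmp/backup/cdr/new.cdr                    # within the retention
# Delete files older than 14 days, mirroring ARCHIVED_CDR_RETENTION=14.
find /tmp/backup/cdr -type f -mtime +14 -delete
ls /tmp/backup/cdr   # only new.cdr remains
```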
Proxy configuration
In case there is a proxy/load balancer in front of Nemo using an FQDN and the redirect is not working as expected (for example, web pages not available after logging in), edit the file /opt/nemo/gui/main.conf, changing/uncommenting the following lines:
tools.proxy.on = True
tools.proxy.base = '<hostname>'
Where <hostname> is the FQDN in use.
Prevent yum upgrade/update for special packages
The RPMs installed following this guide must not be updated automatically with commands like yum upgrade, because an updated version might not be compatible with the Nemo software. To prevent this, the administrator can, for example, edit the file /etc/yum.conf, adding the following lines at the end of the file:
# Exclude the following packages from updates
exclude=zeromq pfring zeromq-devel freeradius mongodb-org
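As a sanity check, the exclude line can be appended and verified on a scratch copy first (the real file is /etc/yum.conf). When a controlled upgrade of one of these packages is later needed, yum's --disableexcludes option can bypass the exclusion deliberately:

```shell
# Work on a scratch copy; on the real system edit /etc/yum.conf directly.
cp /etc/yum.conf /tmp/yum.conf 2>/dev/null || printf '[main]\n' > /tmp/yum.conf
echo 'exclude=zeromq pfring zeromq-devel freeradius mongodb-org' >> /tmp/yum.conf
# Verify the line is present.
grep '^exclude=' /tmp/yum.conf
```

On the real system, a deliberate upgrade would then be run as, for example, yum update mongodb-org --disableexcludes=main.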
HTTPs configuration
TIP
The examples below refer to a RedHat 7 installation. In case of a RedHat 8 installation, the commands in the examples may not work.
To activate HTTPs, execute the following steps.
- Generate a private key.
Example:
openssl genrsa -out /opt/nemo/gui/privkey.pem
- Generate a self-signed certificate or a CSR. In case of a CSR, it must be signed by a certificate authority.
Example (self-signed certificate, 10-year validity):
openssl req -new -x509 -days 3650 -key /opt/nemo/gui/privkey.pem -out /opt/nemo/gui/cert.pem
In case a CSR is generated, the signed certificate must be placed in the directory /opt/nemo/gui.
WARNING
Verify that the user nemo is able to read the pem files. For example:
chown nemo:nemo /opt/nemo/gui/privkey.pem
chown nemo:nemo /opt/nemo/gui/cert.pem
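Before restarting, it is worth checking that the certificate and the private key actually belong together. The sketch below generates a throwaway pair under /tmp for illustration; on the server, run the two modulus commands against the files in /opt/nemo/gui:

```shell
# Throwaway key/cert pair for illustration (-subj avoids interactive prompts).
openssl genrsa -out /tmp/privkey.pem 2048 2>/dev/null
openssl req -new -x509 -days 3650 -key /tmp/privkey.pem -subj "/CN=nemo.example" -out /tmp/cert.pem
# A certificate matches its key when the two moduli are identical.
key_mod=$(openssl rsa -noout -modulus -in /tmp/privkey.pem)
crt_mod=$(openssl x509 -noout -modulus -in /tmp/cert.pem)
[ "$key_mod" = "$crt_mod" ] && echo "certificate and key match"
```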
- Add/change/uncomment the following lines of the file /opt/nemo/gui/main.conf in the global section.
server.socket_port=<port>
server.ssl_module = 'builtin'
server.ssl_certificate = "/opt/nemo/gui/cert.pem"
server.ssl_private_key = "/opt/nemo/gui/privkey.pem"
Where:
- <port> is the port used to connect via the browser (to use a port lower than 1024, the GUI must run as root; see Section gui)
- /opt/nemo/gui/cert.pem is the certificate, either self-signed or signed
- /opt/nemo/gui/privkey.pem is the private key
- Restart Nemo.
Example:
service nemo restart
TIP
In case the administrator wants more control over the HTTPs configuration, the nginx software can be used. In this case /opt/nemo/gui/main.conf should be configured appropriately. Check also the note in the section Proxy configuration.
TIP
In case NGINX or any other proxy is used, in order to have the right source IP address in the audit.log file, the following option must be uncommented (remove the hash character):
#tools.proxy.on: True
and the nemo process must then be restarted.
Certificate renewal
To renew a certificate which is going to expire:
- Generate a CSR.
Example:
openssl req -new -key /opt/nemo/gui/privkey.pem -out cert.csr
where cert.csr is the certificate request to be signed by a certificate authority.
- Sign the CSR with a recognized certificate authority.
- Once the CSR is signed, transfer the signed certificate to the directory defined in the parameter server.ssl_certificate of the file /opt/nemo/gui/main.conf, as explained in HTTPs configuration.
Example:
server.ssl_certificate = "/opt/nemo/gui/cert.pem"
WARNING
In case nginx or any other proxy is used, please refer to the related manual to determine where the certificate must be copied, and if any configuration change is needed.
- Restart nemo.
Example:
systemctl restart nemo
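To know in advance when a renewal is due, the expiry date can be checked with openssl. A sketch, using a throwaway certificate under /tmp (on the server, point the commands at /opt/nemo/gui/cert.pem):

```shell
# Throwaway 30-day certificate for illustration.
openssl req -new -x509 -days 30 -newkey rsa:2048 -nodes \
  -keyout /tmp/key.pem -subj "/CN=nemo.example" -out /tmp/cert.pem 2>/dev/null
# Print the expiry date.
openssl x509 -noout -enddate -in /tmp/cert.pem
# Exit status 0 if the certificate is still valid in 7 days (604800 seconds).
openssl x509 -noout -checkend 604800 -in /tmp/cert.pem
```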
HTTPs hardening with NGINX
The HTTPs configuration described in the previous section provides limited customization in terms of ciphers, SSL protocol version, and other settings which can be crucial for the hardening of the system. If requested, NGINX (acting as a reverse proxy) can be configured by executing the steps described below.
TIP
The following steps are only an example of an nginx implementation. The Linux administrator is fully responsible for the nginx configuration, maintenance and updates.
- Install the nginx package and all its dependencies. The package is not delivered with Nemo, since it depends on the Linux version where Nemo is installed. It is suggested to install it using the yum command, to get the latest available version.
- Enable the autostart.
Example:
systemctl enable nginx
- Change the owner of the nginx log directory.
Example:
chown -R nginx:nginx /var/log/nginx
- Create an nginx config file for Nemo, /etc/nginx/conf.d/nemo.conf. An example file is shown below, but remember to customize the content depending on the server configuration.
server {
listen 80;
server_name 10.1.0.215;
return 301 https://10.1.0.215$request_uri;
}
server {
listen 443 ssl;
server_name 10.1.0.215;
access_log /var/log/nginx/nemo_access.log;
error_log /var/log/nginx/nemo_error.log notice;
ssl_certificate /opt/nemo/gui/cert.pem;
ssl_certificate_key /opt/nemo/gui/privkey.pem;
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
#ssl_stapling on;
#ssl_stapling_verify on;
gzip on;
gzip_min_length 2000;
gzip_proxied expired no-cache no-store private auth;
gzip_types *;
#Version Disclosure in server section
server_tokens off;
#enable security-Headers
add_header X-Content-Type-Options "nosniff";
add_header Referrer-Policy "no-referrer";
#STS
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
location / {
proxy_pass https://0.0.0.0:8443;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Where:
- server_name defines the IP address or the FQDN of the server (published on the DNS)
- return defines the address returned if an http request is made
- ssl_certificate defines the filename of the certificate, including the path
- ssl_certificate_key defines the filename of the private key, including the path
- ssl_ciphers defines the ciphers allowed to set up the connection
- proxy_pass defines the address where the Nemo GUI is listening
- Restart nginx.
Example:
systemctl restart nginx
With the configuration file above, Nemo keeps using port 8443 for the internal connection between Nemo and nginx, while port 443 is exposed to the external world; if a standard http connection is attempted, it is redirected to the secured connection on port 443. For nginx to work, the section HTTPs configuration must be applied first.
Database replication
It is possible to deploy the Nemo database on several (central) servers. In this case a replica set must be configured on mongoDB to set up replication between the different servers. The minimum configuration for a fully working replication is 2 central servers hosting the database, one primary and one secondary, plus an arbiter, a server/VM which decides the role of the other two nodes. Ideally the arbiter should always be running: if it is not, the two databases remain fully working, but a switchover could cause unexpected behavior. For this reason the server hosting the arbiter shouldn't be the same one hosting one of the 2 central servers. In practice, the most common situation is that the arbiter runs on one of the probes.
In case of a redundant deployment, the stats engine must run on only one of the central server instances. If the stats engine runs on both at the same time, the charts shown on the GUI will not be correctly calculated. In case of a database switchover, no manual action is required from the administrator. However, if the central server where the stats engine is active becomes unavailable for any reason, the stats engine must be manually enabled on the secondary central server until the main central server is back online, to keep the stats updated.
To configure an arbiter, the same mongodb packages as on a central server must be installed.
The following configuration should be applied.
On central servers
- Edit the file /etc/mongod.conf, changing the replication statement.
replication:
replSetName: nemo
- Restart mongod service.
Example:
service mongod restart
- Change /opt/nemo/etc/global.conf:
- in the database section, as below:
host=<ip address server 1>,<ip address server 2>,<ip address server 3>,...
replicaSet=nemo
Where <ip address server 1>… are the IP addresses of the servers hosting the database replicas.
- in the StatsEngine section of the central server elected as secondary:
[StatsEngine]
run=no
user=nemo
- on the primary central server, execute the following commands:
mongo
MongoDB shell version v4.4.19
rs.initiate({_id : "nemo", members: [{ _id: 0, host: "<ip address server 1>" },{ _id: 1, host: "<ip address server 2>" }, { _id: 2, host: "<ip address server 3>" }]})
Where <ip address server 1>… are the IP addresses of the servers hosting the database replicas.
- Restart nemo service.
Example:
service nemo restart
On probes
- Change /opt/nemo/etc/global.conf:
- in the database section, as in the example below:
host=<ip address server 1>,<ip address server 2>,<ip address server 3>,...
replicaSet=nemo
Where <ip address server 1>… are the IP addresses of the servers hosting the database replicas.
- in the CaptureEngine section, adding the IP addresses of all the central servers, similarly to the example below:
[CaptureEngine]
run=yes
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the different options
probes=-i,eth1,-o,<ip address server 1>&<ip address server 2>,-sip,-rtpStats,-rtpCapture,-rtpVLANMatch=false,-resolveNAT,-logLevel,debug,-filterTCPPorts,5060-5090
- Restart nemo service.
Example:
service nemo restart
On Arbiter
- Install MongoDB.
- Enable MongoDB on boot.
Example:
systemctl enable mongod
- Edit the file /etc/mongod.conf, changing the replication statement.
replication:
replSetName: nemo
- Restart mongod service.
Example:
service mongod restart
- Connect to the primary database and add the arbiter.
Example:
mongo
rs.addArb("<IP_ADDR>:27017")
Where <IP_ADDR> is the IP address of the arbiter.
TIP
The status of the replication can be verified with the command rs.status() within the mongo interface.
Example:
nemo:ARBITER> rs.status()
{
"set" : "nemo",
"date" : ISODate("2024-12-11T09:53:19.821Z"),
"myState" : 7,
"term" : NumberLong(848),
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"votingMembersCount" : 3,
"writableVotingMembersCount" : 2,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1733910795, 1),
"t" : NumberLong(848)
},
"lastCommittedWallTime" : ISODate("2024-12-11T09:53:15.184Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1733910795, 1),
"t" : NumberLong(848)
},
"appliedOpTime" : {
"ts" : Timestamp(1733910795, 1),
"t" : NumberLong(848)
},
"durableOpTime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastAppliedWallTime" : ISODate("2024-12-11T09:53:15.184Z"),
"lastDurableWallTime" : ISODate("1970-01-01T00:00:00Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1733910748, 1),
"members" : [
{
"_id" : 0,
"name" : "10.1.0.215:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1962,
"optime" : {
"ts" : Timestamp(1733910795, 1),
"t" : NumberLong(848)
},
"optimeDurable" : {
"ts" : Timestamp(1733910795, 1),
"t" : NumberLong(848)
},
"optimeDate" : ISODate("2024-12-11T09:53:15Z"),
"optimeDurableDate" : ISODate("2024-12-11T09:53:15Z"),
"lastAppliedWallTime" : ISODate("2024-12-11T09:53:15.184Z"),
"lastDurableWallTime" : ISODate("2024-12-11T09:53:15.184Z"),
"lastHeartbeat" : ISODate("2024-12-11T09:53:17.868Z"),
"lastHeartbeatRecv" : ISODate("2024-12-11T09:53:19.544Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1733908756, 1),
"electionDate" : ISODate("2024-12-11T09:19:16Z"),
"configVersion" : 6,
"configTerm" : 848
},
{
"_id" : 1,
"name" : "10.1.18.2:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1962,
"optime" : {
"ts" : Timestamp(1733910795, 1),
"t" : NumberLong(848)
},
"optimeDurable" : {
"ts" : Timestamp(1733910795, 1),
"t" : NumberLong(848)
},
"optimeDate" : ISODate("2024-12-11T09:53:15Z"),
"optimeDurableDate" : ISODate("2024-12-11T09:53:15Z"),
"lastAppliedWallTime" : ISODate("2024-12-11T09:53:15.184Z"),
"lastDurableWallTime" : ISODate("2024-12-11T09:53:15.184Z"),
"lastHeartbeat" : ISODate("2024-12-11T09:53:18.240Z"),
"lastHeartbeatRecv" : ISODate("2024-12-11T09:53:18.830Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncSourceHost" : "10.1.0.215:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 6,
"configTerm" : 848
},
{
"_id" : 2,
"name" : "10.1.0.237:27017",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 1967,
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 6,
"configTerm" : 848,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1
}
LDAP/AD login configuration
To activate the LDAP/AD login configuration, a new section GUI.LDAP must be added in the file /opt/nemo/etc/global.conf, and the following parameters must be defined:
Name | Description |
---|---|
urlLDAP | URL of LDAP/AD server |
usernameMatch | Regex to select the username used to login |
usernameSubstitution | Output of the previous regex, to match the search of the username |
usernameFilterBaseDN | baseDN to match the |
usernameFilterMatch | Regex to select the username used to match the filter (for example, to select the membership of a group) |
usernameFilterSubstitution | Filter to select the membership of a group, using the previous selection |
Notes:
- urlLDAP can be either ldap or ldaps. In case of ldaps, ensure the certificates of the ldap server have been imported; without them, the ldapsearch fails.
- usernameMatch and usernameSubstitution are used to find a match between the username used to log in and the username present in the ldap database. A typical use case is when the user wants to log in with the user part of the email address, while the ldap uses the full email address to authenticate the user.
- usernameFilterMatch is similar to usernameMatch, but used in case a filter should be applied. The typical use case of a filter is when the user must be authenticated only if the password is correct AND the user belongs to a given group.
- usernameFilterSubstitution is the filter itself, which uses the value captured by usernameFilterMatch.
- The username, after the filters mentioned above are applied, must also be configured locally in Nemo. The password can be anything, because the authentication is done by the LDAP server, but it is still necessary to correctly assign the access rights (elements, pages, charts, ...) to the user. The local password can still be used in case the LDAP connection is unavailable, so it must be chosen carefully.
A configuration example can be found in this section.
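As an illustration of the match/substitution mechanics for the typical email use case above, the effect can be reproduced with sed (the values are hypothetical examples, not Nemo defaults, and the regex dialect Nemo uses may differ slightly):

```shell
# Hypothetical usernameMatch / usernameSubstitution pair: the user logs in as
# "jdoe", while LDAP authenticates the full address "jdoe@example.com".
match='^([^@]+)$'
substitution='\1@example.com'
echo 'jdoe' | sed -E "s/${match}/${substitution}/"               # jdoe@example.com
# A login that is already a full address does not match and stays unchanged.
echo 'jdoe@example.com' | sed -E "s/${match}/${substitution}/"   # jdoe@example.com
```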
mongoDB password setup
The default installation does not set up any password on mongoDB: either the database is accessible only from the local host (in case of a standalone installation without probes), or, in case of a redundant installation with or without probes, it is suggested to use the internal Linux firewall to allow access only from the Nemo nodes. Nevertheless, it is possible to set up a password so that read/write access is allowed only after a successful authentication. The following procedures describe how to set up a password in case of a standalone or redundant configuration.
Standalone configuration
Log in on the Nemo server with root rights.
- Stop the Nemo processes.
systemctl stop nemo
2a. Log in to the mongo console.
mongo
MongoDB shell version v5.0.30
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5675abc9-727f-46bd-864a-efbafd2726f5") }
MongoDB server version: 5.0.30
>
2b. Switch to the admin database.
use admin
2c. Create the admin user with a password of your choice (in the example: nemo).
db.createUser(
{
user: "nemoadmin",
pwd: "nemo",
roles: [ { role: "userAdminAnyDatabase", db: "admin" },
{ role: "readWriteAnyDatabase", db: "admin" }
]
}
)
2d. Give root access to the admin database.
db.grantRolesToUser('nemoadmin', [{ role: 'root', db: 'admin' }])
- Enable authentication in the mongo config file /etc/mongod.conf:
sed -i 's/#security.*/security:\n\ authorization: enabled/g' /etc/mongod.conf
- Restart the mongod process.
systemctl restart mongod
- Edit the nemo configuration file /opt/nemo/etc/global.conf to add the authentication information.
Example:
sed -i '/\[database\]/ausername=nemoadmin\npassword=nemo' /opt/nemo/etc/global.conf
The section [database] should appear similar to this (the host could be different, depending on the configuration):
[database]
username=nemoadmin
password=nemo
host=127.0.0.1
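The sed insertion shown above can be exercised on a scratch copy first to confirm the resulting layout (the real file is /opt/nemo/etc/global.conf):

```shell
# Scratch copy with a minimal [database] section.
printf '[database]\nhost=127.0.0.1\n' > /tmp/global.conf
sed -i '/\[database\]/ausername=nemoadmin\npassword=nemo' /tmp/global.conf
cat /tmp/global.conf
```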
- Start the nemo processes.
systemctl restart nemo
- Edit the cron file /etc/cron.d/mongo to rotate logs, adding the authentication information. The first line of the file should look like the following example:
0 1 * * * mongod echo "db.adminCommand( { logRotate : 1 } )" | mongo -u nemoadmin -p nemo
Probe configuration adaptation
In case one or more probes are present, adapt the nemo configuration by repeating points 5 and 6 of the previous section on each probe.
Redundant configuration
In case of a redundant configuration, execute the following steps:
- Stop the Nemo processes on all the servers (central servers, and probes if any).
systemctl stop nemo
2a. Log in on one of the central servers as root, create a keyfile, and update the access rights and the owner:
Example:
openssl rand -base64 756 > /var/lib/mongo/mongo.keyfile
chmod 400 /var/lib/mongo/mongo.keyfile
chown mongod.mongod /var/lib/mongo/mongo.keyfile
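The access rights can be verified right away; the sketch below uses /tmp for illustration (on the servers the file is /var/lib/mongo/mongo.keyfile, owned by mongod):

```shell
# Generate a keyfile and restrict it, as done above.
openssl rand -base64 756 > /tmp/mongo.keyfile
chmod 400 /tmp/mongo.keyfile
# mongod refuses keyfiles that are readable by group or others.
stat -c '%a' /tmp/mongo.keyfile   # prints 400
```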
2b. Copy the mongo.keyfile just created to the other central server and to the arbiter, and set the correct rights and owner as at point 2a.
INFO
With keyfile authentication, each mongod instance in the replica set uses the contents of the keyfile as the shared password for authenticating other members in the deployment. Only mongod instances with the correct keyfile can join the replica set.
- Log in on both central servers as root, and verify that mongoDB is in a normal state, with a PRIMARY and a SECONDARY node. To do so, it is sufficient to enter the mongo CLI and check the prompt.
Example of a Primary node:
[root@NemoNL-A ~]# mongo
MongoDB shell version v5.0.30
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("d8323e84-7431-4fa0-8a40-8b4dfe207543") }
MongoDB server version: 5.0.30
nemo:PRIMARY>
Example of a Secondary node:
[root@NemoNL-B ~]# mongo
MongoDB shell version v5.0.30
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("ee1842a0-6323-4dc7-9b89-53ffff10bcac") }
MongoDB server version: 5.0.30
nemo:SECONDARY>
- Log in on the central server with the SECONDARY mongoDB, and shut it down.
use admin
db.shutdownServer()
exit
Repeat this step on the arbiter and, last, on the PRIMARY mongoDB.
WARNING
The primary must be the last member shut down to avoid potential rollbacks.
At the end of this step, all members of the replica set should be offline.
- Edit the mongod configuration file /etc/mongod.conf to add the keyfile.
Example:
sed -i 's/#security.*/security:\n\ authorization: enabled\n\ keyFile: \/var\/lib\/mongo\/mongo.keyfile/g' /etc/mongod.conf
The mongod.conf
file should appear similar to this:
[...]
# network interfaces
net:
port: 27017
bindIp: 0.0.0.0 # Listen on all interfaces; set to 127.0.0.1 to listen on the local interface only.
security:
authorization: enabled
keyFile: /var/lib/mongo/mongo.keyfile
#operationProfiling:
replication:
replSetName: nemo
[...]
- Restart mongoDB on all the nodes: PRIMARY, ARBITER, SECONDARY.
Example:
systemctl restart mongod
7a. Log in to the mongo console of the PRIMARY mongoDB.
mongo
MongoDB shell version v5.0.30
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("d8323e84-7431-4fa0-8a40-8b4dfe207543") }
MongoDB server version: 5.0.30
nemo:PRIMARY>
7b. Switch to the admin database.
use admin
7c. Create the admin user with a password of your choice (in the example: nemo).
db.createUser(
{
user: "nemoadmin",
pwd: "nemo",
roles: [ { role: "userAdminAnyDatabase", db: "admin" },
{ role: "readWriteAnyDatabase", db: "admin" }
]
}
)
7d. Give root access to the admin database.
db.grantRolesToUser('nemoadmin', [{ role: 'root', db: 'admin' }])
7e. After the creation of the user, try to authenticate and verify that the output is 1.
Example:
nemo:PRIMARY> db.getSiblingDB("admin").auth("nemoadmin", "nemo")
1
- Edit the cron file /etc/cron.d/mongo on the Primary and Secondary servers to rotate logs, adding the authentication information. The first line of the file should look like the following example:
0 1 * * * mongod echo "db.adminCommand( { logRotate : 1 } )" | mongo -u nemoadmin -p nemo
- Edit the nemo configuration file /opt/nemo/etc/global.conf to add the authentication information on all the central servers and on the probes, if any.
Example:
sed -i '/\[database\]/ausername=nemoadmin\npassword=nemo' global.conf
The section [database] should appear similar to this (the host could be different, depending on the configuration):
[database]
username=nemoadmin
password=nemo
host=10.1.0.215,10.1.18.2
replicaSet=nemo
- Restart the nemo processes.
systemctl restart nemo