Introduction

This document guides the administrator through the installation of Nemo version 4.

Prerequisites

Some prerequisites must be met to have a successful installation.

  • At least 1 VM/Server for the Nemo central server must be available
  • At least 1 VM/Server for the Nemo probe must be available, if applicable
  • On all VMs, Linux RedHat 7 or CentOS 7 must be installed for compatibility with all the installation types. In some cases RedHat 8 and derivatives can be used. If the customer has purchased Netaxis support for the Linux system, RedHat is mandatory.
  • Nemo probe must have at least 2 network interfaces, one for management and one to receive the mirrored traffic
  • The partition layout depends on the amount of data that the administrator wants to hold on Nemo. See the next section for an example.

WARNING

Nemo 4 can be installed on RedHat8/Alma8/Centos8/..., but these versions are NOT compatible with a deployment which includes Probe(s) or the Oracle Radius plugin. On these versions Nemo 4 is certified ONLY with the Broadworks plugin.

Servers/VMs requirements

The servers/VMs requirements are highly dependent on the performance requested (for the probes, for example, it depends on the packets per second received, the bandwidth, the packet rate on each interface, and the maximum simultaneous calls for signaling/media/statistics). As a general reference, the administrator should consider 4 cores, a 2.4+ GHz clock speed and 8 GB of RAM.

Partition layout

The following partitions must be created on the Nemo central server and probes. Netaxis advises using LVM, to allow resizing if needed. The amount of disk space depends on the amount of data which the administrator wants to store.

Netaxis can deliver an estimation of the partitions' size if the following information is shared in advance:

  • Number of CDRs per day
  • CDR retention (in days)
  • CDR type (Oracle, Audiocodes, Broadsoft, probes only, ...)
  • Stats retention (in days)
  • Traffic type (business, residential, mixed)

The sizes shown in the table below are the minimum suggested.

Mount point | FS type | Host type | Minimum size | Description
/opt/nemo | ext4 | all | 5 GB | Nemo sw
/var/log/nemo | ext4 | all | 5 GB | Nemo logs
/data/db | ext4 | Central server | 50 GB | Mongodb database
/data/cdr | ext4 | Central server | 20 GB | CDR collection (if applicable)
/data/backup | ext4 | Central server | 50 GB | Backups of database and CDRs (if applicable)
/data/traces | ext4 | probes | 50 GB | pcap traces
/ | ext4 | all | 30 GB |

TIP

/data/traces is necessary only in case of probe/hybrid deployment. The amount of disk space for the traces depends on the amount of traces captured with RTP, the length of the calls, and the codec (when RTP is captured). A larger partition is recommended.
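
As an illustration only (the volume group name vg_nemo and the size are assumptions to adapt to the sizing above), a logical volume for /data/db could be created like this; the other mount points follow the same pattern:

# assumption: an LVM volume group named vg_nemo already exists
lvcreate -L 50G -n data_db vg_nemo
mkfs.ext4 /dev/vg_nemo/data_db
mkdir -p /data/db
mount /dev/vg_nemo/data_db /data/db    # and add a corresponding entry to /etc/fstab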

Firewall configuration

The ports shown in the table below are used by Nemo software.

Source | Destination | Transport | Port | Protocol | Notes
Probe | Central Server | TCP | 11000 | Nemo |
Probe | Central Server | TCP | 11001 | Nemo |
Central Server | Probe | TCP | 8081 | Nemo |
Central Server | Probe | TCP | 443 | https |
SBC | Central Server | UDP | 514 | syslog | optional, Audiocodes CDRs only
SBC | Central Server | TCP | 22 | sftp | optional, Oracle CDRs only
SBC | Central Server | UDP | 1813 | radius | optional, Oracle CDRs only
Broadsoft | Central Server | TCP | 21 | ftp | optional, Oracle and Broadsoft only
Remote Access | Central Server | TCP | 22 | ssh/sftp |
Remote Access | Central Server | TCP | 443 | https | http or https should be chosen; port can be customized
Remote Access | Central Server | TCP | 8080 | http | http or https should be chosen; port can be customized
Remote Access | Probe | TCP | 22 | ssh/sftp |
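
As a sketch only, assuming firewalld is in use, the ports listed above could be opened on the central server as follows; adapt the list to the actual deployment:

# probe connections towards the central server
firewall-cmd --permanent --add-port=11000/tcp --add-port=11001/tcp
# GUI access over http (or 443/tcp if https is used)
firewall-cmd --permanent --add-port=8080/tcp
# syslog / radius CDR collection, only if applicable
firewall-cmd --permanent --add-port=514/udp --add-port=1813/udp
firewall-cmd --reload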

RPM Packages

The following RPM packages are required to perform Nemo installation

Package | certified version | Notes
nemo | 4.x.x |
cyrus-sasl | 2.1.26-24 | dependencies of mongodb-4.4
cyrus-sasl-gssapi | 2.1.26-24 | dependencies of mongodb-4.4
cyrus-sasl-lib | 2.1.26-24 | dependencies of mongodb-4.4
cyrus-sasl-plain | 2.1.26-24 | dependencies of mongodb-4.4
mongodb-database-tools | 100.7.0 |
mongodb-org | 4.4.19-1 |
mongodb-org-database-tools-extra | 4.4.19-1 |
mongodb-org-mongos | 4.4.19-1 |
mongodb-org-server | 4.4.19-1 |
mongodb-org-shell | 4.4.19-1 |
mongodb-org-tools | 4.4.19-1 |
zeromq | 4.0.5-1 | required only in case of probes installation (standalone or hybrid deployment)
zeromq-devel | 4.0.5-1 | required only in case of probes installation (standalone or hybrid deployment)
pfring-dkms | 6.4.1-1143 | required only in case of probes installation (standalone or hybrid deployment)
pfring | 6.4.1-1143 | required only in case of probes installation (standalone or hybrid deployment)
freeradius | 3.0.13-6 | required only in case of Oracle Radius plugin deployment
freeradius-nemo | 3.0.13-6 | required only in case of Oracle Radius plugin deployment

TIP

Any version of mongodb 4.4.x is compatible with Nemo: the latest minor version can be installed.

WARNING

The packages listed above are for a RedHat 7 installation. In case of installation on RedHat 8, the packages and the versions could be different.

Installation procedure

In the following sections, all the steps to install Nemo are described.

Preliminary step for any type of installation

The administrator must have root access to the Linux machine where the software must be installed.

  1. Install the following packages:
# RedHat 7 / CentOS 7
yum install -y net-tools wget vim screen man tcpdump at ntp rsync hdparm libxslt openssl libpcap wireshark pango cairo ansible pciutils glib2
# RedHat 8 and derivatives
dnf install -y net-tools wget vim tmux man tcpdump at chrony rsync hdparm libxslt openssl libpcap wireshark pango cairo ansible-core pciutils glib2 pkg-config compat-openssl10

TIP

ntp can be replaced by chrony or any other time sync tool

In case SAML authentication is required, the following extra packages must be installed:

yum install -y xmlsec1-openssl xmlsec1 libtool-ltdl
dnf install -y xmlsec1-openssl xmlsec1 libtool-ltdl

In case LDAP authentication is required, the following extra packages must be installed:

yum install -y openldap openldap-clients
dnf install -y openldap openldap-clients
  1. Enable the autostart of time synchronization server and job scheduler
chkconfig ntpd on
chkconfig atd on
systemctl enable chronyd
systemctl enable atd
  1. Disable SELINUX, editing the document /etc/sysconfig/selinux

Example:

sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
  1. Configure the NTP synchronization

RedHat 7

Adapt the file /etc/ntp.conf. In particular, check the lines starting with server and, if needed, adapt them with your NTP server IP address or FQDN. If the public servers can be used, no change is needed.
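
For example, pointing ntpd to an internal NTP server (the addresses below are placeholders):

server ntp.example.com iburst
server 192.0.2.1 iburst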

Redhat 8

If the public NTP servers can be used, no change is needed, as the NTP pools are already listed in the chrony configuration file. Check that the synchronization is active by listing the NTP pool sources in use or by displaying the system's clock performance:

chronyc sources
chronyc tracking

If the use of a public server is not allowed, edit the file /etc/chrony.conf and add one server directive per NTP server, with its IP address or FQDN.

Example:

server ntp.example.com
server 192.0.2.1
  1. Reboot the server/VM.

Preliminary steps for Central server deployment

Files needed to complete this section:

Package | certified version | Notes
cyrus-sasl | 2.1.26-24 | dependencies of mongodb-4.4
cyrus-sasl-gssapi | 2.1.26-24 | dependencies of mongodb-4.4
cyrus-sasl-lib | 2.1.26-24 | dependencies of mongodb-4.4
cyrus-sasl-plain | 2.1.26-24 | dependencies of mongodb-4.4
mongodb-database-tools | 100.7.0 |
mongodb-org | 4.4.19-1 |
mongodb-org-database-tools-extra | 4.4.19-1 |
mongodb-org-mongos | 4.4.19-1 |
mongodb-org-server | 4.4.19-1 |
mongodb-org-shell | 4.4.19-1 |
mongodb-org-tools | 4.4.19-1 |
zeromq | 4.0.5-1 | required only in case of probes installation (standalone or hybrid deployment)
zeromq-devel | 4.0.5-1 | required only in case of probes installation (standalone or hybrid deployment)

TIP

Any version of mongodb 4.4.x is compatible with Nemo: the latest minor version can be installed.

WARNING

The packages listed above are for a RedHat 7 installation. In case of installation on RedHat 8, the packages and the versions could be different.

  1. Transfer the above files to the server/VM.

  2. Install the following zeromq packages with yum commands, in the right order.

Example:

rpm -Uvh zeromq-*
  1. Install mongodb files.

Example:

yum install cyrus*
yum install mongodb*4.4.19-1.el7.x86_64.rpm
  1. Modify the /etc/mongod.conf file to use the directory /data/db to store its files (parameter dbPath)

Example:

sed -i 's#dbPath: .*#dbPath: /data/db#g' /etc/mongod.conf
  1. Modify, in the same file as the previous point, the bindIp parameter. By default the DB listens on localhost only and thus can only accept local connections. If the DB should be reachable from other servers (such as probes or other Nemo servers), ensure that the DB listens on all addresses.

Example:

sed -i 's/bindIp: .*/bindIp: 0.0.0.0/g' /etc/mongod.conf

WARNING

As the DB will be reachable from anywhere, ensure that proper firewalling is set up at customer site or adapt the configuration of firewalld/iptables on the Linux system hosting Nemo central server
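
As a sketch only (assuming firewalld is in use and a probe at 10.1.0.230, the probe address used elsewhere in this guide), access to the MongoDB port could be restricted like this:

# allow MongoDB (27017/tcp) only from the probe, keep it closed for everyone else
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.1.0.230/32" port port="27017" protocol="tcp" accept'
firewall-cmd --reload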

  1. Change the rights of /data/db to be accessible by mongodb.

Example:

chown mongod:mongod /data/db
cd /data/db/
# make the parent directory /data traversable as well
chmod 755 ..
  1. Enable mongodb at startup.
chkconfig mongod on
systemctl enable mongod
  1. Reboot

  2. Verify that mongodb is running correctly, for example by launching the mongodb client (mongo).

Example:

mongo
MongoDB shell version v4.4.19
connecting to:
mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("2adf4095-98ba-42af-b65c-b42e5064fec7") }
MongoDB server version: 4.4.19
---
The server generated these startup warnings when booting:
2023-03-13T14:38:40.665+01:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
2023-03-13T14:38:40.666+01:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
2023-03-13T14:38:40.666+01:00: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. We suggest setting it to 'never'
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you and anyone you share the URL with. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

TIP

If the mongo command is executed for the first time, an error message such as E - [main] Error loading history file: FileOpenFailed: Unable to fopen() file /root/.dbshell: No such file or directory can be displayed. It is due to the fact that the history file does not exist yet. This error can be ignored and should not appear the next time the command is executed.

If you encounter an issue, check the MongoDB log (in the directory /var/log/mongodb). It might happen that the locales were not correctly set during OS installation, which prevents MongoDB from starting. In this case, edit the file /etc/environment and set:

LANG=en_US.utf8
LC_CTYPE=en_US.utf8

Preliminary steps for a probe deployment

Files needed to complete this section:

Package | Certified version
zeromq | 4.0.5-1
zeromq-devel | 4.0.5-1
pfring-dkms | 6.4.1-1143
pfring | 6.4.1-1143

  1. Transfer the above files to the server/VM.

  2. Install the following zeromq packages with yum commands, in the right order.

Example:

yum install -y zeromq-4.0.5-1.el7.centos.x86_64.rpm
yum install -y zeromq-devel-4.0.5-1.el7.centos.x86_64.rpm
  1. Another repository, EPEL (Extra Packages for Enterprise Linux), must be activated to be able to install packages which are not distributed with the base RedHat 7 install (such as dkms, required for the PF_RING installation).

If the Linux server is connected to the internet:

yum install epel-release

If the server is not connected to the internet, download the package from a Linux machine that has internet access:

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

And then, transfer and install it:

yum install epel-release-latest-7.noarch.rpm
  1. Update the kernel

Example:

yum update kernel
  1. Reboot

  2. After reboot, install the kernel-headers so that the custom version of pfring drivers for the NICs can be installed.

Example:

yum install kernel-headers
  1. Reboot

  2. Install the kernel-devel package.

Example:

yum install kernel-devel
  1. Reboot

  2. PF_RING is a collection of libraries and drivers which allows Nemo to read packets directly from the NIC and bypass the kernel network stack. Install the pfring packages provided by Netaxis.

Example:

yum install -y pfring-dkms-6.4.1-1143.noarch.rpm
yum install -y pfring-6.4.1-1143.x86_64.rpm

WARNING

The installed package version must be the one indicated in this guide. If a wrong version of pfring has been installed, it must be removed and replaced by the one provided by Netaxis.

  1. Execute the following commands:
mkdir /etc/pf_ring
touch /etc/pf_ring/pf_ring.conf
touch /etc/pf_ring/pf_ring.start
chkconfig pf_ring on
  1. Reboot
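
Optionally, after the reboot, verify that the pf_ring kernel module is loaded, for example:

lsmod | grep pf_ring
# if PF_RING exposes its proc interface, more details are available with:
cat /proc/net/pf_ring/info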

Nemo software installation

Files needed to complete this section:

Package | Version
nemo | 4.x.x

Install the provided Nemo package on all the servers/VMs (both central servers and probes).

Example:

rpm -ihv nemo-4.1.0-0.1.x86_64.rpm

Post-installation for central servers

What is needed to complete this section:

Nemo licenses

  1. Initialize the mongodb.
mongo < /opt/nemo/scripts/mongodb-databases.js
  1. Adapt the file /opt/nemo/etc/global.conf, following the section Global configuration file

  2. Restart nemo service.

Example:

service nemo restart
  1. Enable nemo autostart.
chkconfig nemo on
systemctl enable nemo

Nemo licenses installation

Add licenses with the following command:

/opt/nemo/bin/nemo-admin licenses add <license>

At least 2 licenses must be added: one license activates the GUI and is the same for all deployments; the second depends on the deployment. Contact Netaxis to get your licenses.

TIP

After the installation of the licenses, the GUI should be reachable. By default the gui process is listening on http://<ip address>:8080. If the GUI is not reachable, check that the linux firewall is properly configured (firewalld, iptables, ...) or deactivated.
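
For example, to check that the GUI process is listening on the default port and, assuming firewalld is in use, to open it:

ss -tlnp | grep 8080
firewall-cmd --permanent --add-port=8080/tcp && firewall-cmd --reload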

Mongodb log rotation

Create the text file /etc/cron.d/nemo with the following content

0 1 * * * mongod echo "db.adminCommand( { logRotate : 1 } )" | mongo
5 1 * * * mongod find /var/log/mongo* -type f -name "*.log.*" -mtime +3 -exec rm -f {} \;

The argument mtime +3 can be adapted to keep more than 3 days of logs.

Post-installation for probes

  1. Adapt the file /opt/nemo/etc/global.conf, following the section Global configuration file. Particularly important are the section [database], which must point to the central servers, and the section [CaptureEngine], which sets up the capture interface.

  2. Restart Nemo service.

Example:

service nemo restart
  1. Activate the auto startup of nemo service.

Example:

chkconfig nemo on
  1. Verify that the capture-probe process is running properly (run the command several times and verify that the PID stays the same).

Example:

service nemo status
nemo.service - SYSV: Nemo GUI
Loaded: loaded (/etc/rc.d/init.d/nemo; bad; vendor preset: disabled)
Active: active (running) since Wed 2018-10-17 15:26:15 CEST; 31s ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/nemo.service
├─2231 /opt/nemo/bin/python /opt/nemo/bin/nemo-watchdog
├─2232 /opt/nemo/bin/python /opt/nemo/bin/nemo-health-monitor
├─2233 /opt/nemo/bin/python /opt/nemo/bin/nemo-capture-engine
└─2271 /opt/nemo/bin/nemo-capture-probe -i ens192 -d 78.110.197.121 -o
78.110.197.121 -sip -rtpStats -rtpCapture -rtpNotificationInterval 5m
-sipNotif...
  1. Check the logfile /var/log/nemo/capture-probe-<ifname>.log: it shows that the capture-probe is listening and reports the amount of UDP/TCP packets received.

Example:

tail -f /var/log/nemo/capture-probe-ens192.log

2018-10-17 15:27:05.460 WARN (1173) TCP streams: 0 (0 sip, 0 denied) - segments: denied: 0, purged: 0, max purged: 0, avg purged: NaN
2018-10-17 15:27:05.461 WARN (1320) stats rates (/sec): ring: 0.0, non-RTP: 0.0, UDP: 0.0, TCP: 0.0 purged: 0.0 === tracked: calls: 0,
media: 0, TCP: 0 === processed: calls: 0, UDP: 0, TCP: 0 === ring: rx: 1, dropped: 0 (0.00 %)

TIP

If this is not the case, i.e. the capture-probe process restarts or logs warnings, the installation may not have been done properly and needs to be checked. An error at this stage may indicate a wrong installation of pfring.

  1. After having set up the probe(s), the probes' hostnames must be mapped in the Nemo GUI if the probes are not reachable through their configured hostnames. To do so, log in to the Nemo Central server GUI as administrator (ask Netaxis for the default credentials), go to settings→system settings and in the field hostname mappings insert the following:
<hostname probe 1>,http://<ip address probe 1>:8081/;<hostname probe 2>,http://<ip address probe 2>:8081/;...;<hostname probe n>,http://<ip address probe n>:8081/

Where <hostname probe x> is the hostname of each probe and <ip address probe x> is its IP address.

Example:

nemo-probe,http://10.1.0.230:8081/

Global configuration file

The file global.conf is located in the directory /opt/nemo/etc and it contains the configuration of the Nemo central server and probes.

Global.conf sections descriptions

The file is divided into sections, each one with a specific function. Some of them should be customized to match the license and the desired deployment.

Section gui

It contains the parameters of the GUI.

  • run parameter must be set to 1 if the GUI must run on the server (typically true for the central servers)
  • user parameter must be set to root if the GUI uses a port lower than 1024. It can be set to a different user (for example nemo) if a higher port (for example 8443) is used
  • The following lines must be added if live tracing must be enabled:
liveTracing=True
liveTracing_orchestrators=<db address 1>;<db address 2>

Where <db address 1> <db address 2> are the IP addresses where the orchestrators are running (one for each nemo central server).
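
A minimal sketch of this section, reusing the two central-server addresses that appear in the examples later in this chapter:

[GUI]
run=1
#user=nemo
liveTracing=True
liveTracing_orchestrators=10.1.0.215;10.1.18.2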

Section StatsEngine

It contains the configuration for the Stats engine.

  • run parameter must be set to 1 if the stats engine must run on the server (typically true for the central servers)
  • user parameter must be set to nemo
  • groups parameter must match the license. Possible values are: netnetsd mediant capture sonus netmatch broadsoft metaswitch. In case of multi-plugins deployment, the values must be comma-separated.

Section StatsGlobalEngine

It contains the configuration for the Stats global engine.

  • run parameter must be set to 1 if the stats global engine must run on the server (typically true for the central servers)
  • user parameter must be set to nemo

Section ReportingEngine

It contains the configuration for the Reporting engine.

  • run parameter must be set to 1 if the reporting engine must run on the server (typically true for the central servers)
  • user parameter must be set to nemo

Section CDRExportEngine

It contains the configuration for the CDR export engine.

  • run parameter must be set to 1 if the CDR export engine must run on the server (typically true for the central servers)
  • user parameter must be set to nemo

Section StatsExportEngine

It contains the configuration for the Stats export engine.

  • run parameter must be set to 1 if the stats export engine must run on the server (typically true for the central servers)
  • user parameter must be set to nemo

Section AnomaliesEngine

It contains the configuration for the Anomalies engine.

  • run parameter must be set to 1 if the anomalies engine must run on the server (typically true for the central servers)
  • user parameter must be set to nemo

Section HealthMonitor

It contains the configuration for the health monitor.

  • run parameter must be set to 1 if the health monitor must run on the server (typically true for all the servers)
  • user parameter must be set to root
  • monitorDatabase should be set to yes to enable the purge of the old records in the database. It should be disabled on probes if monitoring is already performed on the central server.
  • monitorFilesystems parameter is used to monitor and keep the filesystem clean. It is recommended to configure it both on central server and probes. Each filesystem to monitor is a set of 3 parameters comma-separated: path, warning threshold (in %) and danger threshold (in %). Suggested values: monitorFilesystems=/data/db,75,90;/data/cdr,75,90
  • purgeTraces parameter is used typically on the probes, to cleanup the directory which contains the traces. Automatic purging of traces is composed of 2 parameters, comma-separated: path and target size in GB. The health monitor will delete oldest directories under path to stay below the target size, as reported by df Linux utility. Example: purgeTraces=/data/traces/,2
  • purgeTracesSchedule parameter is used typically on probes, to schedule the execution of the purge at the previous point. It defines hours schedule to purge traces as a set of comma-separated intervals (i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for daily from 05:00 to 05:59). On systems where capture traffic is high, the purge should be scheduled during out of office hours to minimize impact on real-time capture processing. Example: purgeTracesSchedule=2-4,12

Sections from QueueRunner to NetmatchSLECDRCollector

These sections must be enabled (run=1) if the system must be enabled to elaborate CDRs, depending on the license. By default, the collectors will not start: these configuration sections can be omitted.

  • QueueRunner is Oracle CDRs via Radius
  • SMXRCSCDRCollector is Oracle SMX CDRs
  • SDCDRCSVCollector is Oracle CDRs via SFTP
  • MediantCDRSyslogCollector is Audiocodes Mediant via syslog
  • SonusCDRCSVCollector is Sonus SBC CSV CDRs
  • BWCDRXMLCollector is Broadworks XML CDRs
  • BWCDRCSVCollector is Broadworks CSV CDRs
  • MetaswitchCDRXMLCollector is Metaswitch XML CDRs
  • NetmatchSLECDRCollector is Italtel Netmatch
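
For example, to enable only the Broadworks CSV collector (a sketch following the syntax of the full examples later in this chapter), add or adjust its section with run=1:

[BWCDRCSVCollector]
run=1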

Section CaptureOrchestrator

It defines whether the server/VM acts as an orchestrator. Typically this is the central server when probes are also configured: in this case the run parameter must be set to run=1. If the Nemo server is used only with CDRs, the run parameter should be set to run=0.

Section database

The parameter host defines the IP address(es) of the databases. Typically it is a single address, but multiple comma-separated addresses can be specified when redundancy is activated. In that case an extra parameter replicaSet must be added and set to nemo. The redundancy configuration is described in the section Database Replication.

Example with redundancy with 2 sites:

host=10.1.0.215,10.1.18.2
replicaSet=nemo

Example without redundancy:

host=127.0.0.1

WARNING

For Probes it is mandatory to put the IP address(es) of the nodes which contain the database(s). 127.0.0.1 is not valid.

Section plugins

The parameter group defines which plugin must be activated, and it is dependent on the Nemo license installed. Possible values are: netnetsd mediant capture sonus netmatch broadsoft metaswitch. In case of multi-plugins deployment, the values must be comma-separated.

Section CaptureEngine

DANGER

Change these values only if you know what you are doing.

It defines from which interface and with which options the probe should capture the traffic. It must be configured for the probes only. The following options are possible:

  • -autoRestartDropRateThreshold int auto-restart drop rate threshold (%), 0 to disable (default 10)
  • -c int capture ring cluster id
  • -containerINVITE group SIP INVITE traces container files (reduces the amount of capture files)
  • -cpuProfile CPU profiling
  • -debugTCP log TCP stream reassembly information
  • -filterTCPPorts string TCP ports ranges to process (i.e. 5060-5070, 5060-5060&5070-5070)
  • -filterUDPPorts string UDP ports ranges to process (i.e. 1024-65535, 5060-5060&10000-10499)
  • -historyLevel string history level (critical, error, warning, info, debug) (default info)
  • -i string Interface to read packets from (default en1)
  • -liveMaxCalls int maximum calls for live capture (default 1000)
  • -liveMaxDuration string maximum time duration for live capture (default 1m)
  • -logLevel string log level (critical, error, warning, info, debug) (default warning)
  • -longDurationCalls int long duration calls (secs) (default 7200)
  • -longDurationCallsDumpInterval int long duration calls dump interval (secs) (default 3600)
  • -maxSipPerCall int max SIP messages per call (loop detection) (default 100)
  • -memProfile memory profiling
  • -memStatsInterval int memory stats logging interval (default 300)
  • -o string capture orchestrator (multiple orchestrators can be &-separated)(default 127.0.0.1)
  • -oci int capture orchestrators - check interval for alive status and elect (default 30)
  • -optionsCapture capture OPTIONS
  • -oto int capture orchestrator allowed time (seconds) between alive messages (default 7)
  • -preferredCallingHeader string Preferred calling header (pai or from) (default pai)
  • -resolveNAT resolve hosted NAT traversal RTP streams
  • -rtpCapture capture RTP
  • -rtpNotificationInterval string RTP notification to orchestrator interval (i.e. 30s 5m 1h) (default 5m)
  • -rtpProcessingOrchestration enable RTP processing orchestration among different probes
  • -rtpSiblings string comma-separated list of sibling probes to use for RTP capture synchronization
  • -rtpSiblingsBind string address:port to bind to in order to receive messages from other probes (default 127.0.0.1:11003)
  • -rtpStats compute RTP stats
  • -rtpVLANMatch SIP/RTP VLAN matching (default true)
  • -sip capture SIP
  • -sipINVITETimeout string SIP INVITE session timeout (i.e. 30s 5m 1h) (default 24h)
  • -sipNotificationInterval string SIP notification to orchestrator interval (i.e. 30s 5m 1h) (default 5m)
  • -sipUDPPorts string UDP ports ranges to process for SIP signaling (i.e. 5060-5070, 5060-5061&5070-5070) (default 5060-5070)
  • -tcpCloseConnections string close pending TCP connections duration (default 5m)
  • -tcpFlushPackets string flush orphan TCP packets duration (default 1m)

TIP

The options and their values should be comma separated. If more interfaces should be configured, each interface configuration must be separated with a semicolon.

Example with 4 capture interfaces on the probe, 2 orchestrators and 2 central servers:

probes=-i,p3p1,-o,145.219.189.110&145.219.189.111,-sip,-rtpStats,-rtpCapture;-i,p3p2,-o,145.219.189.110&145.219.189.111,-sip,-rtpStats,-rtpCapture,-logLevel,info;-i,p3p3,-o,145.219.189.110&145.219.189.111,-sip,-rtpStats,-rtpCapture;-i,p3p4,-o,145.219.189.110&145.219.189.111,-sip,-rtpStats,-rtpCapture

Example with 1 capture interface on the probe and a single orchestrator:

probes=-i,p3p1,-o,145.219.189.110,-sip,-rtpStats,-rtpCapture

Section watchdog

Leave the default value.

global.conf examples

In the following sections some examples of global.conf files are shown.

Redundant DB and two orchestrators (probes) and LDAP auth

[GUI]
run=1
#user=nemo
liveTracing=True
liveTracing_orchestrators=10.1.0.215;10.1.18.2

[GUI.LDAP]
urlLDAP=ldaps://10.1.0.217
usernameMatch=(.*)
usernameSubstitution=uid=\1,ou=People,dc=netaxis,dc=nl
usernameFilterBaseDN=ou=People,dc=netaxis,dc=nl
usernameFilterMatch=(.*)
usernameFilterSubstitution=(&(uid=\1)(memberOf=cn=Oracle-Admin,ou=Groups,dc=netaxis,dc=nl))
[StatsEngine]
run=1
user=nemo
groups=capture
[StatsGlobalEngine]
run=1
user=nemo

[ReportingEngine]
run=1
user=nemo

[CDRExportEngine]
run=1

[StatsExportEngine]
run=1

[AnomaliesEngine]
run=1
user=nemo

[HealthMonitor]
run=1
user=root

#Monitoring of database to purge old records. It should be disabled on
#probes if monitoring is already performed on central server.
monitorDatabase=yes

#Monitoring of different filesystems, semicolon-separated. Each
#filesystem to monitor is a set of 3 parameters comma-separated: path,
#warning threshold (in %) and danger threshold (in %).
monitorFilesystems=/data/db,75,90;/data/cdr,75,90

#Automatic purging of traces is composed of 2 parameters,
#comma-separated: path and target size in GB. The health monitor will
#delete oldest directories under path to stay below the target size, as
#reported by df Linux utility.
#purgeTraces=/data/traces/,2

#Hours schedule to purge traces as a set of comma-separated intervals
#(i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for
#daily from 05:00 to 05:59). On system where capture traffic is high, the
#purge should be scheduled during out of office hours to minimize impact
#on real-time capture processing.
#purgeTracesSchedule=2-4,12

[QueueRunner]
run=0

[SMXRCSCDRCollector]
run=0

[SDCDRCSVCollector]
run=0

[MediantCDRSyslogCollector]
run=0

[NetmatchSLECDRCollector]
run=0

[CaptureOrchestrator]
run=1

[CaptureEngine]
run=0
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the different options
#probes=-i,eth1,-d,10.1.0.215,-o,10.1.0.215,-sip,-rtpStats,-rtpCapture,-rtpVLANMatch,false,-resolveNAT

[database]
host=10.1.0.215,10.1.18.2
replicaSet=nemo

[plugins]
groups=capture

[watchdog]
logLevel=10

Single central server with CDRs only (Oracle via sftp, Audiocodes via syslog)

[GUI]
run=1
#user=nemo

[StatsEngine]
run=1
user=nemo
groups=netnetsd,mediant

[StatsGlobalEngine]
run=1

[ReportingEngine]
run=1
user=nemo

[CDRExportEngine]
run=1

[StatsExportEngine]
run=1

[AnomaliesEngine]
run=1
user=nemo

[HealthMonitor]
run=1
user=root

#Monitoring of database to purge old records. It should be disabled on
#probes if monitoring is already performed on central server.
#monitorDatabase=yes

#Monitoring of different filesystems, semicolon-separated. Each
#filesystem to monitor is a set of 3 parameters comma-separated: path,
#warning threshold (in %) and danger threshold (in %).
monitorFilesystems=/data/db,75,90;/data/cdr,75,90

#Automatic purging of traces is composed of 2 parameters,
#comma-separated: path and target size in GB. The health monitor will
#delete oldest directories under path to stay below the target size, as
#reported by df Linux utility.
#purgeTraces=/data/traces/,500

#Hours schedule to purge traces as a set of comma-separated intervals
#(i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for
#daily from 05:00 to 05:59). On system where capture traffic is high, the
#purge should be scheduled during out of office hours to minimize impact
#on real-time capture processing.
#purgeTracesSchedule=2-4,12

[QueueRunner]
run=0

[SMXRCSCDRCollector]
run=0

[SDCDRCSVCollector]
run=1

[MediantCDRSyslogCollector]
run=1

[NetmatchSLECDRCollector]
run=0

[CaptureOrchestrator]
run=0

[CaptureEngine]
run=0
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the different options
#probes=-i,eth2,-d,10.100.0.9,-o,10.100.0.9,-sip,-rtpStats,-rtpCapture

[database]
host=127.0.0.1

[plugins]
groups=netnetsd,mediant

[watchdog]
logLevel=10

Hybrid deployment (two orchestrators/databases and Oracle CDRs via SFTP)

[GUI]
run=1
user=nemo
liveTracing=True
liveTracing_orchestrators=145.219.189.110;145.219.189.111

[StatsEngine]
run=1
user=nemo
groups=netnetsd

[ReportingEngine]
run=1
user=nemo

[CDRExportEngine]
run=1

[StatsExportEngine]
run=1

[AnomaliesEngine]
run=1
user=nemo

[HealthMonitor]
run=1
user=root

#Monitoring of database to purge old records. It should be disabled on
#probes if monitoring is already performed on central server.
monitorDatabase=yes

#Monitoring of different filesystems, semicolon-separated. Each
#filesystem to monitor is a set of 3 parameters comma-separated: path,
#warning threshold (in %) and danger threshold (in %).
monitorFilesystems=/data/db,75,90;/data/cdr,75,90

#Automatic purging of traces is composed of 2 parameters,
#comma-separated: path and target size in GB. The health monitor will
#delete oldest directories under path to stay below the target size, as
#reported by df Linux utility.
#purgeTraces=/data/traces/,500

#Hours schedule to purge traces as a set of comma-separated intervals
#(i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for
#daily from 05:00 to 05:59). On system where capture traffic is high, the
#purge should be scheduled during out of office hours to minimize impact
#on real-time capture processing.
#purgeTracesSchedule=1-5

[QueueRunner]
run=0

[SMXRCSCDRCollector]
run=0

[SDCDRCSVCollector]
run=1

[MediantCDRSyslogCollector]
run=0

[NetmatchSLECDRCollector]
run=0

[CaptureOrchestrator]
run=1

[CaptureEngine]
run=0
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the different options
#probes=-i,eth2,-d,10.100.0.9,-o,10.100.0.9,-sip,-rtpStats,-rtpCapture

[database]
host=145.219.189.110,145.219.189.111

[plugins]
groups=netnetsd

[watchdog]
logLevel=10

Probe global.conf configuration (with 2 orchestrators, 1 capture interface)

[GUI]
run=0
user=nemo

[StatsEngine]
run=0
user=nemo

[ReportingEngine]
run=0
user=nemo

[CDRExportEngine]
run=0

[StatsExportEngine]
run=0

[AnomaliesEngine]
run=0
user=nemo

[HealthMonitor]
run=1
user=root

#Monitoring of database to purge old records. It should be disabled on
#probes if monitoring is already performed on central server.
#monitorDatabase=yes

#Monitoring of different filesystems, semicolon-separated. Each
#filesystem to monitor is a set of 3 parameters comma-separated: path,
#warning threshold (in %) and danger threshold (in %).
#monitorFilesystems=/data/db,75,90;/data/cdr,75,90

#Automatic purging of traces is composed of 2 parameters,
#comma-separated: path and target size in GB. The health monitor will
#delete oldest directories under path to stay below the target size, as
#reported by df Linux utility.
purgeTraces=/data/traces/,10

#Hours schedule to purge traces as a set of comma-separated intervals
#(i.e. 2-4 for daily from 02:00 to 04:59) or single hours (i.e. 5 for
#daily from 05:00 to 05:59). On system where capture traffic is high, the
#purge should be scheduled during out of office hours to minimize impact
#on real-time capture processing.
#purgeTracesSchedule=1-6

[QueueRunner]
run=0

[SMXRCSCDRCollector]
run=0

[SDCDRCSVCollector]
run=0

[MediantCDRSyslogCollector]
run=0

[NetmatchSLECDRCollector]
run=0

[CaptureOrchestrator]
run=0

[CaptureEngine]
run=1
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the different options
probes=-i,eth1,-o,10.1.0.215&10.1.18.2,-sip,-rtpStats,-rtpCapture,-rtpVLANMatch=false,-resolveNAT,-logLevel,debug,-filterTCPPorts,5060-5090,-optionsCapture

[database]
host=10.1.0.215,10.1.18.2
#host=2607:fc30:100:1000::4
replicaSet=nemo

[plugins]
groups=capture

[watchdog]
logLevel=10

Optional configurations

Depending on the deployment, the administrator should proceed with some of the optional configurations mentioned in the next sections. Contact Netaxis if you are not sure of what configuration should be applicable.

Radius

If Nemo receives CDRs from Oracle SBC via radius, execute the following steps.

Files needed to complete this section:

Package | Certified version
freeradius | 3.0.13-6
freeradius-nemo | 3.0.13-6

  1. Install freeradius on Nemo central server(s).

Example:

yum install -y freeradius-3.0.13-6.el7.centos.x86_64.rpm
yum install -y freeradius-nemo-3.0.13-6.el7.centos.x86_64.rpm
  1. Create the directory /data/cdr/active and change the owner to radiusd.

Example:

mkdir /data/cdr/active
chown radiusd:radiusd /data/cdr/active
  1. Set the QueueRunner in global.conf as described in section Global configuration file.

  2. Modify the file /etc/raddb/radiusd.conf setting the parameter radacctdir as the directory previously created.

Example:

radacctdir = /data/cdr/active
  1. In the same file, increase the parameter max_attributes to 500.

Example:

max_attributes = 500
  1. Modify the file /etc/raddb/sites-available/default, commenting out the line detail and adding the line nemo just before it.

Example:

# Create a 'detail'ed log of the packets.
# Note that accounting requests which are proxied
# are also logged in the detail file.


#detail
nemo
# daily

# Update the wtmp file
#
# If you do
  1. Create the file /etc/raddb/mods-enabled/nemo and add the following content:
nemo {
maxFileSize = 1000000
}
  1. Edit the content of the file /etc/raddb/clients.conf with the information of the SBC which will send the data to Nemo.
client <NAS-ID> {
        ipaddr          = 10.0.65.19
        secret          = <secret>
        shortname       = ACME
        nastype         = other
}

Where <NAS-ID> and <secret> are the corresponding data configured in the account-config of Oracle SBC.

  1. Restart radius daemon

Example:

service radiusd restart
  1. Configure the autostart of radius.

Example:

chkconfig radiusd on
  1. Restart Nemo.

Example:

service nemo restart

Enable CDRs backup and/or metadata

To backup the CDRs (depending on the chosen deployment), execute the following steps.

  1. Edit the file /opt/nemo/etc/backup.conf, setting the parameter BACKUP_CDR to yes, ARCHIVED_CDR_PATH to /data/backup/cdr/ (or any other desired directory), and ARCHIVED_CDR_RETENTION to the number of days the administrator wants to keep the CDRs.

Example:

#backup CDRs (no|yes)
BACKUP_CDR=yes
#path to active CDR files
ACTIVE_CDR_PATH=/data/cdr/active/
#ignore all CDRs newer than this number of days
ACTIVE_CDR_RETENTION=0
#path to store backups
ARCHIVED_CDR_PATH=/data/backup/cdr/
#number of days of archived CDRs to keep
ARCHIVED_CDR_RETENTION=14
  1. If not yet done, create the directory where the backed up CDRs will be stored.

Example:

mkdir -p /data/backup/cdr
  1. Add the following line to the file /etc/cron.d/nemo:
0 2 * * * radiusd  /opt/nemo/scripts/nemo-backup-cdr

Use the radiusd user only if the Radius protocol is used; otherwise the root user can be used.

TIP

Customize the cron schedule with the desired values

To back up the database metadata, do the same for the metadata section:

#backup metadata (no|yes)
BACKUP_METADATA=yes
#path to store backups
METADATA_BACKUP_PATH=/data/backup/db/metadata/
#backups to keep
METADATA_BACKUPS_COUNT=14

And create the appropriate directory. There is no need to add another line to the /etc/cron.d/nemo file.
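
For example, assuming the default path above:

mkdir -p /data/backup/db/metadata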

Proxy configuration

In case there is a proxy/load balancer in front of Nemo and the redirect is not working as expected when using an FQDN (for example, web pages are not available after logging in), edit the file /opt/nemo/gui/main.conf, changing/uncommenting the following lines:

tools.proxy.on = True
tools.proxy.base = '<hostname>'

Where <hostname> is the FQDN in use.

Prevent yum upgrade/update for special packages

The RPMs installed following this guide must not be updated automatically with commands like yum upgrade, because an updated version could be incompatible with the Nemo software. To prevent this, the administrator can for example edit the file /etc/yum.conf, adding the following line at the end of the file to exclude these packages from updates:

exclude=zeromq pfring zeromq-devel freeradius mongodb

HTTPs configuration

TIP

The examples below are referring to a RedHat 7 installation. In case of a RedHat 8 installation, the commands in the examples may not work.

To activate HTTPs execute the following steps.

  1. Generate a private key.

Example:

openssl genrsa -out /opt/nemo/gui/privkey.pem
  1. Generate a self signed certificate or a CSR. In case of CSR, it must be signed by a certificate authority.

Example (self signed certificate, 10 years validity):

openssl req -new -x509 -days 3650 -key /opt/nemo/gui/privkey.pem -out /opt/nemo/gui/cert.pem

In case a CSR is generated, the signed certificate must be placed in the directory /opt/nemo/gui.

WARNING

Verify that the user nemo is able to read the pem files. For example:

chown nemo:nemo /opt/nemo/gui/privkey.pem
chown nemo:nemo /opt/nemo/gui/cert.pem
  1. Add/change/uncomment the following lines of the file /opt/nemo/gui/main.conf in global section.
server.socket_port=<port>

server.ssl_module = 'builtin'
server.ssl_certificate = "/opt/nemo/gui/cert.pem"
server.ssl_private_key = "/opt/nemo/gui/privkey.pem"

Where:

  • <port> is the port used to connect via the browser (to use a port lower than 1024, the GUI must run as root, see Section gui)
  • /opt/nemo/gui/cert.pem is the certificate, either self signed or signed
  • /opt/nemo/gui/privkey.pem is the private key
  1. Restart Nemo.

Example:

service nemo restart

TIP

In case the administrator wants to have more control on the HTTPs configuration, nginx software could be used. In this case /opt/nemo/gui/main.conf should be configured appropriately. Check also the note in the section Proxy configuration.

TIP

In case NGINX or any other proxy is used, in order to have the right source IP address in the audit.log file, the following option must be uncommented (remove the hash character):

#tools.proxy.on: True

and restart the nemo process.

Certificate renewal

To renew a certificate which is going to expire:

  1. generate a CSR

Example:

openssl req -new -key /opt/nemo/gui/privkey.pem -out cert.csr

where cert.csr is the certificate request to be signed by a certificate authority.

  1. Sign the csr with a recognized certificate authority

  2. once the csr is signed, transfer the signed certificate in the directory defined in the parameter server.ssl_certificate of the file /opt/nemo/gui/main.conf, as explained in HTTPs configuration.

Example:

server.ssl_certificate = "/opt/nemo/gui/cert.pem"

WARNING

In case nginx or any other proxy is used, please refer to the related manual to determine where the certificate must be copied, and if any configuration change is needed.

  1. restart nemo

Example:

systemctl restart nemo

HTTPs hardening with NGINX

The HTTPs configuration described in the previous section allows only limited customization in terms of ciphers, SSL protocol version, and other settings which can be crucial for the hardening of the system. If requested, NGINX (acting as a reverse proxy) can be configured by executing the steps described below.

TIP

The following steps are only an example of an nginx implementation. The Linux administrator is fully responsible for the nginx configuration, maintenance and updates.

  1. Install the nginx package and all its dependencies. The package is not delivered with Nemo, since it depends on the Linux version where Nemo is installed. It is suggested to install it with the yum command, in order to get the latest version available.

  2. Enable the autostart.

Example:

chkconfig nginx on
systemctl enable nginx
  1. Change the owner of the nginx log directory.

Example:

chown -R nginx:nginx /var/log/nginx
  1. Create an nginx config file for Nemo, /etc/nginx/conf.d/nemo.conf. An example file is shown below; remember to customize the content depending on the server configuration.
server {
    listen 80;
    server_name 10.1.0.215;
    return 301 https://10.1.0.215$request_uri;
}
server {
    listen 443 ssl;
    server_name 10.1.0.215;
    access_log  /var/log/nginx/nemo_access.log;
    error_log  /var/log/nginx/nemo_error.log notice;
    ssl_certificate    /opt/nemo/gui/cert.pem;
    ssl_certificate_key /opt/nemo/gui/privkey.pem;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    #ssl_stapling on;
    #ssl_stapling_verify on;
    gzip                on;
    gzip_min_length     2000;
    gzip_proxied        expired no-cache no-store private auth;
    gzip_types          *;

	#Version Disclosure in server section
    server_tokens off;


    #enable security-Headers
    add_header X-Content-Type-Options "nosniff";
    add_header Referrer-Policy "no-referrer";

    #handled by Nemo 4.1+ when option tools.secureheaders.on is True in main.conf, under section [/]
    #add_header X-Frame-Options "DENY";
    #add_header Content-Security-Policy "frame-ancestors 'none'";
    #add_header X-XSS-Protection "1; mode=block";
     
    #STS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

    location / {
        proxy_pass   https://0.0.0.0:8443;
        proxy_redirect     off;
        proxy_set_header   Host                 $host;
        proxy_set_header   X-Real-IP            $remote_addr;
        proxy_set_header   X-Forwarded-For      $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto    $scheme;
    }
}

Where:

  • server_name parameter defines the IP address or the FQDN of the server (published on the DNS)
  • return parameter defines the HTTPS address to which plain HTTP requests are redirected
  • ssl_certificate defines the filename of the certificate, including the path
  • ssl_certificate_key defines the filename of the private key, including the path
  • ssl_ciphers defines the ciphers allowed to set up the connection
  • proxy_pass defines the address where the Nemo GUI is listening

INFO

Starting from 4.1, some of these headers are already managed by Nemo and do not need to be added again by NGINX. To check that the headers are set as expected, it is possible to perform a test with curl. Example:

curl --insecure -v https://10.1.0.215/
  1. Restart nginx.

Example:

systemctl restart nginx

With the configuration file above, Nemo keeps using port 8443 for the internal connection between Nemo and nginx, while port 443 is exposed to the external world; plain HTTP connections are redirected to the secured 443 connection. For nginx to work, the section HTTPs configuration must be applied first.
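
For reference, a sketch of the corresponding global section of /opt/nemo/gui/main.conf (the same directives as in the HTTPs configuration section, with 8443 as the internal port):

server.socket_port=8443
server.ssl_module = 'builtin'
server.ssl_certificate = "/opt/nemo/gui/cert.pem"
server.ssl_private_key = "/opt/nemo/gui/privkey.pem"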

Database Replication

It is possible to deploy the Nemo database on several (central) servers. In this case a replica set must be configured on mongoDB to set up replication between the different servers. The minimum configuration for a fully working replication is 2 central servers hosting the database, one primary and one secondary, plus an arbiter, a server/VM which decides the role of the other two nodes. Ideally the arbiter must always be running: if it is not, the two databases remain fully working, but a switchover could cause unexpected behavior. For this reason the server which hosts the arbiter shouldn't be the same one which hosts one of the 2 central servers. In practice, the most common situation is that the arbiter is one of the probes.

In case of a redundant deployment, the stats engine must run only on one of the central server instances. If the stats engine runs on both at the same time, the charts shown on the GUI will not be calculated correctly. In case of a switchover of the database, no manual action is required from the administrator. If the central server where the stats engine is active becomes unavailable for any reason, the stats engine must be manually enabled on the secondary central server until the main central server is back online, to keep the stats updated.

To configure an arbiter, the same mongodb packages as for a central server must be installed.

Package | certified version | Notes
cyrus-sasl | 2.1.26-24 | dependencies of mongodb-4.4
cyrus-sasl-gssapi | 2.1.26-24 | dependencies of mongodb-4.4
cyrus-sasl-lib | 2.1.26-24 | dependencies of mongodb-4.4
cyrus-sasl-plain | 2.1.26-24 | dependencies of mongodb-4.4
mongodb-database-tools | 100.7.0 |
mongodb-org | 4.4.19-1 |
mongodb-org-database-tools-extra | 4.4.19-1 |
mongodb-org-mongos | 4.4.19-1 |
mongodb-org-server | 4.4.19-1 |
mongodb-org-shell | 4.4.19-1 |
mongodb-org-tools | 4.4.19-1 |

TIP

Any version of mongodb 4.4.x is compatible with Nemo: the latest minor version can be installed.

WARNING

The packages listed above are for a RedHat 7 installation. In case of installation on RedHat 8, the packages and the versions could be different.

The following configuration should be applied.

On central servers

  1. Edit the file /etc/mongod.conf, changing the replication statement.
replication:
  replSetName: nemo
  enableMajorityReadConcern: false
  1. Restart mongod service.

Example:

service mongod restart
  1. Change /opt/nemo/etc/global.conf.
  • in database section as below:
host=<ip address server 1>,<ip address server 2>,<ip address server 3>,...
replicaSet=nemo

Where <ip address server 1>… are the IP addresses of the servers hosting the database replicas.

  • in StatsEngine section of the central server elected as secondary
[StatsEngine]
run=0
user=nemo
  • on the primary central server execute the following commands
mongo
MongoDB shell version v4.4.19
rs.initiate({_id : "nemo", members: [{ _id: 0, host: "<ip address server 1>" },{ _id: 1, host: "<ip address server 2>" }, { _id: 2, host: "<ip address server 3>" }]})

Where <ip address server 1>… are the IP addresses of the servers hosting the database replicas.
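
For illustration only, assuming the two central servers used elsewhere in this guide (the arbiter is added later with rs.addArb()):

rs.initiate({_id : "nemo", members: [{ _id: 0, host: "10.1.0.215" },{ _id: 1, host: "10.1.18.2" }]})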

  1. Restart nemo service.

Example:

service nemo restart

On probes

  1. Change /opt/nemo/etc/global.conf.
  • in database section as in the example below:
host=<ip address server 1>,<ip address server 2>,<ip address server 3>,...
replicaSet=nemo

Where <ip address server 1>… are the IP addresses of the servers hosting the database replicas.

  • in CaptureEngine section, adding the IP addresses of all the central servers similarly to the example below:
[CaptureEngine]
run=1
#The command /opt/nemo/bin/nemo-capture-probe -h can be run to list the different options
probes=-i,eth1,-o,<ip address server 1>&<ip address server 2>,-sip,-rtpStats,-rtpCapture,-rtpVLANMatch=false,-resolveNAT,-logLevel,debug,-filterTCPPorts,5060-5090
  1. Restart nemo service.

Example:

service nemo restart

On Arbiter

  1. Install mongodb files.

Example:

yum install cyrus*
yum install mongodb*4.4.19-1.el7.x86_64.rpm
  1. Enable MongoDB on boot.

Example:

chkconfig mongod on
systemctl enable mongod
  1. Edit the file /etc/mongod.conf, changing the replication statement.
replication:
  replSetName: nemo
  enableMajorityReadConcern: false
  1. Restart mongod service.

Example:

service mongod restart
  1. Connect to the primary database and add the arbiter.

Example:

mongo
rs.addArb("<IP_ADDR>:27017")

Where <IP_ADDR> is the IP address of the arbiter.

TIP

The status of the replication can be verified with the command rs.status() within the mongo interface.

Example:

nemo:PRIMARY> rs.status()
{
	"set" : "nemo",
	"date" : ISODate("2022-01-28T12:23:51.930Z"),
	"myState" : 1,
	"term" : NumberLong(728),
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1643372631, 5),
			"t" : NumberLong(728)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1643372631, 5),
			"t" : NumberLong(728)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1643372631, 5),
			"t" : NumberLong(728)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1643372631, 5),
			"t" : NumberLong(728)
		}
	},
	"members" : [
		{
			"_id" : 0,
			"name" : "10.1.0.215:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 2150237,
			"optime" : {
				"ts" : Timestamp(1643372631, 5),
				"t" : NumberLong(728)
			},
			"optimeDate" : ISODate("2022-01-28T12:23:51Z"),
			"electionTime" : Timestamp(1643366546, 1),
			"electionDate" : ISODate("2022-01-28T10:42:26Z"),
			"configVersion" : 4,
			"self" : true
		},
		{
			"_id" : 1,
			"name" : "10.1.18.2:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 5960,
			"optime" : {
				"ts" : Timestamp(1643372631, 5),
				"t" : NumberLong(728)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1643372631, 5),
				"t" : NumberLong(728)
			},
			"optimeDate" : ISODate("2022-01-28T12:23:51Z"),
			"optimeDurableDate" : ISODate("2022-01-28T12:23:51Z"),
			"lastHeartbeat" : ISODate("2022-01-28T12:23:51.905Z"),
			"lastHeartbeatRecv" : ISODate("2022-01-28T12:23:51.624Z"),
			"pingMs" : NumberLong(0),
			"syncingTo" : "10.1.0.215:27017",
			"configVersion" : 4
		},
		{
			"_id" : 2,
			"name" : "10.1.0.237:27017",
			"health" : 1,
			"state" : 7,
			"stateStr" : "ARBITER",
			"uptime" : 2150215,
			"lastHeartbeat" : ISODate("2022-01-28T12:23:50.720Z"),
			"lastHeartbeatRecv" : ISODate("2022-01-28T12:23:51.133Z"),
			"pingMs" : NumberLong(0),
			"configVersion" : 4
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1643372631, 5),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1643372631, 5),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

LDAP/AD login configuration

To activate the LDAP/AD login configuration, a new section GUI.LDAP must be added in the file /opt/nemo/etc/global.conf, and the following parameters must be defined

Name | Description
urlLDAP | URL of the LDAP/AD server
usernameMatch | Regex to select the username used to login
usernameSubstitution | Output of the previous regex, to match the search of the username
usernameFilterBaseDN | baseDN used for the username filter search
usernameFilterMatch | Regex to select the username used to match the filter (for example to select the ownership of a group)
usernameFilterSubstitution | Filter to select the ownership of a group, using the previous selection

Notes:

  • urlLDAP can be either ldap or ldaps. In case of ldaps, ensure that the certificates of the LDAP server have been imported; without them, the ldapsearch fails.
  • usernameMatch and usernameSubstitution are used to find a match between the username used to login and the username present in the LDAP database. A typical use case is when the user logs in with the user part of the email address, while the LDAP server uses the full email address to authenticate the user.
  • usernameFilterMatch is similar to usernameMatch, but for the case where a filter should be applied. The typical use case of a filter is when the user must be authenticated only if the password is correct AND the user belongs to a given group.
  • usernameFilterSubstitution is the filter itself, which uses the value captured by usernameFilterMatch.
  • The username, after the filters mentioned above are applied, must also be configured locally in Nemo. The local password can be anything, because the authentication is done by the LDAP server, but the local user is still necessary to correctly assign the access rights (elements, pages, charts, ...) to the user. The local password can still be used in case the LDAP connection is unavailable, so it must be chosen carefully.

A configuration example can be found in this section.

mongoDB password setup

The default installation does not set any password on mongoDB: in a standalone installation without probes the database is accessible only from the local host, while in a redundant installation, with or without probes, it is suggested to use the internal Linux firewall to allow access only from the Nemo nodes. Nevertheless, it is possible to set up a password so that read/write access is allowed only after a successful authentication. The following procedures describe how to set up a password in case of a standalone or redundant configuration.

Standalone configuration

Login on the Nemo server with root rights

  1. stop Nemo processes
systemctl stop nemo

2a. log in to the mongo console

mongo
MongoDB shell version v4.0.28
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5675abc9-727f-46bd-864a-efbafd2726f5") }
MongoDB server version: 4.0.28
>

2b. switch to admin database

use admin

2c. create the admin user with a password of your choice (in the example: nemo)

db.createUser(
  {
    user: "nemoadmin",
    pwd: "nemo",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }
             { role: "readWriteAnyDatabase", db: "admin" }
    ]
  }
)
  3. enable the authentication in the mongo config file /etc/mongod.conf
sed -i 's/#security.*/security:\n\  authorization: enabled/g' /etc/mongod.conf
  4. restart the mongod process
systemctl restart mongod
  5. edit the Nemo configuration file /opt/nemo/etc/global.conf to add the authentication information. Example:
sed -i '/\[database\]/ausername=nemoadmin\npassword=nemo' /opt/nemo/etc/global.conf

The section [database] should appear similar to this (the host could be different, depending on the configuration):

[database]
username=nemoadmin
password=nemo
host=127.0.0.1
  6. start the nemo processes
systemctl restart nemo

Probe configuration adaptation

In case one or more probes are present, adapt the Nemo configuration by repeating points 5 and 6 of the previous section on each probe.

Redundant configuration

In case of a redundant configuration, execute the following steps:

  1. stop Nemo processes on all the servers (central servers, and probes if any)
systemctl stop nemo

2a. Log in on one of the central servers as root, create a keyfile, and update its access rights and owner:

Example:

openssl rand -base64 756 > /var/lib/mongo/mongo.keyfile
chmod 400 /var/lib/mongo/mongo.keyfile
chown mongod.mongod /var/lib/mongo/mongo.keyfile

2b. copy the mongo.keyfile just created to the other central server and to the arbiter, and set the correct rights and owner as at point 2a.

INFO

With keyfile authentication, each mongod instance in the replica set uses the contents of the keyfile as the shared password for authenticating the other members in the deployment. Only mongod instances with the correct keyfile can join the replica set.

  3. Log in on both central servers as root, and verify that mongoDB is in a normal state, with a PRIMARY and a SECONDARY node. To do so, it is sufficient to enter the mongo CLI and check the prompt.

Example of a Primary node:

[root@NemoNL-A ~]# mongo
MongoDB shell version v4.4.29
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("d8323e84-7431-4fa0-8a40-8b4dfe207543") }
MongoDB server version: 4.4.29
nemo:PRIMARY>

Example of a Secondary node:

[root@NemoNL-B ~]# mongo
MongoDB shell version v4.4.29
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("ee1842a0-6323-4dc7-9b89-53ffff10bcac") }
MongoDB server version: 4.4.29
nemo:SECONDARY>
  4. Log in on the central server hosting the SECONDARY mongoDB, and shut it down.
use admin
db.shutdownServer()
exit

Repeat this step on the arbiter, and as last, on the PRIMARY mongoDB.

WARNING

The primary must be the last member shut down to avoid potential rollbacks.

At the end of this step, all members of the replica set should be offline.

  5. Edit the mongod configuration file /etc/mongod.conf to add the keyfile

Example:

sed -i 's/#security.*/security:\n\  keyFile: \/var\/lib\/mongo\/mongo.keyfile/g' /etc/mongod.conf

The mongod.conf file should appear similar to this:

[...]
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Listen on all interfaces (set in the bindIp step of the central server installation).


security:
  keyFile: /var/lib/mongo/mongo.keyfile

#operationProfiling:

replication:
  replSetName: nemo
  enableMajorityReadConcern: false
[...]
  6. Restart mongoDB on all the nodes: PRIMARY, ARBITER, SECONDARY.

Example:

systemctl restart mongod

7a. log in to the mongo console of the PRIMARY mongoDB

mongo
MongoDB shell version v4.4.29
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("d8323e84-7431-4fa0-8a40-8b4dfe207543") }
MongoDB server version: 4.4.29
nemo:PRIMARY>

7b. switch to admin database

use admin

7c. Create the admin user with a password of your choice (in the example: nemo)

db.createUser(
  {
    user: "nemoadmin",
    pwd: "nemo",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }
             { role: "readWriteAnyDatabase", db: "admin" }
    ]
  }
)

7d. After the creation of the user, try to authenticate and verify that the output is 1

Example:

nemo:PRIMARY> db.getSiblingDB("admin").auth("nemoadmin", "nemo")
1
  8. Edit the Nemo configuration file /opt/nemo/etc/global.conf to add the authentication information on all the central servers and, if any, on the probes.

example:

sed -i '/\[database\]/ausername=nemoadmin\npassword=nemo' /opt/nemo/etc/global.conf

The section [database] should appear similar to this (the host could be different, depending on the configuration):

[database]
username=nemoadmin
password=nemo
host=10.1.0.215,10.1.18.2
replicaSet=nemo
  9. restart the nemo processes
systemctl restart nemo

Upgrade from release 3

Prior to version 4, Nemo could only support one plugin at a time. Starting from release 4, Nemo can support several plugin types simultaneously; therefore custom metrics, custom charts, stats export profiles, CDR export profiles and anomalies profiles must be updated in the DB to be typed according to the plugin type previously active in version 3.

To do so:

  1. Take note of the device type (group) active in the configuration with the following command:
grep -m1 groups /opt/nemo/etc/global.conf

Example: the device type is capture

grep -m1 groups /opt/nemo/etc/global.conf
groups=capture
  1. Execute the following commands on the VM/host where mongodb is running (in a redundant deployment, on the node where mongodb is primary), replacing <device-type> with the output retrieved at the previous point.
mongo nemo
db.getCollection('statsExportProfiles').update({}, {$set: {deviceType: '<device-type>'}}, false, true)
db.getCollection('cdrExportProfiles').update({}, {$set: {deviceType: '<device-type>'}}, false, true)
db.getCollection('anomaliesProfiles').update({}, {$set: {deviceType: '<device-type>'}}, false, true)
db.getCollection('metrics').update({}, {$set: {deviceType: '<device-type>'}}, false, true)
db.getCollection('metrics_charts').update({}, {$set: {deviceType: '<device-type>'}}, false, true)

Example:

mongo nemo
db.getCollection('statsExportProfiles').update({}, {$set: {deviceType: 'capture'}}, false, true)
db.getCollection('cdrExportProfiles').update({}, {$set: {deviceType: 'capture'}}, false, true)
db.getCollection('anomaliesProfiles').update({}, {$set: {deviceType: 'capture'}}, false, true)
db.getCollection('metrics').update({}, {$set: {deviceType: 'capture'}}, false, true)
db.getCollection('metrics_charts').update({}, {$set: {deviceType: 'capture'}}, false, true)
exit