Clearwater is an open-source IMS project that is widely used in industry to deploy virtual IMS systems.
This article walks through the manual installation steps for Clearwater.
Installation Requirements
- At least 7 VMs: one serves as the DNS server, and the remaining six correspond to Clearwater's six node types.
- Record each node's IP address for use in later configuration.
- Each VM must have a public IP and a private IP; depending on your environment, these two may be identical. They correspond to <publicIP> and <privateIP> below.
- Set an FQDN for each machine; if no FQDN is available, use the public IP instead. This corresponds to <hostname>.
- A DNS root zone in which to install your repository and the ability to configure records within that zone. This root zone will be referred to as <zone> below. In the DNS setup below, this zone is referred to as ims.hom.
Strongly recommended: once every VM's IP is confirmed, set up the DNS server first!
Setting up the DNS Server
Installation
To install BIND on Ubuntu:
sudo apt-get install bind9
Creating Zone Entry
To create an entry for your zone, edit the /etc/bind/named.conf.local
file to add a line of the following form, replacing <zone>
with your zone name.
zone "<zone>" IN { type master; file "/etc/bind/db.<zone>"; };
For example, if the zone is called example.com:
zone "example.com" IN { type master; file "/etc/bind/db.example.com"; };
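Since the zone name appears twice in the entry, it can help to generate the line from a shell variable to avoid typos. This is just a convenience sketch; example.com is the illustrative zone from above.

```shell
# Generate the named.conf.local zone entry for a given zone name.
# ZONE is the illustrative value from the example above.
ZONE="example.com"
ZONE_ENTRY="zone \"$ZONE\" IN { type master; file \"/etc/bind/db.$ZONE\"; };"
echo "$ZONE_ENTRY"
```

Appending the generated line to /etc/bind/named.conf.local still requires root, e.g. via sudo tee -a.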
Configuring Zone
If you followed the instructions above, the zone file for your zone is at /etc/bind/db.<zone>.
Next, configure the records in the DNS zone file. Below is the example provided by the official documentation; it defines multiple nodes per role, so if you do not need redundancy, keep just one record per role.
$TTL 5m ; Default TTL

; SOA, NS and A record for DNS server itself
@                 3600 IN SOA  ns admin ( 2014010800 ; Serial
                                          3600       ; Refresh
                                          3600       ; Retry
                                          3600       ; Expire
                                          300 )      ; Minimum TTL
@                 3600 IN NS   ns
ns                3600 IN A    1.0.0.1 ; IPv4 address of BIND server
ns                3600 IN AAAA 1::1    ; IPv6 address of BIND server

; bono
; ====
;
; Per-node records - not required to have both IPv4 and IPv6 records
bono-1            IN A    2.0.0.1 ; IPv4 address of bono
bono-2            IN A    2.0.0.2
bono-1            IN AAAA 2::1    ; IPv6 address of bono
bono-2            IN AAAA 2::2
;
; Cluster A and AAAA records - UEs that don't support RFC 3263 will simply
; resolve the A or AAAA records and pick randomly from this set of addresses.
@                 IN A    2.0.0.1
@                 IN A    2.0.0.2
@                 IN AAAA 2::1
@                 IN AAAA 2::2
;
; NAPTR and SRV records - these indicate a preference for TCP and then resolve
; to port 5060 on the per-node records defined above.
@                 IN NAPTR 1 1 "S" "SIP+D2T" "" _sip._tcp
@                 IN NAPTR 2 1 "S" "SIP+D2U" "" _sip._udp
_sip._tcp         IN SRV   0 0 5060 bono-1
_sip._tcp         IN SRV   0 0 5060 bono-2
_sip._udp         IN SRV   0 0 5060 bono-1
_sip._udp         IN SRV   0 0 5060 bono-2

; sprout
; ======
;
; Per-node records - not required to have both IPv4 and IPv6 records
sprout-1          IN A    3.0.0.1
sprout-2          IN A    3.0.0.2
sprout-1          IN AAAA 3::1
sprout-2          IN AAAA 3::2
;
; Cluster A and AAAA records - P-CSCFs that don't support RFC 3263 will simply
; resolve the A or AAAA records and pick randomly from this set of addresses.
sprout            IN A    3.0.0.1
sprout            IN A    3.0.0.2
sprout            IN AAAA 3::1
sprout            IN AAAA 3::2
;
; Cluster A and AAAA records - P-CSCFs that don't support RFC 3263 will simply
; resolve the A or AAAA records and pick randomly from this set of addresses.
scscf.sprout      IN A    3.0.0.1
scscf.sprout      IN A    3.0.0.2
scscf.sprout      IN AAAA 3::1
scscf.sprout      IN AAAA 3::2
;
; NAPTR and SRV records - these indicate TCP support only and then resolve
; to port 5054 on the per-node records defined above.
sprout            IN NAPTR 1 1 "S" "SIP+D2T" "" _sip._tcp.sprout
_sip._tcp.sprout  IN SRV   0 0 5054 sprout-1
_sip._tcp.sprout  IN SRV   0 0 5054 sprout-2
;
; NAPTR and SRV records for S-CSCF - these indicate TCP support only and
; then resolve to port 5054 on the per-node records defined above.
scscf.sprout      IN NAPTR 1 1 "S" "SIP+D2T" "" _sip._tcp.scscf.sprout
_sip._tcp.scscf.sprout IN SRV 0 0 5054 sprout-1
_sip._tcp.scscf.sprout IN SRV 0 0 5054 sprout-2
;
; Cluster A and AAAA records - P-CSCFs that don't support RFC 3263 will simply
; resolve the A or AAAA records and pick randomly from this set of addresses.
icscf.sprout      IN A    3.0.0.1
icscf.sprout      IN A    3.0.0.2
icscf.sprout      IN AAAA 3::1
icscf.sprout      IN AAAA 3::2
;
; NAPTR and SRV records for I-CSCF - these indicate TCP support only and
; then resolve to port 5052 on the per-node records defined above.
icscf.sprout      IN NAPTR 1 1 "S" "SIP+D2T" "" _sip._tcp.icscf.sprout
_sip._tcp.icscf.sprout IN SRV 0 0 5052 sprout-1
_sip._tcp.icscf.sprout IN SRV 0 0 5052 sprout-2

; homestead
; =========
;
; Per-node records - not required to have both IPv4 and IPv6 records
homestead-1       IN A    4.0.0.1
homestead-2       IN A    4.0.0.2
homestead-1       IN AAAA 4::1
homestead-2       IN AAAA 4::2
;
; Cluster A and AAAA records - sprout picks randomly from these.
hs                IN A    4.0.0.1
hs                IN A    4.0.0.2
hs                IN AAAA 4::1
hs                IN AAAA 4::2
;
; (No need for NAPTR or SRV records as homestead doesn't handle SIP traffic.)

; homer
; =====
;
; Per-node records - not required to have both IPv4 and IPv6 records
homer-1           IN A    5.0.0.1
homer-2           IN A    5.0.0.2
homer-1           IN AAAA 5::1
homer-2           IN AAAA 5::2
;
; Cluster A and AAAA records - sprout picks randomly from these.
homer             IN A    5.0.0.1
homer             IN A    5.0.0.2
homer             IN AAAA 5::1
homer             IN AAAA 5::2
;
; (No need for NAPTR or SRV records as homer doesn't handle SIP traffic.)

; ralf
; ====
;
; Per-node records - not required to have both IPv4 and IPv6 records
ralf-1            IN A    6.0.0.1
ralf-2            IN A    6.0.0.2
ralf-1            IN AAAA 6::1
ralf-2            IN AAAA 6::2
;
; Cluster A and AAAA records - sprout and bono pick randomly from these.
ralf              IN A    6.0.0.1
ralf              IN A    6.0.0.2
ralf              IN AAAA 6::1
ralf              IN AAAA 6::2
;
; (No need for NAPTR or SRV records as ralf doesn't handle SIP traffic.)

; ellis
; =====
;
; ellis is not clustered, so there's only ever one node.
;
; Per-node record - not required to have both IPv4 and IPv6 records
ellis-1           IN A    7.0.0.1
ellis-1           IN AAAA 7::1
;
; "Cluster"/access A and AAAA record
ellis             IN A    7.0.0.1
ellis             IN AAAA 7::1
Restarting
To restart BIND, run sudo service bind9 restart. Check /var/log/syslog for any error messages.
Client Configuration
Clearwater nodes need to know the identity of their DNS server. Ideally, this is achieved through DHCP. There are two main situations in which it might need to be configured manually.
- When DNS configuration is not provided via DHCP.
- When incorrect DNS configuration is provided via DHCP.
Either way, you must:
- create an /etc/dnsmasq.resolv.conf file containing the desired DNS configuration (probably just the single line nameserver <IP address>)
- add RESOLV_CONF=/etc/dnsmasq.resolv.conf to /etc/default/dnsmasq
- run service dnsmasq restart.
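The three steps above can be sketched as a few shell commands. This is only a sketch: the DNS server address 10.0.0.10 is an assumed example, and ETC defaults to a scratch directory so it can be dry-run; on a real node the files live under /etc and the commands need root.

```shell
# Sketch of the manual dnsmasq DNS configuration above.
# DNS_IP is an assumed example address of your BIND server;
# ETC defaults to a scratch directory so this can be dry-run without root.
DNS_IP="10.0.0.10"
ETC="${ETC:-/tmp/etc-sketch}"
mkdir -p "$ETC/default"

# 1. Point dnsmasq at the desired nameserver.
echo "nameserver $DNS_IP" > "$ETC/dnsmasq.resolv.conf"

# 2. Tell dnsmasq to read that file (on a real node the path is /etc/dnsmasq.resolv.conf).
echo "RESOLV_CONF=/etc/dnsmasq.resolv.conf" >> "$ETC/default/dnsmasq"

# 3. On a real node, restart dnsmasq afterwards (needs root):
# sudo service dnsmasq restart
```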
Your primary DNS server is now set up and ready to respond to DNS queries.
Configuring the Remaining Six VMs
Perform the following steps on each of the remaining six VMs to prepare them to become Clearwater nodes.
Configuring the APT software sources
The machines need to be configured so that APT can use the Clearwater repository server.
Project Clearwater
Under sudo, create /etc/apt/sources.list.d/clearwater.list
with the following contents:
deb http://repo.cw-ngv.com/stable binary/
Once this is created install the signing key used by the Clearwater server with:
curl -L http://repo.cw-ngv.com/repo_key | sudo apt-key add -
You should check the key fingerprint with:
sudo apt-key finger
The output should contain the following - check the fingerprint carefully.
pub 4096R/22B97904 2013-04-30
Key fingerprint = 9213 4604 DE32 7DF7 FEB7 2026 111D BE47 22B9 7904
uid Project Clearwater Maintainers <maintainers@projectclearwater.org>
sub 4096R/46EC5B7F 2013-04-30
Finishing up
Once the above steps have been performed, run the following to re-index your package manager:
sudo apt-get update
Configuring the inter-node hostnames/IP addresses
On every node, create the Clearwater inter-node configuration file
/etc/clearwater/local_config
local_ip=<privateIP>
public_ip=<publicIP>
public_hostname=<hostname>
etcd_cluster="<comma separated list of private IPs>"
Note that the etcd_cluster
variable should be set to a comma separated list that contains the private IP address of the nodes you created above. For example if the nodes had addresses 10.0.0.1 to 10.0.0.6, etcd_cluster
should be set to "10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6"
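As a sketch, the file for the first node of the example 10.0.0.1 to 10.0.0.6 deployment might be generated like this. The public_ip and public_hostname values are made-up illustrative values, and CFG defaults to a scratch path for a dry run; on a real node it would be /etc/clearwater/local_config.

```shell
# Sketch: generate local_config for one node of the example deployment above.
# public_ip and public_hostname are made-up illustrative values; on a real
# node CFG would be /etc/clearwater/local_config (written as root).
CFG="${CFG:-/tmp/local_config}"
ETCD_CLUSTER="10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6"
cat > "$CFG" <<EOF
local_ip=10.0.0.1
public_ip=203.0.113.1
public_hostname=node-1.example.com
etcd_cluster="$ETCD_CLUSTER"
EOF
cat "$CFG"
```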
In addition to the configuration above, the Sprout and Ralf nodes must also configure
/etc/chronos/chronos.conf
Fill in <privateIP> with the Sprout or Ralf node's own private IP.
[http]
bind-address = <privateIP>
bind-port = 7253
threads = 50
[logging]
folder = /var/log/chronos
level = 2
[alarms]
enabled = true
[exceptions]
max_ttl = 600
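The config above can be written with the private IP substituted in, for example like this. 10.0.0.2 is an assumed example IP, and CONF defaults to a scratch path for a dry run; the real path is /etc/chronos/chronos.conf.

```shell
# Sketch: write chronos.conf with the node's private IP filled in.
# PRIVATE_IP is an assumed example; on a real node CONF would be
# /etc/chronos/chronos.conf (written as root).
PRIVATE_IP="10.0.0.2"
CONF="${CONF:-/tmp/chronos.conf}"
cat > "$CONF" <<EOF
[http]
bind-address = $PRIVATE_IP
bind-port = 7253
threads = 50

[logging]
folder = /var/log/chronos
level = 2

[alarms]
enabled = true

[exceptions]
max_ttl = 600
EOF
grep bind-address "$CONF"
```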
Install Node-Specific Software
ssh onto each box in turn and follow the appropriate instructions below according to the role the node will take in the deployment:
Note: I originally ran into an Ellis installation failure. It turned out that I had set both the hostname and the username of the Ellis node to ellis, so a process named ellis already existed; the installer assumed Ellis was already installed, and the installation failed.
The project has since renamed some of the APT packages, so it is best to consult the official documentation at this step:
http://clearwater.readthedocs.io/en/stable/Manual_Install.html#install-node-specific-software
Ellis
Install the Ellis package with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install ellis --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
Bono
Install the Bono and Restund packages with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install bono restund --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
Sprout
Install the Sprout package with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install sprout --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
If you want the Sprout nodes to include a Memento Application server, then install the Memento packages with:
(Installing Memento is optional.)
sudo DEBIAN_FRONTEND=noninteractive apt-get install memento-as memento-nginx --yes
Homer
Install the Homer packages with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install homer --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
Homestead
Install the Homestead packages with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install homestead homestead-prov clearwater-prov-tools --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
Ralf
Install the Ralf package with:
sudo DEBIAN_FRONTEND=noninteractive apt-get install ralf --yes
sudo DEBIAN_FRONTEND=noninteractive apt-get install clearwater-management --yes
Sprout, Bono and Homestead nodes expose statistics over SNMP. This function is not installed by default. If you want to enable it, follow the instructions in our SNMP documentation.
Provide Shared Configuration
Log onto any node in the deployment and create the file /etc/clearwater/shared_config
with the following contents:
# Deployment definitions
home_domain=<zone>
sprout_hostname=sprout.<zone>
hs_hostname=hs.<zone>:8888
hs_provisioning_hostname=hs.<zone>:8889
ralf_hostname=ralf.<zone>:10888
xdms_hostname=homer.<zone>:7888
# Email server configuration
smtp_smarthost=<smtp server>
smtp_username=<username>
smtp_password=<password>
email_recovery_sender=clearwater@example.org
# Keys
signup_key=<secret>
turn_workaround=<secret>
ellis_api_key=<secret>
ellis_cookie_key=<secret>
If you wish to enable the optional external HSS lookups, add the following:
(We do not have an external HSS, so this step is skipped here.)
# HSS configuration
hss_hostname=<address of your HSS>
hss_port=3868
If you want to host multiple domains from the same Clearwater deployment, add the following (and configure DNS to route all domains to the same servers):
(This is also skipped here.)
# Additional domains
additional_home_domains=<domain 1>,<domain 2>,<domain 3>...
If you want your Sprout nodes to include Gemini/Memento Application Servers add the following:
If you installed Memento earlier, you need to configure this here.
# Application Servers
gemini=<gemini port>
memento=<memento port>
See the Chef instructions for more information on how to fill these in. The values marked <secret>
must be set to secure values to protect your deployment from unauthorized access. To modify these settings after the deployment is created, follow these instructions.
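One way to generate secure values for the settings marked <secret> is to draw them from a strong random source. This assumes openssl is available; any equivalent generator works, and the output below goes to a scratch file purely for illustration.

```shell
# Print a random 32-hex-character secret for each <secret> setting
# in shared_config. /tmp/secrets.txt is just a scratch file for this sketch.
for key in signup_key turn_workaround ellis_api_key ellis_cookie_key; do
  echo "$key=$(openssl rand -hex 16)"
done > /tmp/secrets.txt
cat /tmp/secrets.txt
```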
Next, upload this configuration from the local machine to every node.
The official commands for the shared_config step have also been updated; refer to the latest format:
http://clearwater.readthedocs.io/en/stable/Manual_Install.html#provide-shared-configuration
Now run the following to upload the configuration to a shared database and propagate it around the cluster.
sudo /usr/share/clearwater/clearwater-config-manager/scripts/upload_shared_config
Then upload it to the shared configuration database by running
sudo /usr/share/clearwater/clearwater-config-manager/scripts/upload_scscf_json
This means that any Sprout nodes that you add to the cluster will automatically learn the configuration.
Provision Telephone Numbers in Ellis
Log onto your Ellis node and provision a pool of numbers in Ellis. The command given here will generate 1000 numbers starting at sip:6505550000@<zone>
, meaning none of the generated numbers will be routable outside of the Clearwater deployment. For more details on creating numbers, see the create_numbers.py documentation.
sudo bash -c "export PATH=/usr/share/clearwater/ellis/env/bin:$PATH ; cd /usr/share/clearwater/ellis/src/metaswitch/ellis/tools/ ; python create_numbers.py --start 6505550000 --count 1000"
On success, you should see some output from python about importing eggs and then the following.
Created 1000 numbers, 0 already present in database
This command is idempotent, so it’s safe to run it multiple times. If you’ve run it once before, you’ll see the following instead.
Created 0 numbers, 1000 already present in database
After this, you can proceed to the next stage: testing.
Where next?
Once you’ve reached this point, your Clearwater deployment is ready to handle calls. See the following pages for instructions on making your first call and running the supplied regression test suite.