This feature has been requested by our team for years, but I never had the time to really dive in and build it. Since the FMS has been transitioning to a microservice backend, it kept getting put off while the core FMS functionality was migrated. A discussion in a Calix Community thread brought the idea back to the front burner.
In addition, we’ve had a higher number of devices reporting low light levels, and they seem to be related to drops at terminals or at the CSP rather than at the ONT splice point. Cleaning the fiber fixes the issue, but having a way to monitor or pull the levels daily for analysis is becoming increasingly important in our office.
So, as I set out to figure this out, I knew I already had a pretty extensive system built that would make this simple to implement. With Customer, Inventory, and Calix API services already tied together, it would just take a simple UI edit for a graph, an API endpoint to pull information by date, and a scheduled daily task to poll all of the ONTs.
There were a few things to consider. The FMS is built on a relational database, which means it’s a bit slower for this kind of work, and archiving information in it is not always the most performant or resource-efficient approach. Given the timeline, our company size, and resource demands, I decided that switching to a different DB or key/value store would be a future project.
Our existing database already had inventory and customer records joined in a 1:Customer to n:Device relationship. A new table holding light-level readings would need to reference the inventory records. Further, we only pull devices that are assigned to active customers and provisioned via CMS or SMx. To keep the join from getting too busy for this post, I have condensed it to: Customer joined to Inventory ONTs where the customer is active and the ONT is provisioned via CMS.
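The condensed join could look something like the sketch below. Table and column names (customers, inventory, status, provision_system) are placeholders, since the real FMS schema isn’t shown here; the shape of the query is the point.

```javascript
// Hypothetical sketch of the condensed join described above: active
// customers joined to their provisioned ONT inventory records.
// Table/column names are assumptions, not the real FMS schema.
function buildActiveOntQuery() {
  return `
    SELECT c.customer_id, i.inventory_id, i.fsan
    FROM customers c
    JOIN inventory i ON i.customer_id = c.customer_id
    WHERE c.status = 'active'
      AND i.provision_system IN ('CMS', 'SMX')
  `.trim();
}
```

In practice this string would be handed to whatever DB client the inventory service uses; parameterizing the provision systems would be the next step if the filter needs to vary.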
Now that we can store and query the data, we have to write the services that talk to the device. For background, the UI makes HTTP requests to the API for information. The API Gateway directs each request to the appropriate microservice, which may make additional internal HTTP calls to process it. For example, provisioning an ONT on a customer profile works like this:
1. The UI sends an HTTP request from the client to the API Gateway, which directs it to the customer module.
2. The customer module calls the inventory module to look up the inventory record and confirm the FSAN exists in inventory.
3. If it does, the inventory record is updated to assign the unit to the customer, sent back to the customer module, and joined to the provisioning parameters.
4. The customer module then calls the CMS/SMx/Cloud API service to process a new ONT. The Calix API service queries for existing ONTs, resolves service tags, ONT models, and bandwidth profiles, then provisions the ONT on the appropriate host.
5. If it is an RG service rather than a data service, a Calix Cloud profile is created as well. A response is then sent back to the customer service, which updates the customer record and returns the result.
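The inter-service part of that flow can be sketched in a few lines of Node.js. Service URLs and payload shapes here are hypothetical, they just mirror the customer-to-inventory-to-Calix call chain described above.

```javascript
// Hedged sketch of the provisioning call chain: customer module checks
// inventory for the FSAN, then forwards the joined record to the Calix
// API service. URLs and payloads are assumptions for illustration.
async function provisionOnt(customerId, fsan, params, http = fetch) {
  // Ask the inventory module to confirm the FSAN exists in inventory.
  const invRes = await http(`http://inventory/v1/equipment/${fsan}`);
  if (!invRes.ok) throw new Error(`FSAN ${fsan} not found in inventory`);
  const inventoryRecord = await invRes.json();

  // Join the inventory record to the provisioning parameters and send
  // the whole thing to the Calix API service to process a new ONT.
  const calixRes = await http("http://calix-api/v1/onts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ customerId, inventoryRecord, ...params }),
  });
  return calixRes.json();
}
```

Injecting `http` as a parameter keeps the sketch testable without real services, which is also handy for unit-testing the real modules.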
Now, this is a pretty complicated example, but it is supposed to show a workflow that involves inter-service dependencies as well as public and secure API requests. For the light-level feature, there are a couple of API calls: one requests historical information, and the other actually pulls the data from the ONT and stores it in the DB.
Let’s start with ONT querying. We have a task scheduler that runs at 5am every morning and queries all ONTs that are assigned to an active customer and provisioned using CMS or SMx. This job is housed in the inventory service.
1. The Task Scheduler runs at 5am every morning. It sends an internal HTTP request to internal/telecom/inventory/v1/equipment/pull-ont-light-levels. The image below shows the task row; the cron pattern differs because it is from our development environment.
2. The inventory service receives the request and generates a process job, immediately returning an OK, job received response. The job then spins up workers and processes all inventory records assigned to an active customer.
3. The inventory service does its best to approximate ‘multithreading’ in Node.js by running multiple requests concurrently. It queries the CMS API for each device to find the ONT, pulls the light levels, and writes the results to the DB. The job also reports progress to the task scheduling service every 30 seconds.
4. The CMS API requires the serno and must perform a lookup to get the ONT ID on a system. Once an ONT ID and hostname have been identified, we can query the ONT for data.
5. Finally, data is returned to the task worker and stored in the database!
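The steps above can be sketched as a concurrency-limited worker pool, which is roughly what the Node.js ‘multithreading’ in step 3 amounts to. `queryLightLevels` and `saveReading` are hypothetical stand-ins for the real CMS lookup (serno to ONT ID, then the levels query) and the DB write, which aren’t shown here.

```javascript
// Hedged sketch of the 5am pull job: a fixed number of async workers
// drain a shared queue of ONTs, querying CMS and writing to the DB.
// queryLightLevels/saveReading are assumed helpers, not real APIs.
async function pullOntLightLevels(onts, queryLightLevels, saveReading, concurrency = 10) {
  const queue = [...onts];
  const results = [];

  async function worker() {
    while (queue.length) {
      const ont = queue.shift();
      try {
        // The serno -> ONT ID lookup CMS requires is folded into
        // queryLightLevels in this sketch.
        const levels = await queryLightLevels(ont);
        await saveReading(ont, levels);
        results.push({ ont: ont.serno, ok: true });
      } catch (err) {
        results.push({ ont: ont.serno, ok: false, error: String(err) });
      }
    }
  }

  // N workers pulling from one queue caps in-flight CMS requests at N.
  await Promise.all(Array.from({ length: concurrency }, worker));
  return results;
}
```

Because each worker catches its own errors, one bad ONT doesn’t kill the whole nightly run; the results array is what the job could summarize back to the task scheduler every 30 seconds.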
Now, pulling the light-level data just requires a DB call that returns a list of readings. The UI can graph the information, or you could use it to compute averages and generate alerts or reports for customers with low light levels. CMS will give you live alerts, but being able to compute change over time can provide more meaningful stats.
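As a minimal sketch of that change-over-time idea: given a list of daily readings for one ONT, compute the mean RX level, the latest reading’s drift from that mean, and a simple threshold alert. The `rx_dbm` field name and the -27 dBm default are assumptions for illustration, not values from CMS.

```javascript
// Hedged sketch: summarize one ONT's daily readings from the DB.
// rx_dbm and the threshold are assumed; tune against real data.
function analyzeLightLevels(readings, thresholdDbm = -27) {
  const mean = readings.reduce((sum, r) => sum + r.rx_dbm, 0) / readings.length;
  const latest = readings[readings.length - 1].rx_dbm;
  return {
    mean,
    latest,
    drift: latest - mean,          // negative drift = signal degrading over time
    alert: latest < thresholdDbm,  // below threshold -> worth investigating
  };
}
```

A live alert only tells you the level is bad right now; the drift number is what catches a connector slowly getting dirty before it crosses the threshold.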
My service hasn’t been running for 30 days, so I generated some fake data to show a potential trend.
I figure I’ll make a follow-up post later on determining reports and thresholds.