There are three situations in which a user needs to aggregate large amounts of data to determine the potential impact on customers and their products:
- In case of a planned network element upgrade, decommissioning, or any other change that might affect its functionality
- In case of a device outage identified by operations teams and monitoring tools
- In case of customer complaints about service malfunction
In all of these situations, many users had to collaborate, access many tools, and compile the results to find the corresponding answer. This process is error-prone and, in more complicated cases (e.g. decommissioning of a metropolitan optical cable), might take two weeks to finish.
If you want to identify all services that rely on, for example, a given metropolitan optical cable, you not only need high-quality, strongly correlated data, but you also need to inspect large parts of the network both vertically and horizontally. A further problem with such reports is that you don’t know in advance how “deep” you need to search.
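Such an impact report is essentially a reachability question over a dependency graph. A minimal sketch, assuming a hypothetical “supports” graph in which edges point from an infrastructure element to the elements and services that depend on it (all element names below are invented, not CELINE’s actual model):

```python
from collections import deque

def impacted_services(graph, failed_element):
    """Breadth-first traversal collecting everything that transitively
    depends on the failed element. The graph maps each element to the
    elements/services it supports."""
    impacted = set()
    queue = deque([failed_element])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

# Toy dependency chain: cable -> DWDM link -> Ethernet trunk -> customer VPNs
graph = {
    "metro-cable-7": ["dwdm-link-3"],
    "dwdm-link-3": ["eth-trunk-12"],
    "eth-trunk-12": ["vpn-customer-A", "vpn-customer-B"],
}
print(sorted(impacted_services(graph, "metro-cable-7")))
# → ['dwdm-link-3', 'eth-trunk-12', 'vpn-customer-A', 'vpn-customer-B']
```

Because the traversal follows the graph until no new dependents appear, the user does not have to guess up front how “deep” the search must go.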
Although we successfully correlated data from all existing inventory systems and network management systems in Orange Slovakia, there were still gaps we had to fill to get the full picture of the network. For example, no data source described the service architecture down to the level of particular network elements.
The requirement was to provide a list of services and customers impacted by a network element outage. This list is used, for example, to inform customers about planned outages. Even if we had had the list, the question “how can the user validate that the list is correct?” would arise. Another question we had to keep in mind was “how can the user understand the chain of the outage that results in an impact on this particular customer?”.
CELINE was designed for the GNOC (Global Network Operations Center). The GNOC is responsible for the maintenance of Orange affiliates’ network infrastructure. To achieve this goal, access to information about the network infrastructure is paramount.
The original approach for the GNOC with regard to inventory management was to access whatever tools were available in the country for which it provided services. It was clear early on that working with different systems, from huge inventory solutions like NetCracker to Excel spreadsheets, was not efficient. Therefore, it was decided that there should be a single inventory solution for the GNOC that integrates all available data sources. One could describe the situation as follows:
The applications storing the network infrastructure data differed hugely from each other in terms of APIs, level of detail, and data models. Our goal was to create a unified view of the data for the user. From the user’s perspective, the representation of, say, a server or a router should be the same regardless of whether the original data came from NMSs (Network Management Systems), Excel spreadsheets, or an inventory tool.
It was not possible to replace all the tools used by Orange affiliates because they are part of the day-to-day work of local engineers and are integrated with other specific IT solutions.
Even once the data from the various sources is in the same application, to provide real benefit it needs to be correlated, aggregated into user reports, and made accessible via standardized APIs for further automation and reporting tools.
It is common for inventory systems to hold data that is out of date. The problem becomes even bigger when a third party with no management authority over the data, such as the GNOC, needs to initiate data cleaning or update stale records.
There are significant project management issues related to projects with so many participants, but this goes beyond the scope of this case study.
We created an inventory data model that progresses from very generic terminology to more specific types. The model can be extended to document any inventory situation we might encounter in the data sources, including situations that had not even been identified at the time. The result was a “Common Data Model” that we described and tested in detail.
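A generic-to-specific hierarchy of this kind can be illustrated as follows; the class and attribute names below are hypothetical and are not the actual Common Data Model:

```python
from dataclasses import dataclass, field

@dataclass
class InventoryObject:
    """Most generic level: anything the inventory can track."""
    id: str
    name: str
    # Source-specific extras that do not fit the common fields
    attributes: dict = field(default_factory=dict)

@dataclass
class NetworkElement(InventoryObject):
    """More specific: a physical or logical network element."""
    vendor: str = ""
    location: str = ""

@dataclass
class Router(NetworkElement):
    """Most specific: a concrete element type with its own fields."""
    os_version: str = ""

r = Router(id="r-001", name="core-rtr-ba-1", vendor="Cisco",
           location="Bratislava", os_version="IOS-XR 7.3")
print(isinstance(r, InventoryObject))  # → True
```

The point of the hierarchy is that tooling written against the generic level (search, reporting, APIs) keeps working when a new, more specific type is added for a data source discovered later.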
Building network infrastructure for a telecommunications operator involves creating a lot of different documents: lease contracts, CAD drawings, measurements, installation materials, and so on. Once a telecommunication site is operational, other activities such as site revisions are performed, which generate further documentation.
All these documents were previously stored on a single shared drive and as the volume of documents grew, it became harder and harder to find the right ones. To work around this problem, users started to create their own folder hierarchies, which meant duplicating some of the documents. The shared drive also had other problems, like lack of full-text search, no traceability, poor user rights management, and so on, which needed to be solved.
Analyzing the documents stored on the shared drive made it obvious that users had different “views” on document categorization: for some users, categorization was “Site-centric”, whereas others categorized based on “Time-centric” priorities. This resulted in different folder hierarchies and document fragmentation.
Although most data was stored in the documents themselves, some information was stored in the folder structures. Document type, site number, year, and much more were encoded in folder names, which prevented proper use of this information.
It often happened that users missed some mandatory folders, which resulted in wrong document categorization or in uploading a document that was already there.
The primary concern with the shared drive as a document storage system was poor usability when searching for documents: full-text search was not possible, and it was hard to navigate the folder hierarchies and resolve duplicates.
If users moved documents to another location or accidentally deleted them, it was not possible to trace who did it. It was also not easy to protect documents from unauthorized access.
There was already a large set of documents stored on the shared drive, which had to be loaded into the DMS without losing information about their categorization.
Users had no easy way to answer a simple question: how many sites exist that do not have an electrical revision for the year 2020?
Using folders to categorize documents has some major drawbacks: users must agree on a single folder hierarchy and assign documents accordingly, and folders do not carry any information about the data they contain. That is why we decided not to use folders at all. In the DMS, all structured information (project, creation date, document type, anything you like) is stored in a specific document attribute, which allows for advanced filtering, reporting on documents, validations, permission rules based on attribute values, and more.
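The idea can be sketched in a few lines: each document carries a dictionary of attributes, and a “folder” becomes nothing more than a query over those attributes. The attribute names and values below are invented for illustration:

```python
# Attribute-based document records instead of folder paths
documents = [
    {"name": "revision-2020.pdf", "site": "SITE-017", "year": 2020,
     "doc_type": "electrical revision"},
    {"name": "lease.pdf", "site": "SITE-017", "year": 2018,
     "doc_type": "lease contract"},
    {"name": "cad-plan.dwg", "site": "SITE-042", "year": 2020,
     "doc_type": "CAD drawing"},
]

def find(docs, **criteria):
    """Return documents matching all given attribute values."""
    return [d for d in docs
            if all(d.get(k) == v for k, v in criteria.items())]

# "Site-centric" view: everything for one site
print([d["name"] for d in find(documents, site="SITE-017")])
# → ['revision-2020.pdf', 'lease.pdf']

# Cross-cutting question a folder hierarchy cannot answer directly
print([d["name"] for d in find(documents, year=2020,
                               doc_type="electrical revision")])
# → ['revision-2020.pdf']
```

This is also what makes a question like “which sites have no electrical revision for 2020?” answerable by a simple query instead of a manual folder crawl.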
Users can define document types and their attributes, and customize which fields are mandatory. To prevent users from duplicating content, we implemented a similarity search that identifies documents with similar content and alerts the user to the results found. Additionally, users can define “Upload trees” to easily upload different types of documents for a given scenario, and can configure rules to extract structured information from the folders being uploaded.
We use Apache Solr to index documents and provide full-text search with word-occurrence highlighting, typeahead suggestions, and spellcheck corrections. We also implemented faceted search so that users can easily narrow down full-text search results. Faceted search allows a hierarchy-independent search order: “Site-centric” users can start by filtering on a site, while “Time-centric” users filter on a date range.
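For illustration, a faceted full-text query of this kind maps onto standard Solr request parameters. The collection name, field names, and values below are hypothetical, not the actual DMS schema:

```python
# Parameters for a Solr /select request combining full-text search,
# highlighting, spellcheck, and facet counts (field names are invented).
params = {
    "q": 'text:"optical measurement"',   # full-text query
    "hl": "true",                        # highlight word occurrences
    "hl.fl": "text",                     # field to highlight
    "spellcheck": "true",                # suggest corrections for typos
    "facet": "true",                     # return facet counts
    "facet.field": ["site", "doc_type", "year"],
    "wt": "json",
}

# A client would send this to the collection's select handler, e.g.:
# requests.get("http://solr:8983/solr/dms/select", params=params)
print(params["facet.field"])  # → ['site', 'doc_type', 'year']
```

Each facet field comes back with counts per value, so clicking a facet simply adds a filter query (`fq=site:SITE-017`) and re-runs the search, in whatever order the user prefers.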
A medical guideline is a document with the aim of guiding decisions and criteria regarding diagnosis, management, and treatment in specific areas of healthcare. These guidelines are updated regularly and are often in the form of “free text” documents.
A healthcare provider is obliged to know the medical guidelines of his or her profession and must decide whether to follow a guideline’s recommendations for an individual treatment. It is important to find a way to promote the newest guidelines and draw attention to possible divergences between the guidelines and widespread practice.
Prof. Dr. Paul Martin Putora proposed a method to document the decision-making process in a structured way. He used a decision tree notation that condensed the information in a very efficient and readable way and also allowed the decision-making process to be compared between different healthcare providers.
Treatment decisions are based on parameters that can have different names in different hospitals. Even when different hospitals use the same names for the same parameters, they might express the values in different units; for example, glucose can be measured in mmol/l as well as mg/dl. To perform calculations, this terminology had to be unified, or at least be mappable.
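A unit mapping of this kind can be sketched as a small conversion registry; the structure below is illustrative, not the actual implementation. (For glucose, 1 mmol/l ≈ 18.016 mg/dl, based on glucose’s molar mass of roughly 180.16 g/mol.)

```python
# Illustrative registry of per-parameter unit conversions
CONVERSIONS = {
    ("glucose", "mmol/l", "mg/dl"): lambda v: v * 18.016,
    ("glucose", "mg/dl", "mmol/l"): lambda v: v / 18.016,
}

def normalize(parameter, value, unit, target_unit):
    """Convert a parameter value into the unit used by the template."""
    if unit == target_unit:
        return value
    return CONVERSIONS[(parameter, unit, target_unit)](value)

print(round(normalize("glucose", 5.5, "mmol/l", "mg/dl"), 1))  # → 99.1
```

With every incoming value normalized to the template’s canonical unit, thresholds in decision trees from different hospitals become directly comparable.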
Decision trees can be documented in diverse ways. Some prefer complex mathematical expressions to define when a given action is performed, while others use simple logical expressions. Some medical centers may omit certain parameters because the corresponding measurements have not been taken. Finding differences in such complicated structures is not trivial.
The decision-making comparison has been performed in studies in which more than 10 hospitals took part. A single treatment can depend on 10+ parameters, so evaluating such a state-space can easily produce billions of combinations that need to be checked.
The differences between treatments can be quite profound. It is hard to get the right insights from the results unless users investigate them in detail.
Before the decision tree for a particular treatment can be created, the terminology has to be unified in a treatment template. This template defines the vocabulary for decision trees: the parameters that must be considered in the treatment as well as the actions that can be performed. When a user creates a decision tree, the system uses the template to provide guidance and ease data entry.
We have implemented algorithms that transform a decision tree into a multidimensional state-space and assign each “coordinate” a set of actions for a given parameter range. This allows us to compare the coordinates to find differences. From this information, we can generate back a decision tree that represents the comparison results. We can use the same approach to enhance the validation of decision trees, for example by calculating parameter ranges that are not covered by any action or that have contradictory conditions.
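The comparison idea can be sketched on a toy example: represent each center’s decision logic as a function from a parameter assignment to a set of actions, enumerate a discretized state-space, and record the coordinates where the action sets differ. All parameter names, thresholds, and actions below are invented:

```python
from itertools import product

def center_a(p):
    """Toy decision logic of one center."""
    return {"chemo"} if p["stage"] >= 3 else {"surveillance"}

def center_b(p):
    """Toy decision logic of another center, which also considers age."""
    if p["stage"] >= 3 and p["age"] < 70:
        return {"chemo"}
    return {"surveillance"}

# Discretized state-space: one axis per parameter
space = {"stage": [1, 2, 3, 4], "age": [50, 60, 70, 80]}

differences = []
for values in product(*space.values()):
    point = dict(zip(space.keys(), values))
    if center_a(point) != center_b(point):
        differences.append(point)

print(differences)
# → [{'stage': 3, 'age': 70}, {'stage': 3, 'age': 80},
#    {'stage': 4, 'age': 70}, {'stage': 4, 'age': 80}]
```

The same enumeration can flag coordinates where a tree assigns no action at all, which is how validation for uncovered or contradictory parameter ranges falls out of the same machinery.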
Many companies offer open positions to software developers. We had to stand out and create a unique product, and at the same time build trust between the company and developers.
The main goal of Koderia (formerly campaigned as “Developers for Developers”) was to increase the number of email subscribers for our open positions at Objectify.
The beginning of the multi-functional portal
Koderia started as a simple website with a simple form where developers could find out how much they should earn (the Adequate Salary Calculator). We had solid knowledge of the economic situation in the industry, so we were able to deliver personal, quick, and adequate responses. The campaign was popular among developers and gained hundreds of new contacts. At that point, we decided to create an online space that would merge all the important information about the industry into one place.
Moving to Vue and Firestore
As Koderia grew, it became apparent that WordPress simply wouldn’t be enough.
The decision to go with Firebase proved to be a beneficial one. Firebase offered us many out-of-the-box customizable features such as:
- Realtime Database was used until more querying capabilities were needed; since then, Firestore has provided everything we need (and probably ever will) for the Koderia project.
- Cloud Functions proved to be a fitting replacement for a standard backend.
- Firebase Authentication was a pleasant feature to use and implement. Providers such as Google, Facebook, Github (and also standard email + password) were implemented with ease.
- Firebase CI/CD also offered us an easy-to-use deployment solution that we set up for our first release and have used ever since.
Graphs, visualizations, and much more can now be found on Koderia. We owe many thanks to Chart.js: integrating this library, along with many other external and custom-made libraries, was effortless.
The main point that differentiates a Koderia CV from any other is a radar chart that shows how strongly a person is oriented toward Frontend, Backend, Database, DevOps, or Styling. This helps to grasp all of a person’s skills, education, and experience in the blink of an eye. People who have created their CVs can react to open positions more quickly and also see whether a position suits their skills and experience.
Koderia kept its first feature, the Adequate Salary Calculator, which is now also integrated into the CV. There is no need to fill in another form: when the CV is created, the adequate salary is calculated as well.
In the time the project has been online, Koderia has established its place in Slovakia and plans to continue supporting developers, bringing new features and improving existing ones.