Friday, December 14, 2012

Case Study: InSQL to PI Migration

This case study describes the process, the challenges and the outcomes of a historical data migration project. This project consisted of migrating data from two Wonderware (Industrial SQL) Historians to a single OSISoft PI Historian. The PI Historian had replaced the InSQL Historians, but this resulted in valuable historical data being split across three historians. By migrating the historical data from the two InSQL Historians to the PI Historian, all the historical data became available in the PI Historian, and the two InSQL Historians could be decommissioned.

We will refer to the two InSQL Historians as ‘InSQL 1’ and ‘InSQL 2’. These two historians had the following properties:

  • InSQL 1:
    • Time span: 4 years and 7 months.
    • No. of Tags: approx. 23 000.
  • InSQL 2: 
    • Time span: 1 year and 11 months.
    • No. of Tags: approx. 40 000.

The InSQL Historians not only overlapped in time, but also had common tags that drew their data from the same source. The tags in the PI Historian are a superset of the union of the tags in the two InSQL Historians. However, for many tags, the name in PI differed from that of the corresponding tag in InSQL 1 and/or InSQL 2.

The results of the migration are summarised in the figure below. The two plots show historical data from InSQL 1 and the migrated data in PI for a single tag covering a time period of one day. These plots are derived from the migration analysis results (more on this below).

The result of migrating historical data from InSQL 1 and InSQL 2 to PI

Migration Analysis: Source (InSQL) Plot

Migration Analysis: Target (PI) Plot

The migration process consisted of the following steps:
  • Synchronisation of tag configurations: This consists of linking tags from the source historians (InSQL 1 and InSQL 2) to the corresponding tags in the target historian (PI). This step is particularly important for tags whose names differ in the source and target historians.
  • Import tag configurations and their synchronisation into Tag Manager: By using Tag Manager, this otherwise manual process is greatly simplified.
  • Create the Migration Jobs: Migration Jobs are responsible for the actual migration and run as services. A Migration Job is characterised by source and target historians, a tag list (imported from Tag Manager), and start and end times. Migration Jobs are subdivided into batches, which are the smallest repeatable units of a migration job. The migration was completed using 6 jobs (4 for InSQL 1 and 2 for InSQL 2) and each batch had a time range of one day.
  • Analysis and verification: After a batch completes data migration, there is an option of running an analysis step. During this step, data from the source and the target are extracted and compared. Based on the difference between the source and target data, thresholds can be set to identify potentially problematic cases. As a result, the migration was not merely a blind process: its results were verified.
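The job/batch subdivision described above can be sketched in a few lines (a hypothetical Python illustration; the actual Migration Job service is not written this way, and the names are invented):

```python
from datetime import datetime, timedelta

def make_batches(start, end, batch_span=timedelta(days=1)):
    """Split a migration job's time range into day-sized batches,
    the smallest repeatable units of the job."""
    batches = []
    cursor = start
    while cursor < end:
        batch_end = min(cursor + batch_span, end)
        batches.append((cursor, batch_end))
        cursor = batch_end
    return batches

# A one-week job yields seven one-day batches.
jobs = make_batches(datetime(2012, 1, 1), datetime(2012, 1, 8))
```

Each (start, end) pair is an independently repeatable unit, which is what makes per-batch recovery and analysis possible.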

Summary of the migration process
The migration was executed on 6 virtual machines (VMs) running Windows Server 2003 R2. Three VMs were used per migration job (one for InSQL Historian Server, one for PI Historian Server and one for IDX 8 and Migration Job service). This meant that two Migration Jobs could be run in parallel.

The time it took to complete the entire migration process is summarised as follows:
  • Tag configuration synchronisation and importing to Tag Manager (InSQL 1 and InSQL 2): 2 weeks.
  • Executing the migration jobs (including analysis):
    • InSQL 1 (23 000 tags, 4 years and 7 months): 2 weeks.
    • InSQL 2 (40 000 tags, 1 year and 11 months): 5 days.
One of the challenges encountered was that certain tags had an incorrect data type in the InSQL Historian, meaning that the data types did not match between InSQL and PI. The data for these tags could not be migrated as-is; instead, a ‘processing’ step was included during the migration that transformed the source data into an appropriate form before it was written into the target. This process was facilitated by Tag Manager, which held the source and target tag configurations, allowing these special cases to be easily identified during the migration.
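A ‘processing’ step of this kind can be sketched as follows (a hypothetical Python illustration of the idea, not the actual Tag Manager-assisted implementation; type names and behaviour are invented):

```python
def coerce(value, target_type):
    """Transform a source sample to the target tag's data type,
    e.g. a tag mis-declared as string that actually holds numbers."""
    converters = {"float": float, "int": lambda v: int(float(v)), "string": str}
    try:
        return converters[target_type](value)
    except (ValueError, KeyError):
        return None  # flag for manual review rather than writing bad data

def migrate_samples(samples, source_type, target_type):
    """Migrate as-is when types match, otherwise coerce each sample."""
    if source_type == target_type:
        return list(samples)
    return [coerce(v, target_type) for v in samples]
```

Because the source and target configurations are both known, the special cases can be routed through the coercion path automatically while ordinary tags are copied untouched.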

Wednesday, December 12, 2012

Outsourcing the Migration of Plant History

When an organisation decides to introduce a different Plant Data Historian, retaining visibility of and access to existing historical data can be a challenge. There might be multiple years’ worth of valuable plant data in the legacy Plant Data Historian which somehow needs to be migrated to the new Plant Data Historian.

Typically, the same or a similar tag set is established on the new Data Historian and plant data starts to be recorded in the new system. There might be a switch-over period where both the old and new systems record data until the Plant is convinced that the new system is fully commissioned. All reports and web-portal views will then reference the new Plant Data Historian.
Data migration is typically done in one or two tranches; the legacy Data Historian is then decommissioned and all costs associated with its maintenance cease.

In our experience, the users of plant data are not empowered to attempt this onerous task on their own, and its complexity can be a major distraction from their already busy schedules.

The IDX Tag Manager and IDX Historian, with its Historian Data Migration Tool, enable IDX to offer an outsourced service to cost-effectively and reliably handle this process for a Plant.
There is no requirement for the customer to purchase any hardware or software; they simply need to provide access to the legacy and new Plant Data Historians.
Instead, the customer is charged a fee that covers the equipment rental and the engineering effort to complete the data migration.

Migration process overview when migrating and merging two historical sources into a single target using the Migration Tool.

The data migration is set up offline on one or more virtual machines and, when ready, the migrated data is “attached” to the new Plant Data Historian and can be accessed by users.

A typical plant data migration project for 50,000 tags over a period of 4 years would take 1-2 months to complete.

For more information contact IDX.

Tuesday, December 11, 2012

Migrating Historical Data between Historians

Organisations that wish to change the data Historian they use are often faced with the challenge of what to do with the data contained in the existing historian. Simply powering the old historian server off is usually not a viable option, and maintaining two parallel systems introduces additional complexity and cost. Clearly, migrating the data in the existing system to the new system is the preferred route, but when large quantities of data are involved over extended time periods, this is not as straightforward as it may appear.

Historians often include tools such as an OLEDB client that can be used to import data from other OLEDB-compliant sources. These tools are usually fairly limited, single-query-based affairs that return and insert data in large lumps and tend to be slow given the quantity of data queried. Also, the data is inserted blindly into the new historian, without providing any way of validating it, bar manually eyeballing trends of the inserted data on the target historian and comparing those with the source. This cannot possibly be considered a thorough and reliable migration process for valuable historical data!

Because we have encountered the need for a verifiable data migration process between historians on more than one occasion, when we developed the IDX Historian, our cost-effective enterprise-grade historian, we decided to incorporate a Data Migration tool that addresses this need directly into the Historian’s tool set. The tool does not require any additional license to use, and allows for the migration of data between essentially any historian with an OLEDB interface or software API and the IDX Historian. However, the tool also allows data to be migrated directly between third-party historians; for example, it is possible to migrate data from a Wonderware (InSQL) Historian directly to an OSISoft PI Historian.

The migration tool uses the following methods and items to effectively migrate historical data:
  • Assistance from IDX 8 Tag Manager: The first step in migrating data often involves ensuring the tags for the data to be migrated already exist or are set up in the new historian. IDX Tag Manager allows the tag configurations to be automatically synchronised between systems before the migration takes place, thus avoiding having to perform this process manually.
  • Source/Target Plugins: It is possible to migrate from any supported source historian to any supported target historian, which currently includes the Wonderware (InSQL) Historian, OSISoft Historian and IDX Historian. Plugins can be developed to meet other requirements on request.
  • Jobs and Batches: The migration tool carves up the data under migration into various levels of repeatable work items. The first step is to define a job, which runs as a Windows service and encompasses a configured set of tags over a specified time period. Once a job is configured, the migration tool automatically creates a set number of work batches that equally divide the job data to be migrated. The two principal advantages of using the job/batch approach are:
    • Parallelisation: today’s computer hardware benefits from parallel operations, which speeds up the migration process considerably.
    • Recoverability: because the migration process is divided into batches, any batch can be resumed, re-executed or analysed as required.
  • Data Analysis and Reporting: Post migration, the migration tool provides the option of analysing and comparing the data from the source and target historians at the batch level, allowing for greatly simplified side-by-side data comparison. It can also be configured to automatically highlight anomalies in the migrated batches that require further investigation. The provided analysis reporting can be used to provide an overview of the result of the complete migration.
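The batch-level analysis can be illustrated with a small sketch (hypothetical Python; the threshold and names are invented, not the tool’s actual API):

```python
def analyse_batch(source, target, threshold=0.01):
    """Compare source and target samples for one batch and flag the
    batch if its mismatch ratio exceeds a configurable threshold."""
    source = dict(source)   # {timestamp: value} pairs
    target = dict(target)
    mismatches = sum(1 for t, v in source.items() if target.get(t) != v)
    ratio = mismatches / len(source) if source else 0.0
    return {"mismatches": mismatches,
            "ratio": ratio,
            "flagged": ratio > threshold}
```

Only flagged batches then need human attention, rather than eyeballing every trend on both historians.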

Migration job summary page illustrating pre-creation of job batches and batch status

Using these features, the migration tool, as illustrated by this case study, is able to perform data migration where vendor-provided tools are slow or simply fail, and provides a fully analysed and verified migration process in a reasonable time period.

Because we understand historian users may not wish to delve into the world of historical data migration themselves, or purchase IDX for a once-off migration project, we also offer data migration using the IDX Data Migration tool as a service, should this be more suitable. Feel free to contact us.

IDX 8 Tag Manager - Managing the unmanageable?

Tag management of real-time data systems can prove challenging. As an industrial IT company we frequently come across the problem where tags need to be migrated from one system to another, or where tags need to be synchronised and maintained between systems, including historians. Generally, this ends up being a manual process, often involving Excel and much diligence and patience on the maintainer’s part.

IDX Unifig was the first incarnation of a tool we designed with the aim of simplifying the migration and management of tags between systems. The tool worked, but, as with most first editions of software tools, we could see the need for improvement and refinement of the approach used. Thus Tag Manager was born, and it has become the keystone of the IDX 8 software suite. All other IDX 8 modules, such as Data Exchange, Alarms and Events and the Historian, use Tag Manager to store and reference tags.

The comparison result we like to see - no changes.
Tag Manager employs a plugin methodology that allows various sources and targets to be managed and synchronised. Currently, we mainly see the use of the Siemens WinCC and CSV sources, usually synchronising with Wonderware Historian (InSQL) and OSISoft PI Historian targets. The process of synchronisation is rule- and wizard-driven and allows for comprehensive comparison of tag data before synchronisation. These rules allow data to be filtered, pivoted and “Regular-Expression-ed” into the required target format. That being said, there are still significant obstacles to overcome when dealing with certain sources in particular, which play a critical role in determining the overall simplicity of tag synchronisation.

For example, a common scenario for us is synchronising tags in WinCC with tags in the PI Historian. It must be highlighted that the ease of synchronisation is markedly affected by the rigorous adoption of a standardised naming convention for tags. Without this, you can quickly become bogged down in naming exceptions – still doable, but usually requiring more initial rule setup and configuration time. Certain systems complicate matters further by using the tag name as the system’s unique tag identifier (e.g. WinCC). This is problematic because renaming a tag in the source then breaks automated synchronisation with other systems, forcing additional manual intervention. WinCC does have a tag identifier value associated with a Tag, but this identifier is not stored in a central configuration repository – it changes when a full PLC download occurs, which naturally breaks fully automated synchronisation. In such cases, additional care must be taken to synchronise configurations.
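The rule-driven renaming described above can be illustrated as an ordered list of regular-expression substitutions (hypothetical Python, with invented tag names and rules; Tag Manager’s actual rule engine is wizard-configured, not code):

```python
import re

# Hypothetical rules mapping WinCC-style names to PI-style names.
# Rules are applied in order, so later rules see earlier results.
RULES = [
    (re.compile(r"^Plant1[._]"), "P1:"),      # site prefix convention
    (re.compile(r"[._]PV$"), ".VALUE"),       # process-value suffix
]

def map_tag_name(source_name):
    """Apply the ordered regex rules to derive the target tag name."""
    name = source_name
    for pattern, replacement in RULES:
        name = pattern.sub(replacement, name)
    return name
```

With a consistent naming convention, a handful of rules covers thousands of tags; without one, each exception needs its own rule, which is where the setup time goes.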

It is mainly because of cases such as the above that Tag Manager is not yet a fully automatic, click-once-and-forget affair (a request we have received more than once – all I can say at the moment is that we are looking into it).

Tag Manager is used on an on-going basis for tag synchronisation, usually in a WinCC to Wonderware Historian or, more frequently, OSISoft PI Historian configuration, across the various Platinum producer sites in South Africa.

IDX 8 Data Exchange and Alarms and Events

Data Exchange has been the core of the IDX suite since its inception in 1995. Data Exchange allows real-time tag data to be shared between various, usually incompatible, systems, with IDX acting as the intermediary that converts the data between them. Up to IDX 7, IDX was exclusively a Data Exchange engine with a reliable industry track record. IDX 7 still exists today, and provides the bulk of the Data Exchange capability servicing our clients’ needs. This includes special connectors such as the Gensym G2 Expert System bridge, as well as more common OPC DA, MODBUS, and OPC DA tunnelling via TCP.

IDX 8 introduces its own implementation of the Data Exchange engine. The intention is to mirror the functionality of common IDX 7 connectors, such as the OPC DA client, in native IDX 8 connectors to provide general data exchange ability to service functionality such as Alerting (more below). However, in the near future, the IDX 7 runtime will become a “proxy” data exchange engine of the IDX 8 framework, so that all the existing IDX 7 connectors can be configured and used from within IDX 8 without the need for the separate IDX 7 configuration tool and database. 

One of the more exciting new aspects of IDX 8 Data Exchange is that it couples to the IDX 8 Alarms and Events module, which allows users to set up SMS and/or Email notifications based on configurable, hierarchical alarm logic. The standard alarm types are available, such as value, limit and heartbeat alarms. Two alarm types in particular provide additional useful functionality. The Logic alarm is used to aggregate alarms up the logic tree, which allows one to set up alarm configurations that do not send Emails and SMSes unnecessarily. Customised logic can be set up so that the Logic alarm only enters the alarm state if its child alarm states match those specified.

Configuring a Logic alarm that analyses the state of a Heartbeat and Value/Condition alarm.
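The aggregation a Logic alarm performs can be sketched as follows (a hypothetical Python illustration, not IDX 8’s actual configuration model; the state names are invented):

```python
def logic_alarm(child_states, required_states):
    """A Logic alarm enters the alarm state only when every child's
    state matches the configured pattern, suppressing redundant
    notifications further up the tree."""
    return all(child_states.get(name) == state
               for name, state in required_states.items())

# Only alarm if the heartbeat is healthy (so the value reading is
# trustworthy) AND the value/condition alarm is active:
should_alarm = logic_alarm(
    {"heartbeat": "ok", "level_high": "alarm"},
    {"heartbeat": "ok", "level_high": "alarm"},
)
```

This is how a dead data link avoids triggering a flood of spurious value alarms: the Logic alarm requires the heartbeat child to be healthy before the value alarm is allowed to propagate.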

The Snapshot alarm allows you to generate a value snapshot of one or more Alarm Event items, such as Data Exchange tags, at the time the alarm is triggered in the alarm tree.  You can format the snapshot message to your requirements with custom text so that the data snapshot is presented as required in an Email or SMS message. At the time an alarm occurs, there is also the option to perform an action, which includes running a program, controlling a Windows service or writing a value into an Alarm Element such as a Data Exchange tag. We have used the service control action in a number of projects where alarms are set to monitor the health of other real-time data interfaces that happen to stop working once in a while. The Alarm action allows us to restart the interface services to get the data moving again without any user intervention.

Alarms can also be set up with acknowledgement escalation where, if an alarm is not acknowledged by any contacts in a group, the alarm is escalated to the next group. All alarms, alarm notifications and acknowledgements are logged, providing a useful event audit trail.

IDX 8 Historian

Industrial Data Historians are fairly commonplace. Generally they fall into two categories: enterprise-class historians such as OSISoft PI and Wonderware Historian, which bristle with features and carry a correspondingly enterprise-class price tag; and the “cheap” historians (sometimes glorified file loggers), which log data but usually have significant performance or other functional limitations.

The IDX 8 Historian is aimed at users that don’t want to spend the earth: a solution that fits between these two extremes, providing data logging that is extensible to fairly large capacities, without the multitude of features most users don’t know how to use or even wish to use.

Our historian is built on Microsoft SQL Server 2008 (or above). Certain features introduced in SQL 2008 have made it viable to implement data logging with performance we are happy with (rough ball-park figures show we routinely achieve around 20-30K sustained writes and 30-50K reads per second with SQL Express on ordinary workstation hardware). We are aware there are more than a few other historians built on SQL. However, we believe our approach is a little different, and we like to believe this is what makes our Historian more than just another SQL data logger.

Firstly, in order to address cost issues, a primary design goal for us was to ensure the Historian could use any edition of SQL Server. You may be aware that a SQL Server license, particularly for the Enterprise edition, can become quite pricey, and is often a non-starter for SMBs that wish to add the benefits of historical data to their production analysis. Thus, the IDX Historian is able to run on anything from the Express (free) edition up to the Enterprise edition, although we envisage most customers won’t gain any advantage from going beyond the Standard edition for IDX Historian use. The IDX Historian segments data across tables and multiple databases, so unless you plan to store more than 10GB of raw data per day (the database size limit for SQL Express 2008 R2) or need performance beyond what the Express edition can provide (Express is limited to 1 CPU and 1GB of RAM), the free Express edition is the likely starting point, from which you can always upgrade to SQL Standard if required.

To start using the Historian, the first and generally only configuration step required is to set the Period and Segment Count settings. The Period setting defines how data is split across SQL databases, currently Month, Week or Day. The Segment Count setting defines how data is divided across tables within a database. For example, if the Period is Day and the Segment Count is set to 12, the data for each day will be split into 12 tables spanning two hours each. Note that once the Period and Segment Count settings are defined, they cannot be changed without backing up and clearing any existing historical data, so these settings should be chosen with care.
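The effect of the Period and Segment Count settings can be illustrated with a small sketch (hypothetical Python; the database naming scheme here is invented, not the Historian’s actual scheme):

```python
from datetime import datetime

def locate(timestamp, segment_count=12):
    """For a Day period with 12 segments, map a timestamp to its
    database (one per day) and table segment (two hours each)."""
    hours_per_segment = 24 / segment_count
    database = timestamp.strftime("Hist_%Y%m%d")   # hypothetical naming
    segment = int(timestamp.hour // hours_per_segment)
    return database, segment

# A sample logged at 15:30 lands in that day's database, segment 7
# (the 14:00-16:00 table).
db, seg = locate(datetime(2012, 12, 11, 15, 30))
```

Because a query for a time range only has to touch the databases and segments that range spans, the segmentation keeps individual tables small even with free SQL editions.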

Currently, real-time data logging is supported via IDX 8 Data Exchange. IDX 7 also supports logging via the Historian Client, which allows IDX 7 instances on different machines to safely log data into the Historian. The Historian Client uses disk-based buffering to ensure captured real-time data is not lost if communications between the client and server are interrupted, or if the Historian server machine is restarted, for example. The IDX Historian Client has both .NET and COM-registered APIs that can be used to integrate real-time data from other sources, and is freely available on request, along with C# and C++ code samples. Additional historian configuration is performed on a per-tag basis via IDX Tag Manager (or on the slots in IDX 7), to set options such as data compression (5 compression methodologies are implemented, including the Swinging Door algorithm).
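The Historian Client’s store-and-forward behaviour can be sketched conceptually (hypothetical Python; the real client is a .NET/COM API with its own buffer format):

```python
import json
import os

class BufferedWriter:
    """Store-and-forward sketch: samples are appended to a disk buffer
    first, then drained to the historian when it is reachable."""
    def __init__(self, path, send):
        self.path = path
        self.send = send          # callable that writes to the historian

    def log(self, sample):
        # Always land on disk first, so nothing is lost on a crash.
        with open(self.path, "a") as f:
            f.write(json.dumps(sample) + "\n")

    def flush(self):
        """Deliver all buffered samples; returns the number sent.
        If send() raises, the buffer is kept and retried later
        (samples may be re-sent, so delivery is at-least-once)."""
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            samples = [json.loads(line) for line in f]
        for s in samples:
            self.send(s)
        os.remove(self.path)      # only after all samples are delivered
        return len(samples)
```

The essential property is that the buffer is only discarded after successful delivery, so a lost connection or server restart delays data rather than losing it.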

Data can also be queried from the SQL database directly, allowing for additional data analysis and reporting, without the requirement for additional OLEDB or other clients.

Viewing "interesting" historical data with the IDX Historian data viewer.