In test, measurement and control applications, engineers and scientists can collect vast amounts of data in short periods of time. When the National Science Foundation’s Large Synoptic Survey Telescope comes online in 2016, it should acquire more than 140 terabytes of information per week.
Large gas turbine manufacturers report that instrumented electricity-generating turbines under manufacturing test generate over 10 terabytes of data per day. But the amount of data is not the only trait of big data. In general, big data is characterised by a combination of three or four ‘Vs’ – volume, variety, velocity and value. An additional ‘V’, visibility, is emerging as a key defining characteristic; that is, global corporations increasingly need geographically dispersed access to business, engineering and scientific data.
Characterising Big Analogue Data information
Big Analogue Data information is a little different from other big data, such as that derived from IT systems or social media. It includes analogue data on voltage, pressure, acceleration, vibration, temperature, sound and so on from the physical world. Big Analogue Data sources are generated by the environment, nature, people, and electrical and mechanical machines. In addition, it is the fastest of all big data, since analogue signals are generally continuous waveforms that must be digitised at rates as fast as tens of gigahertz, often at large bit widths. And it is the biggest type, because this kind of information is constantly generated by natural and man-made sources.
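To put that velocity into perspective, the raw rate of a digitised analogue stream is simply the sample rate multiplied by the sample width and the channel count. The short Python sketch below works through an illustrative calculation; the sample rate, bit width and channel count are assumed figures, not taken from any particular instrument.

```python
# Back-of-the-envelope data rate for a digitised analogue stream.
# All figures are illustrative assumptions, not from a specific instrument.
sample_rate_hz = 1e9        # 1 GS/s digitiser
bits_per_sample = 16        # sample bit width
channels = 4                # simultaneously sampled channels

bytes_per_second = sample_rate_hz * (bits_per_sample / 8) * channels
terabytes_per_day = bytes_per_second * 86_400 / 1e12

print(f"Raw rate: {bytes_per_second / 1e9:.1f} GB/s "
      f"(about {terabytes_per_day:.0f} TB/day before any reduction)")
```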
According to IBM, a large portion of big data today comes from the environment, including images, light, sound and radio signals – and it is all analogue. The analogue data the Square Kilometre Array (SKA) will collect from deep space is expected to be 10 times the volume of global Internet traffic.
The three-tier Big Analogue Data solution
Drawing accurate and meaningful conclusions from such high-speed, high-volume analogue data is a growing problem. This data poses new challenges in analysis, search, integration, reporting and system maintenance that must be met to keep pace with its exponential growth. To cope with these challenges – and to harness the value in analogue data sources – engineers are seeking end-to-end solutions.
Specifically, engineers are looking for three-tier solution architectures that create a single, integrated solution spanning from real-time capture at the sensors to analytics in the back-end IT infrastructure. The data flow starts at the sensors (tier 1) and is captured in system nodes (tier 2). These nodes perform the initial real-time, in-motion and early-life data analysis. Information deemed important flows across ‘The Edge’ to traditional IT equipment. In the IT infrastructure (tier 3), servers, storage and networking equipment manage, organise and further analyse the early-life or at-rest data. Finally, data is archived for later use.

Through the stages of data flow, the growing field of big data analytics is generating never-before-seen insights. For example, real-time analytics are needed to determine the immediate response of a precision motion control system. At the other end, at-rest data can be retrieved for analysis against newer in-motion data, for example to gain insight into the seasonal behaviour of a power-generating turbine. Throughout tiers 2 and 3, data visualisation products and technologies help realise the benefits of the acquired information.
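As a rough illustration of this flow, the sketch below shows how a tier 2 system node might apply a simple in-motion check and forward only the data deemed important across ‘The Edge’ to the tier 3 infrastructure. The RMS threshold, function names and block structure are assumptions for illustration, not part of any specific product.

```python
import math
from typing import Iterable, List

RMS_THRESHOLD = 0.5  # assumed limit for what counts as 'important' data

def rms(block: List[float]) -> float:
    """Root-mean-square level of one acquired block of samples."""
    return math.sqrt(sum(x * x for x in block) / len(block))

def edge_filter(blocks: Iterable[List[float]]) -> Iterable[dict]:
    """Tier 2 in-motion analysis: keep only blocks worth sending across 'The Edge'."""
    for i, block in enumerate(blocks):
        level = rms(block)
        if level > RMS_THRESHOLD:
            yield {"block_id": i, "rms": level, "samples": block}

def archive(records: Iterable[dict]) -> None:
    """Stand-in for the tier 3 IT infrastructure that stores at-rest data."""
    for record in records:
        print(f"archiving block {record['block_id']} (rms={record['rms']:.2f})")

# Example: two quiet blocks and one event-bearing block from a sensor (tier 1)
acquired = [[0.01, -0.02, 0.015], [0.0, 0.01, -0.01], [0.9, -0.8, 0.85]]
archive(edge_filter(acquired))
```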
Considering that Big Analogue Data solutions typically involve many DAQ channels connected to many system nodes, the capabilities of reliability, availability, serviceability and manageability (RASM) are becoming more important. In general, RASM expresses the robustness of a system and how dependably it performs its intended function. The RASM characteristics of a system are therefore crucial to the quality of the mission for which the system is deployed, and have a great impact on both technical and business outcomes. For example, RASM functions can help establish when preventive maintenance or replacement should take place. This, in turn, can convert a surprise or unplanned outage into a manageable, planned outage, maintaining smoother service delivery and increasing business continuity.
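As a minimal sketch of that idea, the Python fragment below flags a node for planned maintenance when an assumed health indicator degrades past a service threshold; the metric names and limits are hypothetical, not drawn from any particular vendor’s RASM implementation.

```python
from dataclasses import dataclass

@dataclass
class NodeHealth:
    node_id: str
    fan_speed_rpm: float      # assumed degradation indicator
    error_count_24h: int      # correctable errors logged in the last day

# Assumed service thresholds; real limits would come from the vendor or site policy
MIN_FAN_RPM = 2000
MAX_ERRORS_24H = 50

def maintenance_needed(health: NodeHealth) -> bool:
    """Flag a node for planned maintenance before it fails outright."""
    return health.fan_speed_rpm < MIN_FAN_RPM or health.error_count_24h > MAX_ERRORS_24H

fleet = [
    NodeHealth("daq-node-01", fan_speed_rpm=3100, error_count_24h=2),
    NodeHealth("daq-node-02", fan_speed_rpm=1650, error_count_24h=7),
]
for node in fleet:
    if maintenance_needed(node):
        print(f"{node.node_id}: schedule a planned maintenance window")
```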
The serviceability and manageability needs are similar to those of PCs and servers. They include discovery, deployment, health status, updates, security, diagnostics, calibration and event logging. Because these system nodes integrate with tier 3 IT infrastructures, RASM capabilities are critical for reducing integration risk and lowering the total cost of ownership.
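For instance, a system node could report its health status and log events in much the same style as managed servers. The sketch below uses Python’s standard logging module; the node name and message fields are hypothetical.

```python
import logging

# Basic event log for a system node, in the spirit of the serviceability
# functions listed above (health status, calibration, diagnostics, event logging).
logging.basicConfig(
    format="%(asctime)s %(levelname)s node=%(name)s %(message)s",
    level=logging.INFO,
)
node_log = logging.getLogger("daq-node-01")

node_log.info("discovery: registered with management server")
node_log.info("health: temperature=41.2C supply=23.9V status=ok")
node_log.warning("calibration: due in 5 days")
node_log.error("diagnostics: channel 3 self-test failed")
```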
The oldest, fastest and biggest big data – Big Analogue Data – harbours great scientific, engineering and business insight. To tap this vast resource, developers are turning to solutions powered by tools and platforms that integrate well with one another and with products from a wide range of partners. This three-tier Big Analogue Data solution is growing in demand as it solves problems in key application areas such as scientific research, product test, and machine condition and asset monitoring.