
Search Results


  • sbRIO-Based Turbine Monitoring Enables Remote Support

    Global power services provider eliminated technical debt and gained remote configuration capabilities with turbine monitoring built on NI sbRIO and Cyth CircaFlex.

    Technician inspects a turbine during routine maintenance in a power generation plant.

    Project Summary
    Global power services provider eliminated technical debt and gained remote configuration capabilities with turbine monitoring built on NI sbRIO and Cyth CircaFlex.

    System Features & Components
    - FPGA-based frequency measurement for deterministic turbine speed monitoring from gear tooth pulse trains
    - Hardware watchdog on FPGA for independent overspeed alarm triggering without microcontroller intervention
    - Web-based configuration interface replacing serial command-line access for remote support
    - Four configurable probe channels accepting active or passive sensors with Boolean alarm logic
    - Form-fit-function replacement maintaining DIN rail mounting compatibility with the legacy system

    Outcomes
    - Eliminated component obsolescence through a COTS platform with improved long-term availability
    - Enhanced service team efficiency through web-based remote configuration
    - Provided firmware transparency with custom FPGA logic the in-house team could maintain
    - Enabled scalable deployment supporting 50+ monitoring units annually

    Technology at-a-glance
    NI sbRIO-9608, LabVIEW, LabVIEW FPGA Module, Cyth CircaFlex, custom web interface (HTML/CSS/JavaScript), active and passive magnetic/Hall effect sensors, Rockwell PLC communication protocol

    Gas and Steam Turbine Monitoring
    Gas and steam turbines are critical parts of the energy infrastructure that provide backup power when renewable energy sources fall short of meeting base load demand. To ensure the safe operation of these backup turbines, monitoring systems prevent overspeed conditions by measuring turbine blade velocity and triggering emergency shutdowns when dangerous speeds are detected.
    Monitoring system lifecycle challenges present substantial risks for energy service providers, as it is critical to ensure grid stability while mitigating upgrade costs and timelines.

    Obsolescence Challenges
    A global power services provider faced obsolescence challenges with their turbine overspeed monitoring systems. Deprecated semiconductor components forced the end-of-life of their existing measurement systems and introduced a critical sustainment risk that could impact the systems deployed to their clients' assets. Because the IP of the existing measurement solution was owned by the original equipment manufacturer (OEM), the power services provider was left with a "black box" solution they couldn't modify. They decided to find a partner that could help them reverse engineer their solution and deliver:
    - Form, fit, function replacement: New hardware must be DIN rail mountable and the same or smaller in footprint to avoid cabinet modifications across hundreds of installations
    - Replication of proven functionality: Speed measurement accuracy and overspeed detection identical to the legacy system
    - Remote configuration capabilities: Addition of a web interface to modernize distributed power plant support capabilities
    - Firmware transparency: Ownership of IP and deep familiarity with system functionality to ensure sustainability well into the future
    - PLC compatibility: Seamless integration with existing Rockwell PLCs

    FPGA-Based Monitoring
    The global power services provider decided to work with Cyth Systems to reverse engineer and improve upon their existing monitoring solution because of Cyth's proven expertise delivering reliable, high-performance measurement systems into challenging environments. Working from only user manuals and schematics, Cyth engineered a mechanical test rig to replicate turbine gear tooth patterns and validate measurement accuracy against legacy system specifications.
    Parallel testing at the customer's facility enabled iterative firmware refinement throughout development, ensuring the FPGA implementation matched proven performance while adding modern capabilities. The resulting architecture delivered real-time RPM monitoring, overspeed threshold adjustment, and measurement breakpoint configuration, transforming field service requirements into remote support capabilities across distributed power plant installations. Cyth built the turbine speed and overspeed monitoring system on the NI sbRIO-9608 with Cyth's CircaFlex technology, a rapid prototyping solution that delivers connectivity and signal conditioning in a compact footprint.

    System Architecture & Capabilities
    - FPGA-based turbine speed measurement: LabVIEW FPGA Module provides deterministic edge counting logic for gear tooth pulse trains, replacing four CPLD chips with a unified FPGA implementation
    - Configurable overspeed alarm logic: LabVIEW FPGA implements four probe channels routing to alarm outputs through user-defined Boolean conditions
    - Real-time hardware watchdog protection: NI sbRIO FPGA monitors turbine speed continuously and triggers digital outputs to Rockwell PLCs without microcontroller intervention
    - Remote web-based configuration: Custom HTML/CSS/JavaScript interface enabling remote RPM monitoring, overspeed limit adjustment, and measurement breakpoint configuration
    - Multi-sensor probe compatibility: NI sbRIO digital I/O accepts both passive magnetic and active Hall effect sensors across four independent channels
    - PLC communication compatibility: Backward-compatible protocol maintains seamless integration with existing Rockwell PLC infrastructure for drop-in replacement
    - Commercial off-the-shelf platform: NI sbRIO consolidates the bill of materials, reducing custom manufacturing requirements and improving long-term component availability

    Sustainable COTS Platform
    The NI sbRIO-based monitoring solution dramatically enhanced the sustainability of the platform by building on a robust, high-performance COTS hardware foundation, while the custom application developed by Cyth gave the service provider complete ownership of system IP to eliminate vendor dependency. The power services provider experienced several operational improvements:
    - Improved service team utilization: Web-based configuration enabled remote customer support, decreased average service response times, and greatly reduced required field service visits
    - Reduced installation complexity: Form-fit-function design allowed rapid hardware swaps in existing cabinets without PLC system modifications
    - Enhanced long-term supportability: BOM consolidation onto a COTS hardware platform and ownership of software IP eliminated vendor dependency

    The ability to remotely configure and maintain firmware differentiated the provider's turbine monitoring offerings. Full ownership of the software IP enabled the global services provider to develop system support expertise internally and greatly mitigate long-term sustainability concerns.
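    The FPGA speed measurement and alarm logic described above boil down to two small computations: converting a gear-tooth pulse count into RPM, and combining per-channel overspeed flags through Boolean logic. Here is a minimal Python sketch of the math only; the real system implements this in LabVIEW FPGA fabric, and the 60-tooth gear, window length, and any-channel trip policy below are illustrative assumptions, not details from the deployed system.

    ```python
    def rpm_from_pulses(pulse_count: int, teeth: int, window_s: float) -> float:
        """Convert gear-tooth edges counted in a fixed time window to shaft RPM."""
        revolutions = pulse_count / teeth          # full rotations seen in the window
        return revolutions / window_s * 60.0       # rotations per second -> per minute

    def overspeed_alarm(channel_rpms, limit_rpm, trip_on_any=True):
        """Combine per-channel overspeed flags with simple Boolean logic."""
        trips = [rpm > limit_rpm for rpm in channel_rpms]
        return any(trips) if trip_on_any else all(trips)

    # 600 pulses from a hypothetical 60-tooth gear in 0.1 s -> 6,000 RPM
    speed = rpm_from_pulses(600, teeth=60, window_s=0.1)
    alarm = overspeed_alarm([speed, 5900.0, 5800.0, 5750.0], limit_rpm=5950.0)
    ```

    On the sbRIO, the equivalent edge counting and comparison run continuously in FPGA fabric, which is what lets the hardware watchdog trip the PLC outputs without the processor in the loop.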

  • Arbitrary AWG for Next-Generation Semiconductor Manufacturing

    Semiconductor equipment manufacturer achieved next-generation etching capabilities through advanced waveform generation and control built with NI PXI and LabVIEW FPGA.

    Custom Advanced Arbitrary Waveform Generator (AWG)

    Project Summary
    Semiconductor equipment manufacturer achieved next-generation etching capabilities through advanced waveform generation and control built with NI PXI and LabVIEW FPGA.

    System Features & Components
    - High-speed arbitrary waveform generation with complex waveform capabilities enabled the generation of waveforms with 1,000+ samples per millisecond (>1 MS/s)
    - Highly synchronized waveform control and oscilloscope measurements for coordination of digitized signal measurement
    - Streamlined vacuum chamber integration for semiconductor wafer processing applications

    Outcomes
    - Next-generation semiconductor chip manufacturing enabled through high-precision waveform control
    - Widespread customer adoption in every major silicon wafer fabrication site in the world
    - Sustainable innovation enabled by a robust and flexible system architecture built on NI PXI and LabVIEW FPGA

    Technology at-a-glance
    PXIe-1071 chassis, NI PXI-5441 arbitrary waveform generator, PXIe-5105 oscilloscope, PXIe-8822 embedded controller, PXI-7852R FPGA module, LabVIEW FPGA

    Silicon Wafer Etching
    Almost every modern electronic device contains at least one semiconductor chip. Smartphones, TVs, washing machines and cars depend on the precise and complex control of electrical signals that semiconductors provide. Silicon wafers are a foundational material from which many semiconductor technologies are made. Part of the semiconductor manufacturing process includes etching microscopic, 3D patterns onto these silicon wafers to form electronic devices like transistors, capacitors and interconnects that are critical for the function of the manufactured microchip.
    The level of precision with which these electronic components are etched into the silicon directly impacts the performance of the microchip, making the uniformity and accuracy of these etched features critical at the nanometer scale.

    Complex Waveform Requirements
    A major semiconductor equipment manufacturer was facing significant limitations in their equipment's ability to support the manufacture of next-generation chips. Their semiconductor manufacturing tools were deployed into many silicon wafer fabrication sites, and their manufacturing customers were continuously coming up against the limitations of these systems, putting their market position and market share at risk. They needed to upgrade the simple signal generators in their current solution to waveform generators capable of delivering the complex waveforms necessary to precisely control the microscopic piezo coils central to their unique etching process. To maintain and expand their customer base, they required a solution capable of:
    - Precision control: Their equipment needed to drive microscopic piezo coils at rates of 2,000-100,000 times per second, requiring advanced control capabilities with arbitrary waveform programming and thousands of sample points per period
    - Complex waveform requirements: Simple waveforms like sine, square and triangular signals were not sufficient for controlling the complex etching operations their customers required; they needed the capability to customize every single point within the waveforms controlling their piezo coils
    - High-speed synchronization: The waveform generator required tight coupling with oscilloscope measurements to synchronize digitized signal acquisition
    - Global deployment: The solution needed to integrate seamlessly with semiconductor manufacturing equipment deployed worldwide

    Customized Advanced AWG
    The global semiconductor equipment manufacturer approached Cyth Systems for help improving their etching capabilities.
    Cyth's expertise developing complex, highly synchronized control systems and custom waveform generator solutions enabled them to rapidly iterate on the customer's existing solution to provide high-performance waveform generation, measurement, and control. The development process of the Arbitrary Waveform Generator (AWG) included three iterations:
    - 1st generation: Replicate simple, existing waveform generation capabilities on the NI PXI platform
    - 2nd generation: Synchronize AWG pulses with oscilloscope measurements to ensure accurate digital signal data acquisition
    - 3rd generation: Further refine synchronization between waveform pulses and oscilloscope measurements to enable driving piezo coils in the upper frequency ranges (up to 100,000 times per second)

    Left: 2nd Generation Arbitrary Waveform Generator, Right: 1st Generation Waveform Generator.

    The AWG system was built to perform sophisticated waveform analysis, generation, and control without operator intervention. It:
    - Collected waveforms and measured their length and characteristic shape
    - Determined the optimal number of points required to accurately describe each waveform
    - Set appropriate sampling rates based on waveform complexity
    - Calculated the precise number of points needed to achieve target sample rates

    The equipment manufacturer required 1,000+ samples per millisecond (>1 MS/s) to accurately characterize the waveforms to be generated; the samples were then upscaled to 200 MS/s to ensure smooth signal quality as the waveform is output. The waveform generator was tightly coupled with an NI PXI oscilloscope in a closed-loop approach to enable real-time system optimization. The synchronization in the measurements of digitized signals ensured precise timing coordination between waveform output and measurement feedback.

    Graphs comparing the outputs of a simple waveform generator vs. an arbitrary waveform generator.

    Leveraging the NI PXI platform with LabVIEW FPGA software, Cyth created a sophisticated Arbitrary Waveform Generator (AWG) capable of supporting next-generation semiconductor equipment with dramatically improved high-speed and high-sample-rate waveform control. For the semiconductor equipment manufacturer, the greatest differentiators of the NI platform were:
    - Measurement integration: Synchronized waveform generation and oscilloscope measurement in a single platform
    - Firmware flexibility: LabVIEW-based algorithms enable rapid parameter adjustments and optimization
    - Hardware reliability: The NI PXI platform provides industrial-grade reliability for 24/7 manufacturing operations
    - Compact footprint: A 4-slot PXI chassis delivers advanced capabilities in a space-efficient design

    System | PXI Card Specifications | Use
    PXIe-1071 Chassis | 4-Slot Chassis | PXI Chassis
    NI PXI-5441 | 43 MHz, 100 MS/s AWG, 16-Bit, Onboard Signal Processing | Arbitrary Waveform Generator
    PXIe-5105 | 60 MHz, 8-Channel, 12-Bit PXI Oscilloscope | High-Speed & High-Sample-Rate Waveform Measurement
    PXIe-8822 | Embedded Controller, 2.4 GHz Quad-Core Processor | PXI Controller
    PXI-7852R | FPGA-Based I/O, Virtex-5 LX50 FPGA, 750 kS/s | Data Logging & Control

    Sustainable Innovation
    The Arbitrary Waveform Generator delivered transformative capabilities that positioned the equipment manufacturer for next-generation semiconductor manufacturing leadership.
    The most impactful system performance improvements were:
    - Advanced waveform control: Transition from simple signal generation to arbitrary waveform programming with thousands of sample points per period
    - Synchronized measurement: Tight integration between waveform generation and oscilloscope measurement for closed-loop optimization
    - Scalable sampling rates: Flexible sampling from >1 MS/s for analysis up to 200 MS/s for output generation

    The overall system improvements enabled the customer to deliver:
    - Global deployment capability: System integrated successfully across every major silicon wafer fabrication site worldwide
    - Next-generation semiconductor manufacturing capabilities: Advanced waveform control capabilities support production of cutting-edge semiconductor devices
    - Future-ready platform: Modular PXI architecture enables flexibility for continued system evolution and capability expansion

    The new, advanced capabilities fundamentally strengthened the semiconductor equipment manufacturer's position as a leader in their space. The transition from simple signal generation to sophisticated arbitrary waveform generation and control enabled their equipment to meet the high-precision requirements of next-generation semiconductor chip manufacturing. These modernized etching systems became a critical enabler for the semiconductor industry's continued advancement toward smaller, faster, and more efficient devices. These systems were built for sustainable innovation: the proven software architecture and modular NI PXI I/O enable continuous capability enhancement as semiconductor manufacturing requirements continue to evolve.
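    The upscaling step described above, from an analysis rate above 1 MS/s to a 200 MS/s output rate, can be illustrated with simple linear interpolation: each waveform segment is expanded by an integer factor (200 in the 1 MS/s to 200 MS/s case) before output. This is a hedged sketch of the concept only; the actual system performs generation on the PXI-5441 with onboard signal processing, not in Python.

    ```python
    def upsample_linear(samples, factor):
        """Expand a waveform by an integer factor using linear interpolation,
        inserting factor - 1 intermediate points between each pair of samples."""
        out = []
        for a, b in zip(samples, samples[1:]):
            out.extend(a + (b - a) * k / factor for k in range(factor))
        out.append(samples[-1])                 # keep the final endpoint
        return out

    # A 2-point ramp segment expanded 4x -> [0.0, 0.25, 0.5, 0.75, 1.0]
    ramp = upsample_linear([0.0, 1.0], factor=4)
    ```

    In practice the smoother the interpolation (linear, spline, or hardware filtering), the cleaner the analog output; the principle of describing the waveform sparsely and outputting it densely is the same.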

  • Precision Rotor Balancing for Turbomolecular Pumps

    Ultra-high vacuum equipment manufacturer developed a high-precision rotor balancing system in five weeks using the NI USB-9234 DSA, LabVIEW, and the NI Sound and Vibration Measurement Suite.

    *As Featured on NI.com
    Original Author: Gerard Johns, Edwards
    Edited by: Cyth Systems

    Edwards turbomolecular pump CAD rendering

    Project Summary
    Ultra-high vacuum equipment manufacturer developed a high-precision rotor balancing system in five weeks using the NI USB-9234 DSA, LabVIEW, and the NI Sound and Vibration Measurement Suite.

    System Features & Components
    - Integrated IEPE signal conditioning of the NI USB-9234 enabled direct accelerometer connection without external amplification
    - NI sound and vibration measurement software included signal reconstruction and analysis functions that facilitated quick implementation of custom processing algorithms
    - LabVIEW software enabled rapid prototyping, automated code generation, and implementation of a parallel processing architecture
    - Parallel processing architecture enhanced balancing operation throughput by balancing two pumps simultaneously per station
    - Interactive operator guidance tools translated phase angle and magnitude calculations into correction mass assembly instructions

    Outcomes
    - Complete system development from concept to production deployment in under five weeks
    - Dynamic input range exceeding all commercially available balancing systems
    - Parallel balancing and correction operations per NI USB-9234 increased production throughput at a fraction of the cost of a typical commercial system
    - Class-leading vibration measurement performance enabled successful product launch

    Technology at-a-glance
    Hardware: NI USB-9234 Dynamic Signal Analyzer (obsolete; comparable NI C Series module: NI-9234), accelerometers, digital photo sensor for rotor position detection
    Software: NI LabVIEW, NI Sound and Vibration Measurement Suite (obsolete; replaced by the LabVIEW Sound and Vibration Toolkit)

    Ultra-High Vacuum Pumps
    Laboratory mass spectrometers and electron microscopes require ultra-high vacuum environments where even minimal vibration compromises measurement accuracy. Edwards, a leading manufacturer of vacuum equipment serving the semiconductor and pharmaceutical industries, faced a critical challenge when developing their nEXT Turbo Molecular Vacuum Pump. They wanted to achieve world-class vibration performance, which required rotor balancing tolerances that were impossible to measure with existing solutions available on the market. Edwards needed a custom balancing system that could measure vibration with unprecedented precision for rotors spinning at high velocities.

    Measurement & Data Analysis Challenges
    The nEXT Turbo Molecular Vacuum Pump utilized a turbine rotor spinning at 60,000 rpm, with blade-tip velocities approaching 90% of the speed of sound. The unparalleled capabilities of this technology presented a few critical manufacturing challenges for Edwards.
    - Inaccessible measurement location: The rotor assembly contained within the sealed pump housing prevented direct vibration measurement at the source, deviating from standard balancing methodology
    - Manual calculations: Off-the-shelf balancing solutions required operators to perform manual calculations and balancing compensation, which greatly slowed production and increased the potential for human error
    - Compressed timeline: Product launch was weeks away; Edwards needed to take their concept through validation to production deployment relying only on their internal teams for development
    - Measurement precision: The dynamic input range required for the detection of minute vibrations at high rotor speeds exceeded all commercially available rotor balancing solutions

    Due to technical and timeline constraints, Edwards knew they could not rely on the conventional balancing solutions commercially available. They decided to leverage an NI-based technology stack to accelerate development while staying within budget.
    Left: OEM vacuum pumps by Edwards, Right: the inside fan blades of a turbomolecular pump.

    High Measurement Accuracy
    Edwards' engineering team selected a few key pieces of the NI platform to prototype and deploy a custom balancing solution.

    Core Platform Selection:
    - NI LabVIEW software: Graphical programming environment enabled rapid prototyping and robust application development into production environments
    - NI Sound and Vibration Measurement Suite: Domain-specific libraries with signal reconstruction functions and vibration analysis algorithms
    - NI USB-9234 dynamic signal analyzer: Precision measurement hardware with a 51.2 kS/s sample rate per channel, 24-bit resolution, and 102 dB dynamic input range
    - Integrated IEPE signal conditioning: Direct connection from the USB-9234 to the accelerometer simplified the system architecture and reduced potential signal degradation points

    This development approach enabled Edwards to accelerate prototyping and application development without sacrificing the high measurement accuracy critical for this application.

    Rapid Prototyping Through Production
    The NI Sound and Vibration Measurement Suite signal reconstruction functions enabled interfacing a digital photo sensor with analog inputs for rotor position detection. An accelerometer attached to the pump body captured vibration data. The Sound and Vibration Assistant automatically generated LabVIEW code from the prototype configuration, eliminating weeks of manual coding and providing a validated foundation for production application development.
    Edwards built production-ready features on the LabVIEW foundation:
    - Automated calibration functions to maintain measurement accuracy across multiple balancing rigs
    - Proprietary calculation algorithms for measuring pump imbalance through the housing rather than directly at the tip of the rotor
    - Interactive operator guidance tools to translate phase angle and magnitude calculations into straightforward instructions for rotor balancing

    Using LabVIEW's parallel processing architecture, Edwards configured the USB-9234's remaining channels to balance two pumps simultaneously. This effectively doubled production capacity from a single hardware platform at a fraction of the cost of purchasing two commercial systems. In less than five weeks, Edwards took their proof of concept through validation and into production deployment.

    Increased Production & Enhanced Sustainability
    Edwards' augmented technical capabilities far exceeded commercially available balancing equipment. The key technical features enabled by the NI USB-9234 included:
    - Detection and correction of vibrations in rotors spinning at 60,000 rpm
    - A 102 dB dynamic input range that enhanced measurement flexibility and enabled quick accommodation of variations in pump models and rotor configurations

    Leveraging the NI technology stack, Edwards increased their rotor balancing operation efficiency:
    - Cost savings: Parallel, dual-pump rotor balancing capability delivered two complete balancing rigs at a fraction of the price of a single commercial system
    - Improved cycle time: Automated imbalance calculations and operator guidance tools reduced balancing cycle time and training requirements
    - Operational control: Internal solution support eliminated vendor dependencies and enabled direct implementation of system enhancements as the product line expanded
    - Platform standardization: LabVIEW and NI hardware as accepted standards across Edwards' production test, global service centers, and R&D laboratories resulted in reduced training complexity and enhanced knowledge transfer

    In five weeks, Edwards used LabVIEW, the USB-9234, and the Sound and Vibration Measurement Suite to rapidly develop a custom solution for high-speed rotor balancing that outperformed commercial alternatives.
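    The "phase angle and magnitude calculations" behind single-plane balancing can be sketched as a single-bin DFT of the vibration signal at the once-per-revolution frequency fixed by the photo sensor. The following Python is an illustrative stand-in for that general technique, not a reproduction of Edwards' proprietary through-housing algorithms; the sample counts are assumptions.

    ```python
    import cmath
    import math

    def imbalance_vector(vibration, samples_per_rev):
        """Magnitude and phase (degrees) of the once-per-rev vibration component.

        vibration is accelerometer data sampled at a fixed rate; the photo
        sensor fixes how many samples span one rotor revolution. The result
        is what operator guidance converts into a correction-mass position.
        """
        n = len(vibration)
        total = 0j
        for i, v in enumerate(vibration):
            total += v * cmath.exp(-2j * math.pi * i / samples_per_rev)
        component = 2 * total / n              # scale to the sinusoid's amplitude
        return abs(component), math.degrees(cmath.phase(component))

    # A pure 1x vibration with a -45 degree phase lag is recovered exactly
    sig = [math.cos(2 * math.pi * i / 32 - math.pi / 4) for i in range(320)]
    mag, phase = imbalance_vector(sig, samples_per_rev=32)
    ```

    Averaging over many whole revolutions, as this loop does, rejects vibration components that are not synchronous with the rotor, which is why the once-per-rev position reference is essential.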

  • Automated Battery QA Ensures Medical Device Reliability

    Medical device manufacturer achieved 100% quality verification in 12 weeks by implementing an automated battery testing solution built with Cyth BatteryFlex and NI PXI.

    The BatteryFlex platform performs battery tests and analysis of portable ventilator batteries.

    Project Summary
    Medical device manufacturer achieved 100% quality verification in 12 weeks by implementing an automated battery testing solution built with Cyth BatteryFlex and NI PXI.

    System Features & Components
    - NI PXI data acquisition platform provided high-accuracy voltage and current measurements to enable comprehensive battery characterization
    - Cyth BatteryFlex multi-channel testing architecture enabled simultaneous testing of multiple batteries to maximize testing throughput
    - LabVIEW user interface provided operators with live test data visualization
    - Scalable platform architecture accommodated increased production volumes without additional capital investment

    Outcomes
    - 100% individual battery verification achieved, eliminating field failures due to battery capacity issues
    - Test cycle time and cost of test significantly reduced through parallel testing of multiple batteries
    - Turnkey automated test solution delivered in 12 weeks by leveraging the Cyth BatteryFlex platform

    Technology at-a-glance
    NI PXI platform, LabVIEW software, Cyth BatteryFlex

    Verifying OEM Component Performance
    When designing new products, manufacturers must verify that every component in the bill of materials (BOM) performs to OEM specifications, ensuring that the product delivers on promises made to end-users. Some components influence a device's overall performance more than others. For example, a defective or substandard battery built into a device in a life-critical application can result in risks to patient safety and an enormous liability burden for the device manufacturer.

    Manual Testing Limits Scalability
    The device manufacturer's manual battery testing methodology was not capable of addressing their quality assurance needs.
    - Measurement accuracy limitations prevented the acquisition of the high-precision voltage and current measurements necessary for comprehensive battery characterization
    - Testing inefficiencies of manual processes created bottlenecks in production timelines
    - Quality uncertainty from the lack of individual battery verification created deployment risks for life-critical applications
    - Scalability constraints of manual testing prevented the manufacturer from increasing production volumes

    Left: The customer's portable ventilator, Right: Traditional ventilator.

    To hold their supplier accountable and ensure patient safety, the device manufacturer needed a high-accuracy, automated solution to verify the quality of the batteries built into their portable ventilators. They decided to partner with Cyth Systems to address their automated testing needs because of Cyth's proven expertise designing high-throughput automated test solutions.

    Comprehensive Battery Characterization
    Cyth deployed their BatteryFlex architecture, an automated battery testing platform for comprehensive battery characterization leveraging the high-accuracy measurement capabilities of NI's PXI platform. The programmatic execution of numerous test protocols simultaneously across multiple batteries enabled the device manufacturer to drive test time and test cost down substantially.
    Key hardware capabilities:
    - NI PXI data acquisition for high-accuracy voltage and current measurements
    - Cyth BatteryFlex multi-channel architecture enabled simultaneous battery testing
    - Custom test fixtures ensured secure battery connection and consistent test conditions

    Key software capabilities:
    - LabVIEW-based user interface for live data visualization
    - Automated test sequencing for five critical battery characterization protocols: Open Circuit Voltage (OCV), Power Cycle Test, Capacity Testing (Static, Script, Pattern/Pulse), DC Internal Resistance (DCIR), and AC Internal Resistance (ACIR)
    - Battery capacity identification and quality verification for each individual battery

    Left: PXI data acquisition platform, Right: BatteryFlex LabVIEW user interface (UI) showing live test data.

    100% Quality Verification
    The turnkey test solution transformed the device manufacturer's battery quality assurance from a manual bottleneck into an automated, scalable process.
    - Quality assurance: 100% individual battery verification prior to deployment eliminated field failures from battery capacity issues
    - Testing efficiency: Simultaneous testing of multiple batteries substantially reduced cycle time and overall cost of test
    - Production flexibility: Scalable platform accommodates increased testing volumes without additional capital investment
    - Risk mitigation: High-accuracy measurements ensure that only specification-compliant batteries are deployed into life-critical ventilators
    - Rapid deployment: The BatteryFlex reference design accelerated solution development; everything from proof-of-concept to turnkey test system was deployed in 12 weeks

    Now, the medical device manufacturer operates with confidence, knowing that every single battery deployed into their portable ventilators meets their exact capacity specifications, ensuring reliable performance in life-critical patient care.
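    Two of the protocols listed above reduce to arithmetic worth making explicit: DCIR is the voltage sag under a known current pulse divided by that current, and capacity verification is a pass/fail comparison against the rated value. A minimal Python sketch of the math; the voltages, currents, and 95% acceptance fraction below are illustrative assumptions, not the customer's actual limits.

    ```python
    def dc_internal_resistance(v_rest: float, v_load: float, i_pulse: float) -> float:
        """DCIR from a current-pulse test: resistance = voltage sag / current step."""
        return (v_rest - v_load) / i_pulse

    def capacity_passes(measured_mah: float, rated_mah: float,
                        min_fraction: float = 0.95) -> bool:
        """Pass/fail: measured capacity must reach a fraction of the rated value."""
        return measured_mah >= rated_mah * min_fraction

    # A 0.10 V sag under a 2 A pulse -> ~50 milliohms
    r_dc = dc_internal_resistance(v_rest=4.10, v_load=4.00, i_pulse=2.0)
    ok = capacity_passes(measured_mah=3100.0, rated_mah=3000.0)
    ```

    ACIR follows the same quotient with a small AC excitation instead of a DC pulse, which is why the high-accuracy synchronized voltage and current measurements of the PXI platform matter for both.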

  • CompactRIO Enables Undergraduate Power Electronics Education

    *As Featured on NI.com
    Original Author: Mats Alaküla, Lund University
    Edited by: Cyth Systems

    Project Summary
    Lund University integrated the NI CompactRIO into its power electronics lab, teaching students real-time power electronics with research-grade systems.

    System Features & Components
    - Real-time operating system (RTOS) enabled speed control and PID optimization
    - FPGA-level logic enabled implementation of hysteresis bounds and simplification of the overall system architecture
    - Live data visualization and parameter adjustment enabled through an HMI

    Outcomes
    - Achieved "fast computer" model levels of determinism, enabling real-world levels of system responsiveness
    - Reliable control loop execution delivers continuous live monitoring
    - Equipped undergraduate students with hands-on experience using research-grade control systems

    Technology at-a-glance
    Hardware: NI cRIO-9063 chassis, NI cRIO-9038 chassis
    Software: LabVIEW, LabVIEW FPGA, LabVIEW Real-Time

    Control Theory in Practice
    In university electrical engineering labs, students learn how motor drives and power electronics operate. These types of systems require microsecond-level precision to ensure continuous and smooth operation of motors. For educators, it can be a challenge to bridge the gap between theoretical "fast computer" models and real-world control systems that introduce computational delays. Lund University in Sweden needed to address this education gap to ensure their students could experience firsthand how control theory performs in a real-world context.

    Determinism Requirements
    Professor Mats Alaküla needed to teach students how to control electrical motor drives and power electronics systems with sub-millisecond time constraints. Maintaining currents within safe operating limits requires voltage control within hundreds of microseconds. The existing MATLAB/Simulink and dSPACE technology platform could not keep pace with modern electrical drives requiring increasingly higher frequencies.
    The Windows-based monitoring system interfered with control, disrupting the simulation of a realistic control system. The majority of students' time was spent creating workarounds for hardware limitations, not mastering control algorithms themselves. Lund University needed a solution that would prepare their students for the real-world scenarios they would encounter in their future careers.

    Hysteresis Control Enabled
    The university chose to adopt the NI CompactRIO platform, paired with the LabVIEW Real-Time and FPGA Modules, to implement a control architecture that would eliminate computation delays.

    NI cRIO-9063 & NI cRIO-9038 CompactRIO controllers.

    System Architecture & Capabilities
    - FPGA-based current control: Time-critical electrical current control implemented directly on the FPGA
    - Real-time processing: Slower control loops for ensuring optimal system performance, including engine speed trajectory following and continuous PID parameter recalculation, run on the real-time operating system (RTOS)
    - Windows OS: Live data visualization and data logging enabled through a user interface hosted on the Windows OS
    - Integrated resolver signal processing: cRIO I/O availability and measurement speed capabilities eliminated the need for dedicated resolver circuits
    - Hysteresis control capability: FPGA measurement speed enabled direct current control with real-time three-phase current visualization in real-imaginary planes
    - Sub-100-microsecond voltage control: Implemented on the FPGA and RTOS to maintain current within the acceptable intervals required by electrical drives

    The responsiveness of the cRIO enabled the implementation of control methods that the previous solution couldn't support. Direct current control via hysteresis required high determinism to keep current within precise tolerances.

    Applied Motion stepper motor drives, controlled and communicated with using NI LabVIEW software.
Traditional rotor position measurement requires high-frequency input signals and additional processing circuits. The measurement speed and I/O flexibility of the NI cRIO platform were capable of directly handling resolver signal processing and simplifying the system architecture students interact with. The self-contained nature of the cRIO, paired with its ability to push live updates to host computers, eliminated the Windows OS interference problems that previously disrupted control loops. Real-World "Fast Computer" The architecture enabled by the technology platform eliminated the gap between theory and practice for these students, as the solution responds as theoretical “fast computer” models would, making control theory directly applicable to real-world systems. Lund University’s introduction of the NI CompactRIO platform to undergraduate students enabled continuity and best practice sharing with graduate students already using the platform for advanced electrical machine development. The university is now fully capable of preparing their students for their future careers by enabling them to gain hands-on experience with the optimal control strategies driving the pace of development in modern power electronics engineering. Original Author: Mats Alaküla, Lund University Edited by: Cyth Systems

  • Quality Assurance & Control through LabVIEW-based Automation

    Window tint manufacturer improved QA through implementation of a turnkey, automated film testing solution built by Cyth Systems with LabVIEW. Technician installing window tint film on a vehicle Project Summary Window tint manufacturer improved quality assurance through implementation of a turnkey, automated film testing solution built on LabVIEW. System Features & Components Turnkey film quality testing system delivered calibration repeatability, multi-axis measurement and standardized quality control LabVIEW-based measurement automation and algorithmic analysis of images with custom calibration routines and pass/fail decision logic Custom optical measurement solution contained in a benchtop housing with a high-resolution camera, high-powered LED array and cartridge clamp mechanism Outcomes Standardized quality control across all production lines through eliminating manual inspection Reduced error rates in film acceptance by implementation of measurable pass/fail criteria Expanded quality assurance capacity by deploying test units to every production line Improved product quality assurance through enabling capture of fine defects Technology at-a-glance LabVIEW software Custom optical measurement housing High-resolution camera High-power LED illumination Precision cartridge clamp solution Precision Engineering and Inspection Window tint film shields vehicles and buildings from heat, UV damage and glare. This seemingly simple technology requires precision engineering and manufacturing to ensure adequate visibility and durability of the tint long term. Ensuring consistent film quality at production volume requires automated testing to deliver reliable, high-quality results. Manual Testing Bottlenecks A well-established San Diego window tint manufacturer needed to transform their quality control process from manual inspection to automated precision measurement. 
The greatest limitations to their quality assurance were: Manual inspection resulting in quality variations: different quality inspectors rated distortion and striation levels subjectively, producing inconsistent quality judgements Random sampling missing defects: randomly selecting film segments from production lines left gaps in quality assurance coverage No standardized measurement criteria: without numerical pass/fail thresholds, accepted quality standards varied significantly between shifts Scale limitations: manual processes limited production volumes and scalability of the business The quality of window tint film relies on minimal surface grain and distortion. Striations on the film directly impact transparency, and consequently, customer satisfaction. The window tint film manufacturer needed to implement an automated testing system to achieve the consistent, repeatable measurements they needed across all of their product lines. They turned to Cyth Systems for help because of their expertise with test automation and precision measurements. Multi-subsystem Architecture Cyth engineered a benchtop distortion measurement system, a turnkey solution that combined optical measurement hardware with custom LabVIEW software for automated quality control. 
The prominent subsystems in the solution were: Housing with cartridge clamp system: mechanical fixture for positioning and stabilizing film samples to ensure consistent test quality High-powered LED illumination: array of controlled lighting to eliminate ambient variables and highlight surface characteristics of test samples Precision camera scanning: high-resolution image capture scans the entire film surface with a calibrated field of view Custom LabVIEW measurement algorithm: custom algorithm analyzes surface grain, calculates distortion ratings and applies pass/fail criteria based on specifications Multi-axis measurement protocol: four-point measurement system captures comprehensive distortion profiles Calibrated repeatability: systems were standardized against hundreds of reference samples to ensure the manufacturer’s existing rating scale matched the specifications and thresholds in the test system The automated testing workflow was sequenced as follows: Operator calibrates the camera to the appropriate field of view Film sample is loaded into the cartridge clamp First measurement is captured at 0 degrees orientation Cartridge is rotated 90 degrees and the measurement is repeated LabVIEW-based algorithms calculate the average distortion score of the sample Pass/fail determination is made automatically based on quality acceptance thresholds LabVIEW user interface enables test technicians to view and save results. Iterative Algorithm Refinement The window tint manufacturer’s product lines varied from automotive window tint to commercial building installations, with each application requiring very specific distortion tolerances. One of the most challenging technical aspects to develop was the calibration of the optical measurement system. The custom measurement algorithm was iteratively tested with hundreds of collaboratively-rated film samples, which trained the system to recognize quality characteristics that experienced operators identified visually. 
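The multi-axis pass/fail decision in the workflow above reduces to averaging the per-orientation distortion scores and comparing against a product-specific threshold. A minimal sketch, assuming a hypothetical score scale and threshold (the manufacturer's actual rating system and limits are proprietary):

```python
# Sketch of the multi-axis pass/fail logic: average the distortion
# scores captured at each orientation and compare against a
# product-specific threshold. The score scale and threshold are
# hypothetical, not the manufacturer's actual rating system.

def judge_sample(scores, max_allowed):
    """Average per-orientation distortion scores and apply pass/fail."""
    average = sum(scores) / len(scores)
    return {"average": round(average, 2), "passed": average <= max_allowed}

# Four-point measurement: one score per captured orientation
result = judge_sample([1.8, 2.2, 2.0, 2.4], max_allowed=2.5)
```

Keeping the decision to a single numeric comparison is what removes inspector subjectivity: two units calibrated to the same reference samples will return the same verdict for the same film.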
Enclosed, benchtop version of tester alongside laptop HMI. Core System Architecture: Optical measurement subsystem included high-resolution cameras with a calibrated field of view and a high-power LED lighting array enclosed in a custom, compact housing designed for benchtop integration. Mechanical handling subsystem was built with a cartridge clamp mechanism to enable standardized sample positioning with 90-degree rotation capabilities for multi-axis measurement and an operator-accessible interface for replacing samples. Custom LabVIEW control and analysis software enabled the implementation of processing algorithms to automatically measure film sample distortion and display results based on the window tint manufacturer’s existing quality rating system. User interface and reporting features were also built with LabVIEW; the front panel guided operator workflow and included graphs with numerical distortion scores and data trends. Quality control documentation was automatically generated and saved to a secure database. The core elements of the final solution enabled the manufacturer to remove subjectivity from their quality assessments while ensuring adherence to the manufacturer’s existing and established quality rating system. Iterative system calibration process was a collaboration between the manufacturer’s quality experts and Cyth Systems engineering to test hundreds of film samples and ensure measurement accuracy and repeatability. Extensive standardization validation was performed to deliver identical ratings across units for quality assurance at scale. Turnkey integration facilitated operator ramp-up due to a straightforward sample loading procedure and automated measurements. The precise, quantifiable results of the system enabled data-driven decision making, which mitigated quality risks and reduced overall test times. 
Improved Quality Assurance The automated window film quality test solution enabled the manufacturer to improve and scale quality control operations. The turnkey system delivered: Standardized quality control across all production lines through eliminating the inconsistencies in manual inspection. Reduced error rates in film acceptance through implementation of measurable pass/fail criteria, as opposed to subjective operator judgement. Expanded quality assurance capacity by deploying units to every assembly line to ensure comprehensive coverage for all production. Improved product quality assurance through enabling capture of defects that were previously imperceptible.

  • Robotic Automation Triples Sample Preparation Throughput

    Automated viral dispensing system increases biopharmaceutical R&D lab’s throughput by 300% with precision robotics, LabVIEW, NI TestStand, PXI hardware and machine vision. Custom, automated dispensing subsystem coats test slides with sample suspensions. Project Summary Automated sample dispensing system increased biopharmaceutical R&D lab's throughput by 300% with LabVIEW, NI TestStand and PXI System Features & Components 6-axis robot arm with precision gripper and machine vision to ensure safe and efficient slide handling 16-channel peristaltic pump system for transporting suspensions with viral material to dispensing nozzle NI LabVIEW and TestStand software architecture for controlling PXI and process hardware Biosafety Level 2 (BSL-2) cabinet with laminar airflow for containment of viral matter Outcomes 300% sample preparation throughput increase (1,000 slides per 2-hour cycle) Complete containment of infectious materials through implementation of Biosafety Level 2 (BSL-2) protocols 5-30μL dispensing accuracy (±10%) ensures consistent test preparation Emergency safety systems with light curtain monitoring Technology at-a-glance Hardware: NI PXI-1082 Chassis (obsolete) NI PXI-6514 NI PXI-6225 (obsolete) 1300 Series Class II, Type A2 Biological Safety Cabinet (Thermo Fisher) 6-stop Indexing Rotary Table Denso 6-axis robot Watson Marlow 520Di Peristaltic Pump Drive (16 channel) Basler Camera 4 Megapixel Color Edmund Optics Telecentric Lens SONY Mini Pinhole Lens Camera Sick Proximity Sensors Safety Light Curtain Sender and Photoelectric Sensor Software: LabVIEW NI TestStand Biopharmaceutical Viral Research Breakthroughs in biopharmaceutical research depend on precise, high-volume testing of viral samples. Researchers require thousands of consistently prepared test slides with exact viral concentrations to develop flu vaccines or understand the transmission of respiratory diseases. 
The manual preparation of virus-coated slides can slow critical research and expose laboratory personnel to risk because of the continuous handling of infectious materials. For biopharmaceutical R&D labs, testing throughput directly impacts their research timelines and discovery rates. To accelerate their research and bring treatments to market faster, researchers need an automated solution that can handle infectious materials safely and rapidly prepare viral samples. Manual Sample Preparation Bottlenecks A biopharmaceutical R&D company faced a serious bottleneck in their respiratory disease testing program due to the manual dispensation of viral samples by laboratory operators. Operators were dispensing eight different viral samples, including rhinovirus, influenza, and SARS, onto individual test slides, one by one. This manual process greatly limited their testing capacity and created safety concerns for operators handling infectious materials at scale. Their existing workflow could only produce a fraction of the test slides they needed to support their research initiatives, which created delays in their studies and greatly limited their ability to scale their programs effectively. This biopharmaceutical R&D company decided to collaborate with Cyth Systems to build an automated slide coating solution because of their expertise with industrial automation and precise mechanical control. Industrial Automation and Robotics Cyth engineered a fully automated viral dispensing system that integrated precision robotics, multi-channel pumping, and biological safety containment. Cyth designed a complete automated workflow to transform their inefficient, manual processes into a high-throughput and safety-focused operation. 
Core System Components: Heated stirring plate for maintaining suspension of eight distinct viruses in liquid mediums 16-channel peristaltic pump for delivering precise viral volumes to nozzles for dispensation onto a slide Denso 6-axis robot arm for managing slide positioning with 5-30μL dispensing accuracy (±10%) Indexing rotary table for transporting coated and dried slides to a rack for storage Biological Safety Level II cabinet for containing infectious materials Advanced Process Architecture built on NI LabVIEW and TestStand: Supervisory processes built on LabVIEW programs orchestrated by NI TestStand software. NI PXI chassis delivers high-speed I/O and Modbus/RS485 communication capabilities to coordinate system hardware. Precision Robotic Control through Machine Vision: Sick proximity sensors and high-resolution cameras identify available positions on the drying rack. Comprehensive Safety Integration: Emergency light curtains trigger an immediate halt to process hardware if the enclosure is breached. The Biological Safety Level II (BSL-II) cabinet uses laminar airflow to contain all viral particles and prevent contamination of samples. Workflow steps: Robot arm retrieves clean slides from supply stack Heated stirring plate (1) prepares viral content for dispensation; eight distinct viruses were suspended in a liquid medium. 16-channel peristaltic pump (2) delivers virus suspensions to dispensing nozzles. Denso 6-axis robot arm (4) picks up an available testing slide (3) and positions it underneath the nozzle to be coated in the suspension, then the arm transports the slide to a drying rack Indexing rotary table (5) transports completed and dried samples to an available rack with storage for up to 1000 samples 1: Heated stirring plate. 2: 16-channel peristaltic pump. 3: Stack of clean slides. 4: Denso 6-axis robot with custom-fabricated gripper. 5: Indexing rotary table with a drying rack for slides. 
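The 5-30μL (±10%) dispensing specification above implies a per-dispense acceptance check: the commanded volume must be in range, and the delivered volume must fall within tolerance. A minimal sketch of that check, with function and variable names that are illustrative assumptions rather than the deployed LabVIEW/TestStand logic:

```python
# Sketch of a dispensing-accuracy check implied by the 5-30 uL (+/-10%)
# specification. Names and values are illustrative assumptions, not the
# deployed LabVIEW/TestStand implementation.

def dispense_within_spec(target_ul, delivered_ul, tolerance=0.10):
    """Check a dispense against the 5-30 uL range and +/-10% tolerance."""
    if not 5.0 <= target_ul <= 30.0:
        raise ValueError("target volume outside supported 5-30 uL range")
    error = abs(delivered_ul - target_ul) / target_ul
    return error <= tolerance

ok = dispense_within_spec(20.0, 21.5)   # 7.5% error: within tolerance
bad = dispense_within_spec(20.0, 23.0)  # 15% error: out of tolerance
```

Expressing tolerance as a fraction of the commanded volume keeps the check valid across the whole 5-30μL range, since absolute error limits would be too loose at 5μL and too strict at 30μL.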
Sample Preparation Throughput Tripled Production efficiency gains: 300% increase in test sample preparation throughput 1,000 slides produced every 2-hour cycle Continuous operation with minimal manual interaction from operators Enhanced safety: Complete containment of infectious materials achieved by adhering to BSL-II protocols Eliminated direct operator exposure to viral samples Emergency stop enables immediate operator intervention as needed Research acceleration: Increased access to high-quality samples greatly increased biopharmaceutical research testing capacity Consistent dispensation of slide coating improved reliability of research results Reduced slide preparation time allows researchers to focus on analysis rather than sample preparation The automated system has positioned the biopharmaceutical R&D company to scale their respiratory disease research programs significantly, enabling them to accelerate the development of treatments. The enhanced safety protocols, combined with increased sample preparation throughput, gave the R&D company a competitive advantage when securing research contracts and advancing their pharmaceutical developments.

  • Pipette Manufacturing Quality Improved by NI PXI and cRIO Automation

    Pipette manufacturer partnered with Cyth to reverse-engineer and automate their proprietary manufacturing process, achieving microscale precision at high volumes. Fully integrated and enclosed pipette manufacturing station. Project Summary Improved product quality achieved through automation of multi-stage glass fabrication process using high-resolution vision inspection and precision motion control. System Features & Components NI CompactRIO-based system coordinated multi-stage glass fabrication process High-resolution video capture to ensure continuous quality inspection throughout manufacturing Precision motion control for sub-micron positioning accuracy to ensure fine precision manipulation of delicate glass pipettes Automated thermal management systems for heating element control and monitoring Custom LabVIEW software for image processing, motion control, and thermal regulation Outcomes Microscale manufacturing precision with sub-micron positioning accuracy for medical-grade pipette production Reverse-engineered manufacturing process mitigated sustainment risks by enabling access to low-level process IP Automated multi-stage quality inspection replaced manual verification, significantly improving test times and quality assurance Technology at-a-glance NI CompactRIO-9064 chassis NI-9512 motor drive interface module (obsolete) PXIe-1073 chassis NI PXI-6521 industrial digital I/O module NI PXIe-4113 power supply LabVIEW  software High-resolution cameras Precision servo motors and drives Custom thermal control hardware High Accuracy Application Successful in vitro fertilization procedures rely on an instrument with precision finer than the width of a human hair, the insemination pipette. These microscale glass tools require ultra-precise manufacturing; the beveled tips must be capable of penetrating an egg cell to deliver genetic material without damaging the cell membrane. 
Delicate procedures like these require precision tools manufactured to the utmost quality. Microscale Manufacturing A global leader in IVF solutions was faced with an inflection point in their pipette manufacturing process. Due to IP restrictions, they needed to find a partner to reverse-engineer their precision manufacturing system with limited additional specifications. They needed a partner that could develop and build an automated manufacturing solution on a microscale. Their pipette manufacturing process required high levels of precision at every stage. The manufacturing process began with glass tube stock, which was then heated and pulled to create ultra-fine threads. These threads were then cut, beveled, inspected, and sharpened to create a medical-grade instrument. The technical challenges included: Microscale manipulation: Each operation was performed on a pipette finer than the width of a human hair, which required sub-micron positioning accuracy Multi-stage quality control: Pipette width verification, length cutting, bevel angle inspection, and pathway validation were performed in tight sequence. Proprietary process constraints: IP restrictions required the independent development of a system capable of matching or exceeding the performance of the previous manufacturing solution Complex thermal processes: Precise heating and thermal management were required to control glass pulling, melting and sharpening processes Fully Automated Manufacturing Cyth brought expertise in high-performance imaging and precision motion control critical to ensuring manufacturing quality and measurement accuracy. Cyth developed an automated pipette manufacturing fixture by incorporating the limited details and customer requirements and performing an in-depth research phase to learn about the system and reverse-engineer its functionality. Stations in the process of being built by Cyth's Manufacturing team in San Diego, CA. 
The NI CompactRIO platform’s high-speed data acquisition and real-time control capabilities enabled the implementation of key manufacturing system capabilities, including: Glass tube heating and pulling for pipette thread stock formation Real-time width measurement and verification with micron-scale resolution Precision cutting and length tolerance verification Beveling and angle inspection Multi-camera vision for continuous process monitoring and quality assurance Sub-micron level positioning accuracy of motion control systems The NI PXI platform was critical for: Precision control of forges for heating filaments to precise temperatures for shaping operations Synchronization of LED illumination for visual inspection with ionizer operation Parallel processing for thermal, optical, and environmental subsystems Improved Quality Consistency Cyth’s reverse-engineered solution exceeded expected accuracy and throughput metrics and provided the pipette manufacturer with reliable, repeatable testing capabilities. The seamless integration into production and long-term sustainability of the manufacturing systems helped to quickly establish a beneficial ROI for system development. Production and quality improvements: Microscale manufacturing precision with sub-micron positioning accuracy Automated multi-stage quality inspection eliminated manual verification steps Reduced human handling of delicate instruments improved quality consistency

  • CompactRIO Enables Automated Circuit Board Testing

    Bed-of-nails test fixture used to test embedded control circuit boards. Project Summary Cyth Systems automated testing of CircaFlex embedded control circuit boards with a custom bed-of-nails fixture using NI CompactRIO, LabVIEW and TestStand in 12 weeks. System Features & Components Custom bed-of-nails mechanical fixture with pneumatic actuation for reliable probe contact with circuit board test points NI CompactRIO platform with multifunction I/O, voltage output, and universal analog input modules LabVIEW-programmed tests orchestrated by NI TestStand to test hundreds of individual components and functional tests in minutes Automated test reporting and storage of all measurement data and pass/fail status for quality validation Outcomes PCBA validation in minutes, greatly improving test throughput Testing execution time reduced due to integration of mechanical, pneumatic, and data acquisition systems onto a unified technology stack Test system development from proof of concept to production-ready unit delivered within 12 weeks Technology at-a-glance Hardware NI CompactRIO real-time control system NI C Series Modules NI-9381 multifunction I/O module (8 AI, 4 AO, 4 DIO, 0-5V) NI-9264 voltage output modules, 25 kS/s/ch, ±10V, 16-channel (qty 2) NI-9219 universal analog input module, 100 S/s/ch, 4-channel Custom bed-of-nails fixture Pneumatic actuation system Embedded PC Software NI LabVIEW NI TestStand Functional Circuit Testing Circuit boards serve as the center of processing and control in most consumer electronic devices, from smartphones to home appliances. These printed circuit board assemblies (PCBAs) require rigorous testing to validate quality and function before distribution to consumers. Cyth designs and builds systems for testing PCBAs, so when they found themselves needing to test the PCBAs that their PCBA manufacturer builds per Cyth specifications, they decided to design and build a test system on their PCBACheck reference architecture. 
They needed a system that could perform comprehensive electrical measurements, validate individual component function, and sequence hundreds of tests rapidly while maintaining repeatability across production runs. Mechanical & Data Acquisition Requirements The mechanical design of the clamshell bed-of-nails included: Custom pin layouts to interface with specific circuit board contact points Pneumatic actuation for reliable probe contact Sophisticated test sequencing capabilities Without automated testing infrastructure, validating each board's functionality would create production bottlenecks and inconsistent quality control. Cyth Systems designed and built a bed-of-nails test fixture using NI CompactRIO data acquisition hardware and LabVIEW software for test sequencing and data acquisition. Mechanical Fixture Design Custom-fabricated bed-of-nails fixture with pneumatic hood actuation for probe engagement Spring-loaded electrical probes with unique board layout for contact with circuit board test points when hood lowers into place Pneumatic control push buttons mounted externally for operator system override Data Acquisition & Control Component Model Specifications Function Data Acquisition Backplane NI CompactRIO Multi-slot chassis, Real-Time Operating System (RTOS) I/O integration and connectivity for measurement and control Multifunction I/O NI-9381 8 AI, 4 AO, 4 DIO, 0-5V Analog and digital I/O Voltage Output NI-9264 (qty 2) 25 kS/s/ch, ±10V, 16 channels Simultaneous sampling for functional testing Analog Input NI-9219 100 S/s/ch, 4 channels Universal analog input for measurements Comprehensive Functional Testing   Test Sequencing & Programming LabVIEW software controls test execution and sequences hundreds of individual tests Test sequences run with TestStand software to validate board functionality Automatic generation of test reports to document all measurement data and pass/fail status   Test Process Operator places circuit board in fixture Operator 
actuates control button to pneumatically lower hood, creating contact between electrical probes of the fixture and the contact points on the PCBA Automated test sequences executed with measurement data acquired by the cRIO system Comprehensive report generated and stored upon test completion Hood releases pneumatically and operator removes tested board Left: A bed-of-nails test fixture is used to test Cyth’s CircaFlex circuit boards. Right: The NI CompactRIO enables the fixture’s data acquisition. Cyth’s PCBACheck reference architecture accelerated the development of a comprehensive functional circuit test (FCT) solution for CircaFlex hardware. The final solution accommodated varied capabilities throughout the technology stack: Custom bed-of-nails fixture tailored to board geometry and functionality NI CompactRIO and C Series modules for all control, measurement and sensor requirements Cyth PCBACheck reference architecture software built on: LabVIEW low-level programming of complex measurement, control and data acquisition TestStand sequencing of test programs, report generation and pass/fail reporting on HMI This approach enabled comprehensive electrical characterization, including operational power testing to validate device performance across varied functionality. Sustainable Test Design Cyth developed the complete fixture from proof of concept to finished product in 12 weeks, integrating mechanical design, pneumatic actuation, data acquisition hardware, and test software into a production-ready system. The system performs functional circuit testing in minutes, reducing test cycle time and overall cost compared to manual validation methods. As a robustly architected and flexible test solution, it is expected to ensure CircaFlex product quality and function well into the future.
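The sequencing pattern described above, in which TestStand orchestrates LabVIEW measurement steps against limits and aggregates a pass/fail verdict, can be sketched briefly. The step names and limit values below are hypothetical examples, not the actual CircaFlex test sequence:

```python
# Sketch of limit-based test sequencing with pass/fail aggregation,
# the pattern TestStand applies to LabVIEW-acquired measurements.
# Step names and limits are hypothetical, not the CircaFlex sequence.

def run_sequence(measure, steps):
    """Execute each test step and aggregate results into a report."""
    results = []
    for name, low, high in steps:
        value = measure(name)
        results.append({"step": name, "value": value,
                        "passed": low <= value <= high})
    return {"results": results,
            "passed": all(r["passed"] for r in results)}

# Simulated readings standing in for cRIO acquisitions
readings = {"rail_3v3": 3.31, "rail_5v": 4.97, "led_current_ma": 12.4}
report = run_sequence(readings.get, [
    ("rail_3v3", 3.2, 3.4),
    ("rail_5v", 4.9, 5.1),
    ("led_current_ma", 10.0, 15.0),
])
```

Separating the sequence definition (name plus limits) from the measurement code is what makes this architecture reusable: new board variants only change the step table, not the acquisition logic.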

  • PCBA Functional Test and Device Verification Test Scaled with Cyth PCBACheck

    Scalable automated PXI testing solution enables comprehensive functional verification of Bluetooth earpiece devices. Left: Clamshell bed-of-nails test fixture for enabling parallel test of PCBAs. Right: Bluetooth audio device Project Summary Consumer electronics manufacturer accelerated time to market and drove high product quality through an automated PCBA and functional verification test (FVT) solution. System Features & Components Custom bed-of-nails fixture simultaneously tests eight printed circuit board assemblies (PCBAs) Ultra-low power measurements to test consumption in device battery-saving modes RF shielded enclosure for ensuring high-quality Bluetooth transmission measurement PCBACheck reference architecture with test automation and sequencing built on NI LabVIEW and TestStand software Outcomes Comprehensive automated validation of circuit board functionality High-throughput testing capabilities with parallel testing Automated data collection with pass/fail results and automated reports for regulatory compliance purposes Technology at-a-glance Cyth PCBACheck Reference Design Hardware: PXI-1082 PXI Chassis PXI-4138 Source Measurement Unit (SMU) PXI-4081 Digital Multimeter (DMM) JTAG Programmer RF Enclosure with RF signal generator & analyzer Software: NI LabVIEW NI TestStand Bluetooth Audio Device Testing The electronic devices we use on a daily basis, like smartwatches, wireless keyboards, heart rate monitors and many others, are built on printed circuit board assemblies (PCBAs) that integrate capabilities like transmission, power management, and a seamless user experience into a package sometimes smaller than a postage stamp. Circuit board manufacturers are pressured to deliver reliable products while simultaneously maintaining high throughput. 
Every board that is produced must function perfectly before final assembly to ensure high product quality, bolster customer satisfaction and support strong brand reputation. A leading consumer electronics manufacturer was on the verge of scaling up production of their new device, a premium Bluetooth earpiece, but they were limited by their manual testing processes. Bringing their product to market was dependent on the implementation of circuit board functional verification test (FVT) pre-assembly and final production test of the final device post-assembly. Scaling Production Capacity through Automation The Bluetooth earpiece manufacturer needed guidance on an overall testing methodology and technology stack to meet their high-throughput production metrics. Bringing a high-end product to market required rigorous quality assurance to ensure seamless, reliable functionality across multiple complex functions: Wireless integration with cellphones Touch-sensitive controls Automatic, intelligent power management Each capability required unique, specialized testing and validation before final assembly into the consumer product. Throughput metrics required that they test eight boards simultaneously to meet production demands while ensuring comprehensive validation of every component. An automated solution was critical to avoid production bottlenecks and ensure comprehensive validation coverage across every component. Accurate measurements necessitated an enclosed RF test environment; the process included: Verifying firmware upload Measuring power consumption across states Confirming Bluetooth signal quality To get their product to market sooner, they couldn’t afford to take the time to learn best practices as they went; they decided to bring in a testing expert to help bring their PCBA and end-of-line testing needs to life. 
They decided to partner with Cyth Systems because of their custom bed-of-nails design experience and expertise in transforming complex test requirements into reliable, scalable end-of-line testers. FCT through FVT Cyth Systems delivered a turnkey solution with the capabilities to perform functional circuit testing (FCT) of eight audio device PCBAs and perform final verification test (FVT) once the PCBA was built into the final product. The solution, built on Cyth's PCBACheck reference architecture, integrated three core technology platform components into a single automated workflow: 1. NI PXI hardware for comprehensive I/O coverage 2. LabVIEW programming for custom signaling and control 3. TestStand for test sequencing and operator interface Cyth customized a clamshell bed-of-nails fixture to accommodate the board features and connectivity requirements. The primary hardware is detailed in Table 1. Component Model Key Specifications Function PXI Chassis PXI-1082 8-slot capacity Chassis for connectivity with measurement instruments and control hardware Source Measure Unit PXI-4138 4-quadrant, ±60V, ±3A, 100fA resolution Ultra-low power consumption measurement Digital Multimeter PXI-4081 7.5-digit precision, ±1000V, 1.8MS/s High-precision electrical characterization Programmer JTAG Firmware & audio file upload Automated device programming Test Fixture PCBACheck Bed-of-Nails 8-board capacity Simultaneous board contact and testing RF Testing RF Enclosure Signal generator & analyzer Bluetooth signal quality validation in isolated environment Embedded PC Embedded PC Operator interface and system coordination Table 1. Critical hardware for system processing and measurements The LabVIEW software provided seamless device communication and natively supported device drivers. TestStand orchestrated test sequences with intuitive guidance for test operators through the embedded PC’s HMI, shortening training times and ensuring consistent test execution. 
NI PXI chassis installed into rackmount unit The PCBA probing and testing was enabled through the integration of: Custom clamshell bed-of-nails fixture that can accommodate the testing of eight parallel units RF testing environment including signal generator and analyzer for Bluetooth connectivity and performance validation JTAG programmer for firmware and audio file upload Power consumption testing during simulated device “sleep” and “wake” modes Comprehensive device driver integration for seamless hardware communication Left: Bluetooth earpiece device printed circuit board assemblies (PCBAs) before FCT. Right: Probes located on the test fixture’s hood. Bottom: single earpiece device PCBA. The automated test process followed this sequence: Test operator loads eight PCBAs into the clamshell bed-of-nails fixture Clamshell and RF enclosure cover are shut by the operator to enable electrical contact between the fixture’s probes and the circuit boards and block out external electromagnetic interference (EMI) Firmware and audio files automatically loaded onto boards LabVIEW and NI TestStand software execute test sequences with pre-defined final verification test parameters Power consumption test performed, simulating the device's “asleep” vs. “awake” modes Bluetooth features and signal performance are validated using an RF signal generator & analyzer to simulate transmission between the earpiece and a cellphone Pass/fail results displayed for operator; next steps recommended if fail results displayed Data is acquired by the PXI hardware and automatically saved in pre-defined storage locations Test operator opens the enclosure cover, removes tested boards, and repeats. Designed for Scalability The PCBACheck reference architecture, based on LabVIEW, TestStand and NI PXI hardware, shortened test development timelines and delivered significant manufacturing and quality improvements for the Bluetooth earpiece manufacturer. 
Testing throughput increase: automation of previously manual processes and the capability to simultaneously test eight units greatly mitigated quality issues and dramatically increased production throughput Complete functional test coverage: Comprehensive validation of RF communication, power management, touch sensors, and firmware integration to ensure readiness for consumer use Zero-defect launch: Validation of every manufactured unit ensures reliable field performance and increased customer satisfaction Automated compliance documentation: Automated data capture, analysis, report generation and storage performed for every device under test (DUT) for regulatory compliance and traceability Reduced manufacturing costs: Reduction of manual testing processes streamlined operator workflows and improved test consistency and reliability Cyth’s proven PCBACheck reference design built on a critical technology stack enabled rapid production scaling. The turnkey, intuitive system helped operators ramp up quickly, with the LabVIEW and TestStand based architecture abstracting complex test processes into an improved workflow that is adaptable to future product variations and testing requirements. The automated test solution also enhanced the device manufacturer’s ability to perform reliable, repeatable RF testing to ensure support of advanced wireless protocols. Test automation is a key differentiator for the device manufacturer, as the products they now ship to their clients are reliable, high-quality and delivered on time. The scalable architecture of the automated testing solution will be adaptable to future product requirements and testing needs to sustain high product quality and reliability. Automated functional validation for every single device helps protect brand reputation and strengthen market position within the wireless audio market. 
Read More Articles about Circuit Board Test: The Benefits of Automated PCBA Testing in Modern Manufacturing | Cyth Systems Implementing Automated Testing in Your PCBA Manufacturing Process | Cyth Systems Components of Automated Testing System for PCBA | Cyth Systems

  • Power Plant Asset Health Data Visualization Enabled by NI cDAQ

Applied research institute enabled live generator and excitation system asset health updates to power plants with generating capacities up to 348 MW. *As Featured on ni.com Original Author: Nemanja Milojčić, Electrical Engineering Institute "NIKOLA TESLA" Edited by: Cyth Systems Synchronous power generators in operation at a thermal power plant Project Summary Applied research institute enabled live generator and excitation system asset health updates to power plants with generating capacities up to 348 MW. System Features & Components Millisecond-level transient capture for excitation systems and synchronous generators Custom signal conditioning equipment for isolating voltage and current measurements LabVIEW UI-based data visualization with live graphs, on-demand data recording, and external triggering options Multi-facility deployment capability supporting generators from 20 MW to 348 MW Outcomes Successful deployment across multiple power generation facilities monitoring generators ranging from 20 MW to 348 MW capacity Enabled detailed transient analysis and proactive maintenance without compromising plant safety or normal operations Scalable monitoring architecture with planned SCADA integration and remote Ethernet access, enabling the continuous monitoring capabilities required by modern power plants Technology at-a-glance NI cDAQ-9172 chassis NI 9203 analog input modules NI 9245 digital input modules NI LabVIEW software Industrial 15-inch touch panel PC Electrically isolated voltage and current sensors Custom signal conditioning equipment Understanding Transient Behavior Modern electrical grids depend on the seamless operation of power plants that supply electricity to millions of homes and businesses. When these facilities experience unexpected failures or transient events, it can impact entire regional power networks and potentially cause widespread blackouts that affect the essential infrastructure communities depend on. 
Disturbances to the excitation system, even at the microsecond level, can trigger catastrophic equipment failures and grid instabilities that could cost utility providers millions in lost revenue and emergency repairs. In modern power plants, the main sources of electrical energy are synchronous generators. This type of generator is an AC electrical machine that rotates at a constant speed synchronously with grid frequency. It uses an excitation system to control the strength of the magnetic field through its rotor windings, thereby regulating voltage output and reactive power. Understanding and monitoring the transient behavior of these systems is crucial for maintaining grid stability and preventing equipment failures. Asset Health Monitoring The Nikola Tesla Institute of Electrical Engineering, an applied research institute that designs and manufactures complete excitation systems for synchronous generators, took on the challenge of designing and implementing a solution to monitor the behavior of excitation and generator systems during normal operation and failures. They recognized the need for a data acquisition solution completely independent of the control signals of the excitation system to enable experimentation without risking the health of primary equipment or interfering with normal plant processes. Instantaneous Data Visualization The research engineers decided to build their monitoring solution with NI CompactDAQ (cDAQ) hardware and LabVIEW software. A few of the key system hardware components were: NI cDAQ-9172 chassis with NI 9203 analog input modules and NI 9245 digital input modules Industrial 15-inch touch panel PC for operator interface and manual system control Custom signal conditioning equipment with isolated voltage and current sensors to bring output signals into a range compatible with the NI cDAQ and maintain linearity across measurement ranges The NI CompactDAQ integrated into the control cabinet. 
The LabVIEW-based application provided comprehensive monitoring capabilities: Live visualization of the entire excitation system on a primary screen Live graphs of analog signals that could be initiated on-demand or by an external trigger Intuitive operator interface for quickly viewing data, adjusting triggering conditions, and monitoring digital inputs To iteratively improve on their monitoring system design and safely validate its performance, the research team implemented a phased deployment strategy that included multiple power plants: First deployment: Nikola Tesla A Thermal Power Plant in Obrenovac, Serbia; new excitation system for generator No. 4 (308 MW) Second deployment: Kostolac B Thermal Power Plant in Kostolac, Serbia; generator No. 1 (348 MW) Planned third deployment: Potpec Hydro Plant in Priboj; generator B (20 MW) Software front panel displayed on touch panel PC at Nikola Tesla A Thermal Power Plant in Obrenovac, Serbia The research team is currently working on implementing a few features to make the system even more robust and provide even more detailed information for power plant operation teams. 
Generator synchronizer monitoring with comprehensive data recording capabilities throughout synchronization processes Automatic reconnection systems for the power plant's 6 kV self-supply Complete capture of all changes in busbar systems Linking the cDAQ monitoring system to the power plant's supervisory control and data acquisition (SCADA) control equipment Making all measurement data accessible on local, secure networks Proactive Maintenance Enablement The research engineers from the Nikola Tesla Institute of Electrical Engineering were able to deliver multiple operational benefits to their power plant clients through their synchronous generator monitoring solution: Multi-facility deployment success: Capability to monitor generators ranging from 20 MW to 348 MW capacity High-bandwidth transient event capture: 10 kHz data acquisition bandwidth enabled monitoring of excitation system disturbances and generator response characteristics at the millisecond level Enhanced analysis capabilities: Live system health status, intuitive data visualization and flexible data recording capabilities significantly improved the analysis of excitation system performance Proactive maintenance: Early identification and resolution of potential issues before power generation is impacted As the researchers incorporate more capabilities into their systems, they look forward to offering their clients: Expanded monitoring scope: Live health status of generator synchronizer and 6 kV self-supply Future-ready architecture: SCADA integration and remote Ethernet access for comprehensive power plant monitoring Modular foundation: NI hardware and LabVIEW software architecture provides a scalable platform for continued facility expansion Throughout system development and continuous system improvements, the NI cDAQ paired with LabVIEW software has provided the research team with a scalable and flexible foundation to adapt their system to the current and future needs of their power 
plant clients.
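The on-demand and externally triggered recording described in the system's monitoring features can be sketched in pure Python as a rolling pre-trigger buffer. The `capture_transient` function, buffer sizes, and threshold below are illustrative assumptions, not the institute's LabVIEW implementation:

```python
from collections import deque

def capture_transient(samples, trigger_level, pre=5, post=10):
    """Return a window around the first sample exceeding trigger_level.

    Pure-Python sketch of externally triggered recording: a rolling
    pre-trigger buffer plus a fixed post-trigger capture. Buffer sizes
    and the threshold are illustrative, not the deployed settings.
    """
    pretrig = deque(maxlen=pre)          # rolling pre-trigger history
    for i, s in enumerate(samples):
        if abs(s) >= trigger_level:      # trigger condition met
            return list(pretrig) + list(samples[i:i + post])
        pretrig.append(s)
    return []                            # no trigger event in this record
```

Keeping a short pre-trigger history is what lets an operator see the signal behavior immediately before a disturbance, not just after it, which is essential for transient analysis.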

  • Production Capacity up 350% with Automated Dispensing

    Medical equipment manufacturer automated precise gel casting process using Cyth's multi-channel syringe pump system controlled by LabVIEW. Syringe pumps and motor controllers in custom fiberglass housing for gel electrophoresis cartridge casting automation Project Summary Medical equipment manufacturer automated precise gel casting process using Cyth's multi-channel syringe pump system controlled by LabVIEW. System Features & Components Multi-channel Kloehn syringe pumps for precise dispensing of five liquid solvents Stepper motor control with encoder feedback via Modbus TCP communication SICK proximity sensors for conveyor position monitoring and dispensing synchronization Custom LabVIEW user interface with preset and manual measurement capabilities Automated conveyor integration for continuous production workflow Custom aluminum frame and fiberglass housing fixtures for pump mounting Outcomes Production capacity increased by 350%, enabling processing of over 1,000 cartridges daily Quality consistency improved dramatically with measurement accuracy reaching ±0.3% Material waste reduced by 35% through improved processes Annual labor cost savings of $180,000 achieved through operational efficiencies Return on investment realized in just 14 months, demonstrating rapid payback period Technology at-a-glance NI CompactRIO real-time control system LabVIEW control architecture Kloehn syringe pumps and pump manifolds Custom multichannel pump housing fixtures Precision stepper motors controlled via Modbus TCP communication protocol SICK proximity sensors 10A time delay fuse Custom aluminum frame and fiberglass housing Enabling Scientific Research & Advancements Each time a genetic disease is diagnosed in a medical lab or a breakthrough discovery is made in DNA research, gel electrophoresis technology plays a central role in separating and analyzing the DNA fragments that make these scientific advancements possible. 
This common laboratory technique, which sorts DNA molecules by size using an electric current through a gel medium, is foundational to many medical discoveries, forensic investigations, and research advancements that directly impact public health and safety. The manufacture of gel electrophoresis cartridges, the consumable components essential for this process, has traditionally relied on labor-intensive manual processes, creating inefficiencies and bottlenecks for laboratories worldwide. A leading medical equipment manufacturer was intimately familiar with this issue; their manual gel casting operations required skilled technicians to precisely measure and mix five different liquid solvents for each cartridge, creating quality inconsistencies, production limitations, and escalating labor costs that threatened their ability to meet growing market demand. Many of their pharmaceutical customers were demanding higher volumes of high-quality cartridges to support their expanding research and diagnostic initiatives. The cartridge manufacturer realized that their manual processes were hindering their ability to grow and threatening existing client relationships. They needed an automated solution that could deliver precision, scalability, and the reliability that their customers depend upon. Scientist dispensing a sample into a gel electrophoresis cartridge Manual Processes Limit Growth The medical equipment manufacturer was faced with a critical operational bottleneck. Their gel electrophoresis cartridges were in high demand, but their manual casting processes greatly limited their growth potential. Their existing workflow required skilled technicians to manually measure and dispense five different liquid solvents in precise ratios to achieve their proprietary gel mixture. 
This time-consuming process introduced variability that could impact cartridge performance: Scalability constraints: Manual processes couldn't scale to meet growing demand, forcing the company to hire additional staff or decline orders Quality variability: Human error in measuring and mixing solvents led to inconsistent gel properties and customer complaints Material waste: Imprecise measurements resulted in significant waste, directly impacting profit margins Cost Reduction & Competitive Differentiation The medical equipment manufacturer needed to automate the casting of their electrophoresis cartridges to keep up with market demand and preserve profit margins. Market expansion requirements: Growing demand required that they triple production capacity within 18 months. Quality standardization: Customer application requirements demanded consistent gel properties with less than 2% variation. Cost reduction pressures: Rising labor and material costs squeezed profit margins. Competitive differentiation: Faster delivery times and superior quality consistency could provide a significant competitive advantage. To successfully automate their casting process, they required a system capable of: Handling five different liquid solvents with varying viscosities Dispensing the solvents in precise ratios with ±1% accuracy Integrating seamlessly with existing conveyor systems Automated Production Workflow The medical equipment manufacturer chose to work with Cyth Systems because of their expertise in precision liquid handling automation applications across pharmaceutical manufacturing, clinical diagnostics, and research laboratories. Cyth built an automated gel electrophoresis cartridge casting process using the NI CompactRIO platform, their proprietary LabVIEW motor control architecture, syringe pumps and precision motor controllers that seamlessly integrated with the customer’s existing conveyor system. 
Multi-Channel Precision Dispensing: Proprietary five-solvent gel mixture handled by five coordinated Kloehn syringe pumps that precisely dispensed liquid solvents of varying viscosities. Motor encoder feedback via Modbus TCP ensured ±0.5% dispensing accuracy across all channels. Intelligent Conveyor Integration: Cartridge positions continuously measured by SICK proximity sensors as they moved left to right on the production conveyor. Synchronized dispensing sequences were automatically triggered when cartridges reached their optimal position, enabling continuous production workflow without manual intervention. Flexible Control Architecture: Intuitive system operation enabled through the custom LabVIEW user interface (UI). Operators could select from preset formulations for different gel types or manually adjust measurements based on the end customer’s requirements. To ensure consistent gel casting with the correct properties, Cyth implemented algorithms to translate user inputs into precise motor control commands and pump cycle amounts. Automated Production Workflow: Once operators turn on the conveyor and select gel formulations through the UI, cartridges are released onto conveyor slots where proximity sensors trigger automatic dispensing as each cartridge passes the solvent pump stations, followed by automatic syringe refilling cycles that prepare the system for the next cartridge. Diagram of plastic cartridges being cast with gel on a conveyor. System Order of Operations: Operator manually turns on the conveyor. Operator uses the LabVIEW UI to either select a preset formula or manually enter the required measurements of each of the five solvents required to make the customer’s gel. The cartridges are systematically released by motors onto the conveyor holder slots. As the cartridges move left to right, their position is monitored using SICK proximity sensors. 
When a cartridge is in the correct position, the multi-channel pumps send a control command via the encoder to the stepper motors. The stepper motors dispense and cast the solvents through the syringe pumps into the cartridge. The stepper motor refills the syringe with liquid solvent for the next cartridge passing through the conveyor. Steps 5 through 8 repeat. Multiple sets of multi-channel Kloehn syringe pump assemblies for dispensing solvents into cartridges. Superior Quality Consistency The automated gel casting system delivered significant operational improvements to the medical device manufacturer. ROI Achieved in 14 months: Fully automated casting operations eliminated the previously manual dispensing process, ensuring consistent, hands-free production flow and delivering a 14-month payback period. Quality Control Improvement: Precision control over proprietary gel mixture formulations improved measurement accuracy to ±0.3%, virtually eliminating quality-related customer complaints, ensuring consistent gel quality across all cartridges and reducing material waste due to over-dispensing. Scalability and Throughput: Automated fill cycles and continuous conveyor operation minimized downtime between cartridge casting cycles and enabled processing of 1,000 cartridges daily, increasing production capacity by 350%. Reduced Labor Requirements: Automation of complex mixing and dispensing processes freed up skilled technicians to focus on higher-value activities, resulting in $180,000 of labor cost savings annually. The impact of the automated gel casting system extended beyond operational improvements, fundamentally strengthening the medical equipment manufacturer’s competitive position. Their rapid delivery times and superior quality consistency strengthened existing client relationships and attracted new customers, like major pharmaceutical companies. 
Furthermore, the modular design of the system built on NI CompactRIO hardware positioned the manufacturer for sustainable future growth, with plans currently underway to expand their product line and leverage proven automation frameworks throughout their broader facility.
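The translation of operator-entered solvent measurements into pump commands, described in the workflow above, can be sketched as follows. The 0.25 µL/step pump resolution and the function names are assumptions for illustration; the actual LabVIEW algorithms and Kloehn pump parameters are proprietary:

```python
def volumes_to_steps(volumes_ul, ul_per_step=0.25):
    """Convert requested solvent volumes (µL) into whole pump steps.

    Illustration only: the 0.25 µL/step resolution is an assumption,
    and the real LabVIEW-to-pump translation is proprietary.
    """
    plan = {}
    for solvent, vol in volumes_ul.items():
        steps = round(vol / ul_per_step)   # nearest whole pump step
        actual = steps * ul_per_step       # volume actually dispensed
        error_pct = abs(actual - vol) / vol * 100.0
        plan[solvent] = {"steps": steps, "actual_ul": actual,
                         "error_pct": error_pct}
    return plan

def within_spec(plan, tolerance_pct=0.5):
    # The system's stated target was ±0.5% dispensing accuracy per channel.
    return all(ch["error_pct"] <= tolerance_pct for ch in plan.values())
```

The quantization check matters because a requested volume that is not a multiple of the pump's step resolution can silently exceed the ±0.5% accuracy budget; flagging it at formulation time is cheaper than scrapping a cast cartridge.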

  • Custom EMF Measurement Solution Doubles End-of-Line Test Throughput

    Medical manufacturer automated EMF device testing with dual-station NI PXI system, reducing test cycles from hours to minutes while doubling throughput. Dual-station test system built on NI PXI platform Project Summary Medical manufacturer automated EMF device testing with dual-station NI PXI system, reducing test cycles from hours to minutes while doubling throughput. System Features & Components Dual-station automation enabling single operator to manage two test stations simultaneously High-speed PXI analog I/O integrated with fluxgate magnetometers for precise electromagnetic field measurement Automated firmware uploading and multi-level power testing to simulate battery conditions LabVIEW user interface with automated pass/fail determination and database integration Seamless integration with existing Helmholtz coil test fixtures Barcode scanning and automated data logging for quality control traceability Outcomes Reduced testing time from hours to five minutes per device with automated measurement protocols Doubled testing throughput through dual-station operation managed by single operator Turnkey solution delivered within 8-week timeline and project budget Technology at-a-glance NI PXIe-6341 Multifunction DAQ NI PXI-4110 Programmable DC Power Supply NI PXI-2564 16 SPST Relay Module NI LabVIEW  software architecture LabVIEW Database Connectivity Toolkit MEDA Fluxgate Magnetometers Barcode scanners Dell Embedded Industrial PCs Ruggedized, custom mobile cart design Electromagnetic Compatibility Electromagnetic field (EMF) medical devices are regularly used to support patient well-being in applications like pain management, nerve stimulation, and muscle rehabilitation. These EMF medical devices rely on precisely controlled magnetic pulses to deliver therapeutic benefits, and therefore must meet rigorous safety and performance standards to ensure therapy efficacy and patient safety. 
The FDA requires EMF medical devices to comply with Electromagnetic Compatibility (EMC) standards to ensure these devices are compatible with their electromagnetic environment. Compatibility entails: Immunity: medical device must not malfunction when exposed to electromagnetic interference (EMI) Emissions: medical device’s own EMF emissions must not interfere with other medical devices or electronics in its environment Due to the complexity of EMF medical devices and the high stakes associated with patient safety and capital expenditure investments in medical equipment, the manufacturing and quality control of EMF devices has traditionally required labor-intensive, manual testing methodologies. End-of-line Test Bottlenecks A leading medical device manufacturer faced significant production and end-of-line test bottlenecks because of this paradigm. Production was constrained by slow, manual testing procedures which prevented them from scaling up their operations to keep up with growing market demand. When regulatory compliance requirements and quality control standards demanded faster, more consistent testing processes, the manufacturer realized their manual approach would no longer suffice. Our client had an industrial test fixture that incorporated a Helmholtz coil to produce a region of uniform magnetic field. When the magnetic field within the Helmholtz coil changes, a current is induced in nearby conductors by the changing magnetic flux (Faraday's law of induction). This principle was incorporated into the client's design of their industrial test fixture to test their product—an electromagnetic medical device. 
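The induction principle behind the Helmholtz-coil fixture follows Faraday's law, emf = -N * dPhi/dt: a changing magnetic flux through a coil of N turns induces a proportional EMF, and hence a measurable current. A toy numerical sketch (all values arbitrary, not the client's fixture):

```python
def induced_emf(turns, flux_samples, dt):
    """Approximate the EMF induced in a sense coil: emf = -N * dPhi/dt.

    Toy illustration of why a changing Helmholtz-coil field drives a
    measurable current; the numbers are arbitrary, not the fixture's.
    flux_samples: magnetic flux through the coil (Wb), sampled every dt seconds.
    """
    return [-turns * (b - a) / dt
            for a, b in zip(flux_samples, flux_samples[1:])]
```

Note that a constant flux induces nothing: only the pulsed, changing field of the device under test produces a signal, which is why the fixture measures pulsation rather than a static field.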
The greatest limitations of their existing test workflow were: Manual Testing Inefficiencies:  Their existing electromagnetic field measurement process was labor-intensive and time-consuming, creating quality control bottlenecks Single-Station Limitations:  Their current testing setup created workflow bottlenecks, with operators managing only one test station at a time To produce a high-quality, EMC-compliant product, keep up with customer demand and protect their market position, they required: Increased Testing Throughput:  Growing market demand required faster, more automated testing procedures to meet delivery timelines High Measurement Accuracy:  Precise electromagnetic field measurements were essential for regulatory compliance and ensuring therapeutic device effectiveness The company needed an automated solution to measure and quantify the electromagnetic waves their medical devices emitted while simultaneously increasing testing throughput and maintaining high accuracy standards. Automated, Parallel Testing This medical device manufacturer approached Cyth Systems for help mitigating the testing bottlenecks that threatened their growth and capture of market share. Cyth designed and built a turnkey automated solution using NI PXI hardware and LabVIEW software capable of: Powering and pulsing the devices-under-test (DUTs) Interfacing with the industrial test fixture to induce and read current measurements Automatically acquiring and analyzing all measurements Parallel Test System Architecture: The system's architecture enabled a single test operator to simultaneously interface with and run two industrial test stations, dramatically increasing testing efficiency and throughput. A LabVIEW user interface and software architecture provided operators with simple controls that logged acquired data directly to the client's database via Ethernet. 
Custom EMF Measurement Solution: Our engineering team designed the system to measure and quantify the electromagnetic waves the client's medical device emitted using fluxgate magnetometers that measure the direction, strength, and relative change of magnetic fields. We acquired these measurements using high-speed PXI analog inputs for maximum precision and speed. At all times, the power supplied to the device was precisely controlled to measure the power consumption of the device under test (DUT). The analog output lines of the PXIe-6341 were used to generate waveforms to enable the measurement of changes in the pulsation of the device's electromagnetic field (EMF). HMI for test operators using dual-station test cart System Order of Operations: An operator loads the EMF medical device into the client's industrial test fixture, scans the barcode, and connects the required wiring The operator begins the test on the LabVIEW user interface (UI) menu Our system uploads firmware to the client's device and performs a low-level power on before testing the client's device at different power levels (to simulate battery power conditions) The device is pulsed, and our system automatically reads and acquires the electromagnetic signals using the NI PXI high-speed analog I/O, with readings measured using the fluxgate magnetometer The percentage error between control and measured magnetic field readings is calculated by LabVIEW algorithms and deems the device a pass or reject sample on the user interface The measured data is automatically uploaded by our system into the client's database via Ethernet Once the test is complete, the UI prompts the operator to load the next sample and repeat the process Sustainable by Design The automated parallel testing solution delivered transformative results that exceeded the medical device manufacturer's initial expectations for operational efficiency improvements, product quality improvements, and streamlined regulatory compliance: Operational 
Efficiency: Five-minute test cycles: Test time reduced from hours to approximately five minutes per device. Dual-station capability: Single test operator enabled to manage two test stations simultaneously, parallelizing testing and doubling testing throughput. Automated data management: Manual data entry errors eliminated through automated datalogging. Streamlined compliance: Automated datalogging to servers ensured accurate, consistent regulatory compliance and device calibration documentation. Quality and Compliance Benefits: Enhanced precision: Fluxgate magnetometer measurements ensured measurement accuracy levels compliant with FDA regulations. Improved repeatability: Automated testing protocols eliminated human variability and increased product quality consistency. Real-time pass/fail determination: Immediate pass/fail results enabled rapid decision-making for operators and reduced bottlenecks impacting test throughput. Technical Achievement: 8-week delivery: System design, build and testing completed within the client's aggressive, two-month timeline requirement. Budget compliance: Solution delivered within the client's fixed budget. Seamless integration: Final solution interfaced flawlessly with the medical device manufacturer's existing industrial test fixtures and database infrastructure. Market Position Impact The automated testing solution's impact extended beyond operational improvements by strengthening the medical device manufacturer's competitive position in the market. The high test throughput achievement enabled the company to meet growing market demand without sacrificing the quality required for medical device regulatory compliance. The precision measurement and automated documentation capabilities of the automated testing solution became key differentiators for the medical device manufacturer, as they could now guarantee consistent, repeatable testing results that exceeded industry standards. 
The automation framework was designed with adaptability to future customer and product requirements in mind, leveraging the I/O modularity of NI PXI hardware and the flexibility of LabVIEW software to position the medical device manufacturer for sustainable future growth.
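The pass/fail determination in the test sequence above (percentage error between the control and measured magnetic field readings) reduces to a simple comparison. A sketch, with an assumed 2% acceptance threshold; the real criterion belongs to the client's specification:

```python
def classify_device(control_uT, measured_uT, max_error_pct=2.0):
    """Pass/fail from the percentage error between the expected (control)
    and measured magnetic field readings.

    The 2.0% acceptance threshold is assumed for illustration; the real
    criterion belongs to the client's device specification.
    """
    error_pct = abs(measured_uT - control_uT) / abs(control_uT) * 100.0
    verdict = "PASS" if error_pct <= max_error_pct else "REJECT"
    return verdict, error_pct
```

Returning the error alongside the verdict mirrors the system's automated documentation: the numeric value is what gets logged to the database for traceability, not just the binary result.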

  • Hardware-Timed Automation Accelerates Gas Meter Testing

Industrial gas meter manufacturer improved product quality and validated accuracy by incorporating NI CompactRIO into end-of-line piston prover testing. Cyth Engineer with high-precision calibration & prover system Project Summary Industrial gas meter manufacturer automated their end-of-line piston prover testing with an NI CompactRIO solution that improved quality control processes and validated product accuracy to meet the United States’ units and measures standards. System Features & Components cRIO instrumentation incorporated 80+ sensor inputs/outputs for handling comprehensive flow, valve control, temperature, pressure, and humidity measurements Custom closed-loop PID algorithm for precise piston control, integrated safety control loops, and pressure release valves for safe operation at high pressures Positioning system achieved accurate measurements, with nanometer-level precision, using linear encoder and laser detection technologies calibrated with standard gauge blocks Outcomes Achieved the nanometer-levels of precision and accuracy required for fiscal gas meter calibration and validation by the National Institute of Standards and Technology (NIST) and the Office of Weights and Measures (OWM) Enabled continuous, automated testing through implementation of the cRIO system that simultaneously manages both the control of actuating hardware and the measurement of necessary sensors Successfully deployed system within project timeline constraints despite equipment access limitations making remote development necessary Technology at-a-glance NI cRIO-9074 NI C Series Modules NI 9425 industrial digital input module NI 9477 industrial digital output module NI 9208 current input module (flow, temperature, pressure, and humidity sensors) NI 9217 analog input module NI 9401 5V TTL digital input and output module NI 9263 analog output module NI LabVIEW Real-Time & LabVIEW FPGA Modbus and Ethernet/IP industrial communication protocols Custom PID control algorithms Safety 
and Compliance in Fiscal Custody Transfer In the natural gas industry, accuracy is not just important; it’s legally mandated. Industrial natural gas meters are fiscal custody transfer devices, the critical measurement point where customers are charged for their energy consumption, making measurement precision a requirement for protecting the interests of consumers and providers. These measurement devices must meet stringent accuracy standards set by the Office of Weights and Measures (OWM) at the National Institute of Standards and Technology (NIST). The accuracy of these meters directly impacts: Consumer trust and fairness  – ensuring customers pay only for what they actually consume Regulatory compliance  – meeting strict standards set by government agencies Economic stability  – supporting fair trade within the multi-billion dollar energy market Safety and reliability  – maintaining proper pressure and flow monitoring in gas distribution systems Every industrial gas meter must be rigorously tested and calibrated before deployment and must also be re-certified annually to maintain accuracy. This calibration process relies on a specialized piece of equipment: a natural gas prover. Provers are reference standards capable of generating known, precise volumes of gas to enable the verification of a meter’s readings. Modernizing Legacy Equipment A leading manufacturer of industrial natural gas meters was at an inflection point: the age of their existing piston prover design threatened their competitive position. As an exclusively analog system, it could no longer meet the demands of modern infrastructure expansion and industrial customer needs. They were experiencing several pain points that were becoming increasingly problematic: Quality Control Bottlenecks:  Slow, inconsistent manual testing processes created production delays and strained customer relationships. 
Each gas meter required extensive manual intervention during testing, making it difficult to scale production to meet growing customer demand. Accuracy Concerns: With analog controls, achieving repeatable, precise measurements was challenging. The lack of digital precision meant potential variations in test results, which could lead to meters being incorrectly certified or requiring costly retesting. Compliance Pressure:  US Units and Measures boards maintain strict accuracy standards for fiscal measurement devices. Any uncertainty in their calibration process could result in regulatory issues, customer complaints, or even legal liability if meters proved inaccurate in the field. Competitive Disadvantage:  Other manufacturers were modernizing their testing capabilities, offering faster delivery times and more rigorous quality assurance. The company needed to modernize or risk losing market share to competitors with more advanced testing systems. Fully-Automated Testing The manufacturer's primary goal was transforming their analog piston prover into a state-of-the-art, automated testing system capable of handling the most demanding accuracy requirements while improving production throughput and quality consistency. They needed a fully automated gas meter testing solution that could achieve nanometer-level precision in control and measurement, handling over 80 different sensor inputs and outputs, all while maintaining the safety standards required for high-pressure gas testing operations. The greatest engineering challenges for their team were: Dual-System Architecture:  The modernized prover needed two separate but coordinated subsystems: one to control the mechanical operations and another for automated testing and measurement tracking. These systems had to communicate seamlessly to coordinate the entire test process. 
Precision Requirements:  The system needed to meet the exacting accuracy standards required by regulatory bodies, with the ability to calculate air volume versus meter readings within tolerances that would satisfy US Units and Measures board requirements. Multi-Rate Testing:  The prover had to test meters at three different flow rates, determined by precise piston head acceleration and speed control, requiring sophisticated motion control capabilities. Understanding the Physical System The main chamber of the piston prover was a cylindrical drum body measuring 6 feet in diameter and 20 feet in height. A piston pushed air out of the drum body and into the meter via an outflow valve and pipe system. The piston’s movement had to be precisely controlled to calculate the exact amount of air pushed out of the body. The calibration and certification process for a natural gas meter compares the calculated air volume pushed through the system with the measurement of the gas meter to determine its accuracy. Sufficiently accurate meters are rated as ready for market deployment; inaccurate meters would undergo further calibration to ensure they would be deployment-ready. The prover system’s repeatability and accuracy were critical, as each validated meter required annual re-certification to maintain its functional accuracy certification. Complex Remote Development The gas meter manufacturer chose to enlist the help of Cyth Systems to tackle their technical challenges because of our expertise in developing precision control systems and our capability to deliver complex automation solutions remotely. Cyth’s engineering team selected the NI CompactRIO (cRIO) platform to fulfill the system’s control and automation requirements. 
A couple of key features of the cRIO influenced this decision: Real-time performance: The programmability of the NI Linux Real-Time Operating System (RTOS) and FPGA, using LabVIEW Real-Time and LabVIEW FPGA, enabled the implementation of hardware-timed programming loops to run at 25-nanosecond intervals. These tight timing tolerances were necessary for meeting the system’s safety relay requirements. Comprehensive I/O coverage: The system had a high channel count and a large mix of I/O and sensor types including flow, valve control, temperature, pressure, and humidity sensor readings. The compatibility of the cRIO platform with NI’s C Series modules enabled the rapid and reliable integration of all the required I/O, despite the high channel count and mix of sensor types. Automated measurement system handling 80+ sensor inputs and outputs Cyth designed a precision control system with advanced closed-loop PID algorithms programmed in LabVIEW Real-Time and FPGA modules. The system continuously monitored piston speed and pressure feedback to enable precise acceleration and deceleration control across three different flow rates, ensuring reliable operation over thousands of measurement cycles. Advanced motion control: Custom closed-loop PID algorithms managed piston acceleration and deceleration with continuous speed and pressure feedback adjustments throughout the testing process. Nanometer-precision positioning:  Linear encoders combined with laser detection systems and metrology standard gauge blocks ensured absolute accuracy in piston positioning for fiscal meter calibration. Flexible multi-rate testing:  System operated reliably across the full range of loads and flow rates required for comprehensive meter validation. The piston prover’s two separate systems: control and automated test. Implementing Comprehensive Safety Systems Considering that pressure inside the prover could reach over 200 PSI, operational safety was critical for this system. 
Cyth's development team implemented two distinct safety loops that continuously monitored all critical parameters, including pressure levels, piston position, and system status to provide multiple, redundant protection mechanisms. Automatic Safety Override:  An independent, dedicated safety control loop was implemented to instantaneously override the system if any unsafe conditions were detected. Emergency Stop:  A comprehensive emergency stop sequence, including a pressure release valve and a hard stop for the motor driving the piston, was incorporated to enable operators to immediately halt testing in case of emergency. Piston prover control system. Overcoming Development Obstacles The project's most significant challenge was developing and testing the prover's control systems remotely, since the massive equipment located in Texas was too large and cost-prohibitive to ship to Cyth's San Diego facility. Cyth overcame this challenge by developing hardware-in-the-loop (HIL) simulation and conducting rigorous factory testing to enable rapid deployment within the customer's two-day integration window. Hardware-in-the-loop simulation:  HIL model developed on NI cRIO-9074 enabled comprehensive testing of piston controls and sensor validation through physical transducer actuation without access to actual equipment. Rigorous pre-deployment testing:  System pre-assembly and Factory Acceptance Testing (FAT) performed in San Diego to ensure all components were verified and ready for immediate integration at the gas meter manufacturer's facility. Rapid site deployment:  Full system integration and Site Acceptance Testing (SAT) performed within the customer's maximum two-day downtime window. The piston prover control systems during installation. Operational Excellence Through Test Modernization The comprehensive two-part control and automated testing system upgrade enabled the industrial gas meter manufacturer to successfully modernize their end-of-line testing capabilities. 
Their quality control processes were dramatically improved while expanding their testing capabilities and streamlining regulatory compliance processes. Precision Achievement:  The deployed system achieved the nanometer-level precision required by the piston prover's air delivery system, meeting all US Units and Measures board standards for fiscal custody transfer applications. Operational Excellence:  Since deployment, the system has been running consistently and reliably, helping the customer validate their industrial natural gas meters for both consumer and provider applications. The automated nature of the system has improved testing throughput while maintaining the high accuracy standards required for regulatory compliance. Platform Advantages:  The NI CompactRIO hardware platform met all high-speed communication requirements while managing over 80 sensor inputs and outputs through LabVIEW programming. The platform's modularity and programming flexibility were critical to the system's development success and ongoing maintainability.
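The closed-loop PID speed control described in this case study can be sketched in miniature. The snippet below is a Python illustration only: the gains, the clamp limits, and the first-order piston model are all hypothetical, and the delivered system implemented its loops in LabVIEW Real-Time and LabVIEW FPGA.

```python
# Minimal discrete PID speed loop (hypothetical gains) driving a toy
# first-order piston model. Illustration only -- not the delivered code.

class PID:
    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))  # clamp the drive command


def simulate(target_speed, steps=5000, dt=0.001):
    """Toy first-order plant: piston speed lags the drive command."""
    pid = PID(kp=2.0, ki=5.0, kd=0.01, out_min=0.0, out_max=10.0)
    speed = 0.0
    tau = 0.05  # hypothetical plant time constant, seconds
    for _ in range(steps):
        drive = pid.update(target_speed, speed, dt)
        speed += (drive - speed) * dt / tau
    return speed
```

In the real system the drive output would command the piston motor and the speed feedback would come from the linear encoder; the pressure feedback mentioned above would typically feed a second loop or a feedforward term.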

  • Micron-Scale Inspection via Precision Vision & Motion

    Medical validation company achieved micron-scale visual quality inspection using Cyth's vision and motion solution built on NI sbRIO and Cyth CircaFlex. Project Summary Medical validation company enhanced quality assurance capabilities with a micrometer-scale syringe lubrication inspection system built by Cyth using the NI sbRIO-9651 and Cyth CircaFlex-315. System Features & Components Micrometer-scale positioning accuracy for consistent syringe rotation and imaging alignment Automated inspection workflow integration with existing production systems Outcomes Achieved micrometer-scale precision in lubrication coverage measurement, eliminating variability in validation accuracy. Dramatically reduced inspection time per syringe while improving consistency across production runs Enabled expanded pharmaceutical contracts through enhanced quality assurance capabilities Technology at-a-glance NI sbRIO-9651 System on Module Cyth CircaFlex-315 Applied Motion stepper motors with encoders and integrated drives 20 Megapixel CMOS global shutter camera Custom LED array with RC Series LED strobe controller Telecentric illuminator NI LabVIEW image processing algorithms Emergency Drug Delivery Life-saving medical devices like EpiPens® rely on a silicone lubricant coating in the inner bore of the syringe to ensure precise and reliable delivery of emergency medications during allergic reactions such as anaphylaxis. Even microscopic variations in syringe lubrication coating can affect delivery accuracy and patient outcomes. The coverage, consistency, and distribution profiles of the silicone-based lubricant coating inside every auto-injector must meet exacting standards and comply with federal regulations. Manual Process, Aging Technology A medical validation company specializing in auto-injector quality control faced critical limitations with their existing syringe lubrication inspection system. 
Manual processes and limited inspection tools presented risks to existing client relationships due to lack of scalability and aging technology. Inconsistent Performance:  Lighting intensity decreased over time, causing measurement variations across production runs. Limited Capability:  The system couldn't capture high-resolution images necessary for detecting microscopic lubrication defects. Scalability Constraints:  Manual inspection processes couldn't meet increasing production volumes and faster client turnaround requirements. Expanding Throughput Capacity To grow their business while maintaining and steadily expanding their customer base, the validation company required a higher-throughput, higher-performance system that would also help simplify their regulatory compliance activities. Market Expansion:  Advanced and scalable validation capabilities would enable the pursuit of larger pharmaceutical contracts previously beyond their capacity. Regulatory Compliance:  FDA and international standards demand rigorous inspection processes with documented precision and repeatability to ensure patient safety. They required a technical solution capable of: Precise Inspection:  High-resolution 2D imaging with consistent lighting, micrometer-levels of precision, and seamless integration with existing workflows. Automated Measurements and Datalogging: Overall reduction in manual processes to mitigate human error, optimize repeatability, and increase the mean validation rate. Synchronized Motion & Image Acquisition Cyth built the validation solution using the NI sbRIO-9651 and Cyth CircaFlex-315 control hardware with Applied Motion stepper motors, delivering micrometer-scale positioning accuracy to ensure consistent, high-quality imaging. For the validation company, the technical differentiation of the inspection solution was critical to attaining their throughput and quality goals. 
They required a highly synchronized and precise validation solution to be able to pursue larger customers. Synchronized Precision:  Micrometer-scale accuracy and consistent imaging achieved through tight synchronization of stepper motors, dynamic lighting sources, and camera via the real-time capabilities of the NI sbRIO-9651. Cyth's CircaFlex platform enabled direct connectivity to stepper motor control I/O to minimize system jitter and ensure tight coordination of all system components. LabVIEW Real-Time software, running on the NI Linux Real-Time Operating System (RTOS), enabled precise process synchronization between motor rotation, auto-injector illumination, and image acquisition. Cyth's CircaFlex platform provided all control signals directly to the motors' drives, and the stepper motors were programmed to make predefined movements with accuracy on the micrometer scale. Advanced Lighting Control:  A custom LED strobing algorithm eliminated lighting degradation issues while extending LED lifespan through optimized pulse-width modulation control, controlled duty cycles, and thermal management. CircaFlex provided the built-in signal conditioning and hardwired connection to FPGA I/O on the sbRIO-9651 necessary to coordinate LED strobing sequences with camera acquisition cycles at millisecond-level precision, ensuring consistent illumination profiles and eliminating exposure timing variability. A programmable logic controller (PLC), integrated with CircaFlex, was configured to strobe the LEDs in tandem with camera capture sequences to ensure well-illuminated, detailed images. 
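The strobe-to-exposure coordination described above can be illustrated with a simple timing sketch. This Python snippet is illustrative only: the frame, exposure, and pulse durations are hypothetical, and the real coordination ran in hardware on the sbRIO FPGA.

```python
# Schedule each LED strobe pulse centered inside the camera's exposure
# window so every frame sees identical illumination. All timing values
# in this sketch are hypothetical.

def strobe_schedule(frame_period_s, exposure_s, pulse_s, n_frames):
    """Return one (pulse_start, pulse_end) pair per frame, in seconds.

    Assumes exposure starts at the beginning of each frame period.
    """
    if pulse_s > exposure_s:
        raise ValueError("strobe pulse must fit inside the exposure window")
    schedule = []
    for i in range(n_frames):
        exposure_start = i * frame_period_s
        pulse_start = exposure_start + (exposure_s - pulse_s) / 2  # center the pulse
        schedule.append((pulse_start, pulse_start + pulse_s))
    return schedule
```

Keeping the pulse much shorter than the frame period (a low duty cycle) is what limits thermal load on the LEDs and extends their lifespan, while centering each pulse in the exposure window removes exposure-timing variability.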
Automated Image Processing:  Real-time image processing algorithms stitched multiple captures together to create a 2D image that comprehensively represents the profile of the lubrication coating of the auto-injector. 20-Megapixel CMOS global shutter camera with telecentric illuminator for detailed interior imaging Hardware, camera, stepper motor, and programmable controls The final solution was contained in a compact, custom enclosure. The overall syringe lubrication inspection process included: Syringe oriented vertically then rotated at a predefined rate using a stepper motor with a built-in drive Illumination of syringe using a strobing array of LEDs Reflected light captured with high-resolution camera to create 2D images of interior lubricant coating High-definition composite images of interior syringe coating stitched together automatically Images analyzed automatically with pass/fail results and data for regulatory compliance needs The resulting high-definition images enabled the validation company to accurately quantify the quality of syringe lubricant coating. The automated and precise nature of the vision inspection system streamlined regulatory compliance and augmented throughput capacity. Full Compliance, Market Expansion The upgraded system positioned the medical validation company as a leader in auto-injector lubrication inspection technology. The system exceeded their performance expectations and helped accelerate their growth.   Precision Measurement:  Delivered micrometer-scale lubrication coating measurements, eliminating variability that previously compromised validation results. Improved Compliance:  Facilitated quantification of images, data storage and reporting to streamline regulatory compliance activities. Enhanced Throughput Capacity:  Process automation dramatically reduced average cycle time per syringe, improving consistency across production runs. 
Business Growth:  Pharmaceutical clients reported significantly improved confidence in validation results, leading to expanded contract opportunities and full regulatory compliance.
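The composite-imaging approach in this case study, strips sampled from successive frames of the rotating syringe and laid side by side into one unwrapped 2D view, can be sketched as follows. The frame format here is a deliberate simplification (nested lists standing in for real image buffers), and the central-strip sampling scheme is an assumption for illustration.

```python
# Compose an "unwrapped" 2D view of a rotating part: sample the central
# vertical strip of each frame and lay the strips side by side. Frames
# are nested lists here as a stand-in for real image buffers.

def unwrap(frames, strip_width=1):
    """frames: list of images, each a list of pixel rows of equal height."""
    height = len(frames[0])
    composite = [[] for _ in range(height)]
    for frame in frames:
        width = len(frame[0])
        start = (width - strip_width) // 2  # central strip of the frame
        for r in range(height):
            composite[r].extend(frame[r][start:start + strip_width])
    return composite
```

Sampling only the center of each frame is what a telecentric setup makes attractive: distortion is lowest there, so the strips join into a consistent map of the coating around the full circumference.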

  • Multi-PCBA Test Solution Delivers Broad Functional Test Coverage for FDA Compliance

    A medical device manufacturer required a complete functional test solution consolidating their product verification test phase and quality control process for their FDA-compliant product. Project Summary Cyth developed a unified test platform using PXI, LabVIEW, and TestStand spanning test plans, fixturing, and a common automation architecture for six PCBAs used in patient monitoring devices. System Features & Components PXI instrumentation mapped to multi-DUT test coverage Custom fixturing for 6 constituent PCBAs, including bed-of-nails connectivity to test points RF environmental enclosure for wireless test Measurement software (LabVIEW) and test automation executive (TestStand) with common architecture for efficient reuse across devices-under-test (DUTs) Outcomes Minimized capital costs through instrumentation configurations shared across devices-under-test Reduced engineering development and maintenance effort with common measurement code and test automation software Successfully achieved schedule milestones for factory acceptance testing (FAT) Technology at-a-glance PXIe Chassis: PXIe-1078 Data acquisition: PXI-6229 Sensors and signal conditioning: pressure transducer, digital signal attenuator, programmable DC loads Power: PXIe-4112 and PXIe-4113 programmable power supplies Switching: PXI-2564 SPST relay module and PXI-2534 matrix switch Serial communication: PXI-8432 RS232 interface module Environmental: RF Faraday enclosure Test automation software: LabVIEW and TestStand Enabling Responsive Patient Care Vital signs monitoring equipment is the quiet workhorse of patient care. These systems take critical physiological measurements, such as temperature, respiratory rate, blood oxygenation, and blood pressure, to provide clinicians with quantifiable data to assess patient health status and automatically detect changes that may indicate emergencies requiring active intervention. 
Like many medical devices and hospital equipment, modern vital signs monitors have evolved from simple measurement devices to sophisticated digital systems capable of continuous monitoring, wireless data transmission, and integration with electronic health records (EHRs), enabling more responsive patient care and improved clinical outcomes. While these products have become more intelligent and automated, the need for rigorous testing across design and production phases continues to grow. Enhanced Visibility & Monitoring The product incorporates four vital measurements across two distinct measurement subsystems: Cuff attached to the patient’s arm – measures systolic and diastolic blood pressure Pulse oximeter clipped to the patient’s finger – measures oxygen levels, heart rate, and temperature These subsystems are physically connected back to a processing and display unit. The system is also capable of transmitting patient readings wirelessly and securely to local devices, such as a nurse’s tablet, for enhanced visibility and monitoring. Patient's vital signs monitored automatically Comprehensive Test Coverage The engineering team responsible for testing the product faced several core challenges: Finding a test solution that would provide comprehensive test coverage for six individual PCBAs and final assembly verification RF testing for wireless connectivity in a controlled, interference-free environment Reproducible and reliable test data for FDA approval and ongoing compliance requirements They were looking for outside help from an engineering firm experienced with PCBA and final assembly functional test, ranging from fixture design, instrumentation selection and connectivity, to test software development. Designing a Unified Test Platform After engaging with the client on the product's test requirements, budget, and timeline, our engineering team started by mapping the intended test coverage to instrumentation capable of performing the measurements. 
We then recognized an opportunity to optimize footprint and overall hardware utilization by consolidating the various DUTs into three bed-of-nails fixtures and two test enclosures. This allowed us to refine the hardware BOM, lowering overall cost spread across the DUTs. Instrumentation Selection: PXIe Chassis: PXIe-1078 Data acquisition: PXI-6229 Power: PXIe-4112 and PXIe-4113 programmable power supplies Switching: PXI-2564 SPST relay module and PXI-2534 matrix switch Serial communication: PXI-8432 RS232 interface module NI PXI instrumentation including: PXIe-1078 chassis, programmable power supplies, serial communications interface and switching After finalizing the instrumentation BOM, our team planned out the signal routing diagram, including strategic switching, from the PXI modules' channels all the way to the DUTs' test points, as accessed through cabling and the fixtures' pogo pins. Throughout this process, we leveraged our PCBACheck reference design for advanced starting points on fixture CAD, layout, and sub-components. In effect, this versatile test fixture design allowed for multiple DUTs to be tested simultaneously, increasing the throughput and efficiency of the system without requiring additional instrumentation. The circuit board positioning in the test fixtures. Environmental RF Test One of the more complex challenges involved validating the product's wireless interface. To do so, we needed to create a controlled environment free of RF noise and interference, opting to design in an RF Faraday enclosure to create such an environment. We developed a comprehensive test protocol that exercised the wireless PCBA across multiple power modes using a 3-bit control signal. Through a digital signal attenuator, we measured transmission power and signal quality across the ISM Band (Industrial, Scientific, and Medical), specifically at 838 MHz and 916 MHz frequencies. 
This test methodology supported the project requirements of: Validating signal strength at various Tx/Rx distances Verifying proper power mode transitions Documenting signal quality metrics for regulatory compliance Test Software & Regulatory Compliance: Our engineering team developed the test software in conjunction with the hardware design and build aspects of the project. Using our PCBACheck software framework, we developed individual test modules in LabVIEW using a hardware abstraction layer in the form of driver APIs and pre-existing measurement expertise. From there, we incorporated those discrete, reusable code modules into multiple TestStand sequences capable of executing the test plan for each individual DUT. TestStand and the PCBACheck automation framework for functional test provide the following benefits: Test sequence development – graphical sequence editor for code module integration and validation against test requirements Test execution – flexible process models and multi-threaded resource management Data access – customizable operator interfaces, report generation, and database connectivity Implementing a unified test automation framework across the DUTs helped with code reuse, debugging, and test data repeatability. Having reliable, consistently formatted test data helped our client with their Factory Acceptance Test (FAT) phase and lowered the effort to collect and report on compliance metrics for the FDA, avoiding many hidden costs and unpredictable headaches. Golden PCBA sample loaded into bed-of-nails fixture during FAT Full Turnkey Test Coverage Overall, the Cyth team delivered a complete turnkey solution of two PCBA functional test enclosures and a final assembly test for our client's patient monitoring product within schedule and budget. The unified test platform provided a versatile fixture design and intelligent instrumentation routing, helping control capital equipment costs. 
The test automation software provided full test coverage for the various individual DUTs, making test data easily accessible and repeatable. In effect, this solution helped our client spin up their factory test capabilities sooner than an in-house approach, clearing the product's path to market and easing downstream quality control efforts.
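The reuse pattern at the heart of this case study, discrete measurement modules behind a hardware-abstraction driver API, composed into per-DUT sequences, can be sketched in Python. All class and step names below are hypothetical stand-ins; the delivered system used LabVIEW code modules executed by TestStand, not this code.

```python
# Illustration of the reusable-measurement-module idea: each test step
# talks to instruments only through a small driver API, so the same
# step can run in any DUT's sequence. Names and limits are hypothetical.

class SimulatedSupply:
    """Stand-in for a programmable power supply driver."""
    def __init__(self, readback_v):
        self._readback_v = readback_v

    def set_voltage(self, volts):
        self._setpoint = volts

    def measure_voltage(self):
        return self._readback_v


def check_rail(supply, nominal_v, tol_v):
    """Reusable step: program a rail, read it back, return pass/fail."""
    supply.set_voltage(nominal_v)
    measured = supply.measure_voltage()
    passed = abs(measured - nominal_v) <= tol_v
    return {"step": f"rail_{nominal_v}V", "measured": measured, "passed": passed}


def run_sequence(steps):
    """Tiny sequencer: run steps, stop on first failure, collect a report."""
    report = []
    for step in steps:
        result = step()
        report.append(result)
        if not result["passed"]:
            break
    return report
```

Because each step depends only on the driver interface, swapping the simulated supply for a real instrument driver leaves the step and the sequence untouched, which is the same property that let one set of LabVIEW modules serve all six DUTs.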

  • Millisecond Control for Simulating Human Lung Behavior

    Cyth delivers lung simulation tool to MedTech startup, bringing complex mathematical models to life on NI sbRIO with FPGA millisecond control. Project Summary Medical technology startup developed a breakthrough lung simulator using Cyth's CircaFlex platform to achieve human-like respiratory accuracy for healthcare training, eliminating mechanical limitations of existing simulators through software-controlled parameter adjustment. System Features & Components Deterministic, closed-loop control to achieve microsecond-level motor control and millisecond response times for real-time simulation of lung physiology equations Software-controlled lung simulation to enable automated operation and seamless integration with existing heart simulation device to enable comprehensive training scenarios Linear actuator design to accurately simulate inhalation and exhalation volume exchanges with precise physiological feedback Outcomes Achieved human-like respiratory accuracy with nanosecond motor control and millisecond system response times Created simulator that is expected to disrupt the market, offering continuous programmatic parameter control at lower manufacturing costs than mechanical competitors Enabled comprehensive medical training across full spectrum of respiratory diseases and emergency scenarios through integrated cardiovascular-pulmonary system Transitioned manufacturing to Cyth Systems Technology at-a-glance NI sbRIO-9651 System on Module (SOM) Cyth CircaFlex-304 modular control board Cyth CircaFlex Stepper Drive Module LabVIEW Real-Time Module LabVIEW FPGA Module Mass Flowmeter and Controller (FMA-A2321) SICK Displacement Measurement Sensor (OD1-B100C50I14) Dyadic SCN5 series Mechatronics Cylinder Round Bellow with Cuff Ends Revolutionizing Medical Training Today, you will take approximately 22,000 breaths of air (Breathing, n.d.). Each one is part of the complex interplay of biological processes that most of us never need to think about. 
When it comes to medical emergencies and chronic respiratory issues, medical professionals must make split-second decisions about which life-saving interventions a patient might need. Many times, surgeons face an uphill battle when it comes to learning how to make those decisions. Opportunities to handle unique situations and uncommon issues cannot be properly addressed in medical textbooks or by operating on a cadaver. Furthermore, surgical teams must manage extensive patient profiles filled with complex cases, ones that are nearly impossible to learn during typical training. One medical technology startup recognized that this gap in respiratory training was putting healthcare providers and patients at risk. They set out to revolutionize how medical professionals can prepare for some of these critical moments: Emergency Medicine Training : Preparing doctors for asthma attacks, collapsed lungs, and respiratory failure scenarios Surgical Education : Training anesthesiologists and surgeons on ventilator management during operations Nursing Competency : Ensuring respiratory therapists can recognize and respond to changing patient conditions Medical Device Training : Teaching proper ventilator operation and troubleshooting across different patient scenarios Modeling Life-like Human Physiology Traditional lung simulators on the market were mechanical, inflexible devices that failed to adequately prepare medical professionals because they couldn't realistically replicate varying patient profiles and breathing models. The startup recognized their need for an advanced solution partner to help them address: Training Realism Deficiencies:  Existing lung simulators required manual adjustments to change airway resistance, meaning students couldn't experience the seamless, dynamic changes that occur in real patients. 
Mechanical iris systems and solenoid-based designs created jerky, unrealistic responses that failed to replicate the smooth, continuous characteristics of human respiratory function. Healthcare providers were lacking exposure to the full spectrum of respiratory diseases and emergency scenarios they could encounter. Integration & Complexity Barriers:   The MedTech startup had already developed a sophisticated heart simulator, but existing lung simulators were challenging to integrate into a single system for comprehensive cardio-pulmonary medical training. Available solutions were either prohibitively expensive for educational institutions or so mechanically complex that they required extensive maintenance and specialized technical support, limiting their practical deployment in training environments. Designing-In Differentiation Considering the startup's ambitious vision to create the most realistic, responsive lung simulator ever developed, the system had to execute calculations and corresponding physical responses within milliseconds to maintain realistic human breathing patterns, as any delays would immediately break training realism and compromise educational value. 
Creating a life-like simulation required a system with dynamic range and continuity across the full spectrum of respiratory conditions, including: Continuous adjustment of airway resistance, from healthy breathing to severe disease states Dynamic changes in lung compliance based on simulated conditions like emphysema, collapsed lungs, and asthma attacks Precise parameter control for seamless transition between emergency scenarios Authentic physiological responses that match real patient variability The new lung simulator needed to function both as a standalone training device and as an integrated component with the medtech startup's existing heart simulator to help ensure surgical teams have access to comprehensive training simulations that demonstrate the intricate interactions between cardiovascular and pulmonary systems during medical emergencies. Striking a balance between advanced capability and economic viability was critically important to help encourage market adoption of the lung simulator. The final design had to be manufacturable at a cost point that would make it accessible to medical schools, hospitals, and training centers while maintaining the sophisticated performance characteristics required for effective education. Advanced Control Architecture The startup chose Cyth Systems because of their existing working relationship and proven expertise in solving complex real-time control challenges. Their team's expertise with precision motion control and LabVIEW programming made Cyth uniquely qualified to tackle the demanding requirements of human respiratory simulation. 
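The startup's lung-function equations are proprietary and not given here, but simulators of this kind commonly build on a single-compartment model of lung mechanics, P_aw(t) = V(t)/C + R·Q(t), where Q = dV/dt. The sketch below integrates that standard model with hypothetical parameter values, purely to illustrate how airway resistance (R) and compliance (C) become continuously software-adjustable parameters.

```python
import math

# Single-compartment lung mechanics (a standard textbook model, not the
# startup's proprietary equations):
#     P_aw(t) = V(t)/C + R * Q(t),   Q = dV/dt
# so the flow is Q = (P_aw - V/C) / R. Parameter values are hypothetical.

def simulate_breath(R=10.0, C=0.05, period_s=4.0, dt=0.001):
    """R in cmH2O/(L/s), C in L/cmH2O; half-sine driving pressure.

    Returns the tidal volume (peak lung volume) in litres.
    """
    v, volumes, t = 0.0, [], 0.0
    while t < period_s:
        # Inspiratory pressure for the first half of the cycle, zero after.
        p_aw = 5.0 * max(0.0, math.sin(2 * math.pi * t / period_s))  # cmH2O
        q = (p_aw - v / C) / R   # airway flow, L/s
        v += q * dt              # integrate volume
        volumes.append(v)
        t += dt
    return max(volumes)
```

Raising R (as in a simulated asthma attack) immediately reduces the tidal volume delivered by the same driving pressure, which is exactly the kind of smooth, program-controlled parameter change the startup needed instead of manual mechanical adjustments.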
The NI sbRIO-9651 was selected as the control platform to integrate into the final solution because it addressed the need for high-accuracy mathematical calculations alongside precise system control. Key benefits of the NI sbRIO-9651: 667 MHz dual-core CPU enabled multitasking and parallel processing Zynq-7020 FPGA provided deterministic, real-time system performance Compatibility with LabVIEW FPGA software streamlined FPGA programming because it abstracted away the low-level complexities of Hardware Description Languages (HDLs) Comparing CPU and FPGA-based processing The capabilities of the NI sbRIO-9651 were further expanded by the Cyth CircaFlex-304. This COTS daughterboard for the NI sbRIO enabled: rapid connectivity to digital TTL lines and analog voltage input channels I/O expansion capability to enable comprehensive lung simulation control and futureproof the product CircaFlex-304 On top of this reliable hardware platform, Cyth designed a software solution that could execute the customer's complex lung function equations in real time. Incorporated Cyth's proprietary, field-tested real-time software architecture to ensure system reliability and maximize the processing capabilities of the CPU Developed custom FPGA behavior to instantaneously calculate pressure and volume variables, based on the customer's mathematical equations, to control motor speed with life-like accuracy Precision Motion for Linear Actuation: Cyth developed a linear actuator system using a precision motor paired with a rubber bellows. Inhalation was simulated by actuating the motor to expand the volume of the bellows. Exhalation was simulated by the motor returning to the home position, decreasing the volume of the bellows. Integration of a SICK displacement measurement sensor ensured all components operated within timing tolerances to continuously demonstrate critical organ interactions during medical emergencies.
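The customer's lung function equations are proprietary, but the physical relationship the FPGA evaluates can be illustrated with the standard single-compartment respiratory model, where driving pressure is the sum of an elastic term (volume over compliance) and a resistive term (resistance times flow). A minimal sketch in Python, with placeholder values rather than clinical data:

```python
# Standard single-compartment lung model (equation of motion):
#   P(t) = V(t)/C + R * Q(t)
# Illustrative only -- the startup's actual equations are proprietary,
# and the R and C values below are placeholders, not clinical data.

def driving_pressure(volume_l, flow_lps, compliance_l_per_cmh2o, resistance_cmh2o_per_lps):
    """Pressure (cmH2O) needed to sustain a given lung volume and airflow."""
    elastic = volume_l / compliance_l_per_cmh2o       # elastic recoil term
    resistive = resistance_cmh2o_per_lps * flow_lps   # airway resistance term
    return elastic + resistive

# Same breath, two scenarios: healthy airways vs. elevated (asthma-like) resistance
healthy = driving_pressure(0.5, 0.5, compliance_l_per_cmh2o=0.1,
                           resistance_cmh2o_per_lps=2.0)    # ~6 cmH2O
obstructed = driving_pressure(0.5, 0.5, compliance_l_per_cmh2o=0.1,
                              resistance_cmh2o_per_lps=20.0)  # ~15 cmH2O
```

Adjusting R and C at runtime is what lets software alone morph the simulator between healthy breathing and disease states.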
Software-Controlled Variability: Cyth's solution enabled programmatic control of airway resistance and lung compliance entirely through software, eliminating the need for the manual adjustments required by most lung simulation solutions. Cyth's custom software control provided smooth, continuous adjustment ranges, and mechanical wear and continuous maintenance requirements were mitigated by software enforcement of hardware operating ranges. Precision Timing Solutions: Initial testing revealed communication delays that compromised human-like responsiveness, so Cyth's engineers chose to bypass the stepper motor's control board: Spliced directly into TTL lines for step and direction control Replaced the RS232 communication with a custom CircaFlex Stepper Drive module Achieved nanosecond-range motor operation and one-millisecond system response delays The NI sbRIO-based design, paired with Cyth's CircaFlex platform, enabled seamless integration with the customer's existing heart simulator. The integration of these two simulators created a comprehensive cardiopulmonary training system capable of demonstrating the critical interactions between these organ systems during medical emergencies. Cyth's LabVIEW FPGA programming expertise, paired with their field-tested control system software architectures, allowed them to create an intuitive solution that medical educators can use to program diverse disease scenarios while maintaining the mathematical precision required for authentic training experiences. With the integration of the lung and heart simulators into a single system at an optimized price point, the MedTech startup decided to entrust the manufacture of their products to Cyth's Manufacturing Engineering team in San Diego, California.
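To illustrate the step/direction interface described above: once the controller drives the TTL lines directly, motor velocity is set purely by the timing between step pulses. The Python sketch below shows the underlying arithmetic; the steps-per-revolution and lead-screw figures are hypothetical placeholders, not the actual mechanism's parameters.

```python
# Illustrative step/direction math for a lead-screw-driven bellows.
# The real implementation runs on the sbRIO's FPGA in LabVIEW FPGA;
# STEPS_PER_REV and LEADSCREW_MM_PER_REV here are hypothetical values.

STEPS_PER_REV = 200          # typical 1.8-degree stepper motor
LEADSCREW_MM_PER_REV = 5.0
STEPS_PER_MM = STEPS_PER_REV / LEADSCREW_MM_PER_REV   # 40 steps per mm

def step_command(velocity_mm_per_s):
    """Return (direction_level, step_interval_s) for a desired linear velocity."""
    direction = 1 if velocity_mm_per_s >= 0 else 0    # TTL direction line level
    step_rate = abs(velocity_mm_per_s) * STEPS_PER_MM # steps per second
    interval = 1.0 / step_rate if step_rate else float("inf")
    return direction, interval

d, dt = step_command(25.0)
# 25 mm/s * 40 steps/mm = 1000 steps/s, i.e. 1 ms between step pulses --
# timing an FPGA can hold deterministically, unlike an RS232 link.
```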
Economically Viable, Technically Superior The medtech startup expects to disrupt the lung simulation market by outperforming their competitors with a solution capable of seamlessly and reliably delivering comprehensive training scenario coverage with realistic physiological responses. The simulator's nanosecond-level motor control and millisecond response times deliver life-like respiratory dynamics to prepare healthcare providers for real-world emergencies. For the MedTech startup, the most differentiated capabilities that the NI sbRIO and Cyth CircaFlex brought to the solution were: FPGA-enabled precision for calculating simulation parameters with continuous adjustment response times Hardware standardization across heart and lung simulators for improving system reliability and simplifying manufacturing processes Flexible hardware and software platforms for ensuring adaptability of the system to future requirements The MedTech startup is primed to penetrate their target market of educational institutions with a clear business case: Technically superior simulations deliver high training effectiveness Economically viable price point facilitates capital equipment acquisition The MedTech startup and Cyth continue to collaborate on advancing cardiopulmonary simulation. Their goal of continuous improvement in healthcare training technology ensures that these products will remain at the forefront of the medical education industry. Citations Breathing. Canadian Lung Association. (n.d.). https://www.lung.ca/lung-health/lung-info/breathing

  • Biotech Startup Accelerates Funding with Scalable Reference Design

Biotech startup secured millions in funding by demonstrating a bioreactor proof-of-concept in three months using Cyth’s embedded control system architecture. Project Summary A biopharmaceutical startup with novel bioreactor technology needed a flexible prototype to rapidly refine cell therapy production recipes within months to secure investor funding. Cyth delivered a comprehensive control system solution that enabled rapid prototyping and iteration, allowing the startup to demonstrate their prototype to investors in three months and exceed funding goals. System Features & Components Breakthrough bioreactor technology with unprecedented cell growth yields poised to disrupt the industry Three months to develop and refine prototype system to secure investor confidence during next round of funding Regulate temperature, moisture, carbon/oxygen levels, mass flow, and pH levels with microsecond-level precision Flexible control system that can be efficiently integrated into future bioreactor models Outcomes Rigorously tested prototype enabled the startup to demonstrate unparalleled cell growth yields, secure investor confidence, and exceed their funding goals. This success positioned them for accelerated market expansion with a scalable control solution for rapid product line growth. Technology at-a-glance Hardware NI sbRIO-9608 Cyth CircaFlex-315 Stepper drives (x4) pH monitoring module Carbon, oxygen, and nitrogen level sensor module Temperature control module Mass flow control module Fluid flow control module Software NI LabVIEW NI LabVIEW FPGA Critical Medical Therapies Cell and gene therapies offer groundbreaking treatments for some of humanity’s most common and menacing diseases, yet their cost and production complexity hinder scalability. Cyth’s customer, a biopharmaceutical startup, developed a novel bioreactor technology capable of dramatically increasing the yield of cell-based therapies.
To secure investor confidence, they needed a flexible prototype to rapidly refine recipes with minimal downtime and waste. Time was critical: they required an optimized prototype and recipes within months to secure investor confidence and continue developing their innovative product. By partnering with a control system design expert, they met their timeline, advancing their goal of making life-saving therapies accessible to more patients. Flexibility or Performance? Why not Both? Bioreactor systems are used for the research & development of biopharmaceuticals like vaccines, cell therapies, and gene therapies. The most expensive gene therapy available today costs $4.25M per dose (Buntz, 2024), with gene therapies averaging between $1M and $2M per dose (Staff, 2023), making reliability and precise control during production a necessity. These self-contained systems regulate all aspects of the cell culture vessel, including: Temperature Moisture Carbon and oxygen levels Mass flow valves pH level Depending on the cell-based therapy produced and intended yield volumes, there is a lot of variation in the number and type of system I/O, making it necessary to leverage a control system that is capable of coordinating complex control loops, while enabling flexibility in accommodating evolving production needs. With plans to quickly raise a round of funding, then capture and expand market share by releasing a family of bioreactor models, this bioreactor startup recognized the importance of a flexible control solution that would enable them to quickly scale system I/O and build out their product line. This startup had their product requirements finalized, but with a small team and minimal embedded design experience, they knew it would be challenging to develop a prototype on their own. Rapid Prototyping & Iteration To get their products to market quickly while optimizing reliability, this startup decided to work with a product design expert.
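As an illustration of the kind of regulation loop a bioreactor must coordinate, the sketch below shows a generic PID temperature loop in Python. This is not the startup's actual control code (which ran in LabVIEW on the sbRIO); the gains and setpoint are placeholders.

```python
# Generic PID loop of the kind a bioreactor runs for temperature, pH,
# mass flow, etc. Gains and setpoint below are illustrative placeholders,
# not the startup's tuned values.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        """One control step: returns the actuator drive command."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# 37 C is a common mammalian cell culture setpoint; gains are arbitrary.
heater = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=37.0)
output = heater.update(measurement=35.0, dt=0.1)  # heater drive command
```

In practice each regulated parameter runs its own loop instance, which is why tight synchronization across loops matters.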
For the job, they turned to Cyth Systems because of their proven success in designing novel Life Sciences equipment for broad-based customer adoption. Cyth worked with this company to transform their requirements into a design, including a core control system, sensors, mechanicals, and a project plan for rapid design iteration: Creating a prototype system within weeks Incorporating the company’s ongoing product feedback from firsthand use over the course of weeks Refining the design into a robust product that is easily manufactured Functional block diagram: (1) process control hardware (e.g. thermocouple, pH probe, mass flow controller, etc.) monitors and controls bioprocesses, (2) control system hardware (sbRIO and CircaFlex) enables simplified connectivity to process control hardware and integrated signal conditioning, (3) Windows PC, running LabVIEW and LabVIEW FPGA software, includes a user interface for modifying recipes, and (4) database connectivity allows for secure and reliable data management. This startup’s proof-of-concept system required integration of: Stepper drives (x4) pH module Carbon, oxygen, and nitrogen level sensor module Temperature module Mass flow (gas release valve) control module Fluid flow control module Cyth selected the NI single-board RIO (sbRIO) platform for its: Deterministic control and processing capabilities FPGA-enabled coordination of complex control loops Native compatibility with LabVIEW and LabVIEW FPGA software, enabling the rapid implementation and modification of control and processing features NI sbRIO-9608 The CircaFlex platform enhanced the sbRIO’s capabilities by: Simplifying connectivity and signal conditioning to mitigate the need for custom hardware development Enabling rapid I/O module integration Cyth CircaFlex-315 The sbRIO and CircaFlex hardware, paired with Cyth’s field-tested bioprocess software architecture, allowed for rapid implementation and refinement of the startup’s control loops.
The prototype enabled them to test out their bioprocesses and assess the machine’s usability. Example bioprocess user interface for monitoring process control and modifying recipes Through this approach of rapid prototyping and iteration, this startup had a Cyth-tested prototype within a matter of weeks that included: High-quality measurements Tight synchronization between complex control loops Simple UI to enhance and facilitate user experience Flexibility to modify recipes in real time   The flexibility of Cyth’s bioprocess control reference architecture enabled this startup to reuse the design for multiple products, helping them rapidly expand their offerings with a solution that scales from R&D through full-scale pharmaceutical manufacturing. The major components included in the final design were: NI sbRIO-9608 Cyth CircaFlex-315 NI LabVIEW NI LabVIEW FPGA Designed for Scalability The rapid development cycle and rigorous parallel testing enabled this startup to demonstrate the prototype to investors in three months and exceed their initial funding goals. For investors, the most compelling elements of this bioreactor startup’s prototype were: Unprecedented cell growth yields, poised to disrupt the bioreactor industry Market-readiness of prototype with a low-risk and high-ROI R&D investment Reliability of system, built on an industry-standard hardware platform with a field-tested software architecture The solution was designed with scalability and repeatability of recipes in mind. The robust, intuitive user experience mitigates the learning curve for users and encourages adoption from the lab through commercialization. System uptime reduces the risk of wasted reagents, and precise control paired with continuous monitoring ensures the execution of recipes exactly as specified by the operator. As this startup prepared to go to market, they wanted to expand their product line to include more sizes and variants of bioreactors. 
The flexibility of the embedded reference design made it quick and easy to scale up production on all product lines simultaneously. Citations Buntz, B. (2024, April 26). With costs up to $4.5M, cell and gene therapies redefine norms. Drug Discovery and Development. https://www.drugdiscoverytrends.com/how-price-safety-and-efficacy-shape-the-cell-and-gene-therapy-landscape/  Staff, G. (2023, December 27). Cell and gene therapy manufacturing costs limiting access. https://www.genengnews.com/insights/cell-and-gene-therapy-manufacturing-costs-limiting-access/

  • E-Bike Battery Testing and Validation Using BatteryFlex

E-Bike Battery Testing and Validation is performed using the BatteryFlex platform. Battery Testing Project Summary A Southern California E-Bike (electric bike) manufacturer approached us regarding a system to test and validate their E-Bike battery pack assembly. The team felt it was crucially important to do a variety of tests, including deep discharge and repeat full-power charge cycling, at least in the early days of manufacturing. Battery Test Solution & Results Using our BatteryFlex proprietary testing software and PXI data acquisition platforms, we custom-designed a fixture meeting our client’s needs and product specifications for improved quality assurance of their E-Bike batteries. This has reduced our client’s warranty return rate by 12%. Industry Consumer Electronics, Manufacturing Technology at-a-glance Cyth Systems' BatteryFlex 4-Quadrant SMU (±60V, ±3A, 100fA Meas) 7.5-Digit Digital Multimeter (±3A, 1.8MS/s) Thermocouple or RTD Measurement Custom Serial Communication Battery Testing Project Background The explosion of E-Bikes in the last few years has enabled people to go the extra mile. From assisted pedaling to fully assisted riding, e-bikes combine the benefits of being active with a rechargeable battery that maintains integrity over thousands of charges. A Southern California E-Bike manufacturer approached us regarding a system to test and validate their E-Bike batteries. Their undesirable level of battery malfunctions and warranty returns prompted them to seek a solution for the testing of their E-Bike batteries. Left: PXI data acquisition platform. Right: BatteryFlex LabVIEW user interface (UI) showing live test data Upon customizing our BatteryFlex platform to our customer’s needs, we began to run overnight tests of 8+ E-Bike batteries, loaded simultaneously into the BatteryFlex fixture, with detailed test reports generated by morning.
We performed the following pass/fail tests according to the customer’s specifications: Open Circuit Voltage (OCV) Power Cycle Test Capacity (Static, Script, Pattern/Pulse) DC Internal Resistance (DCIR) AC Internal Resistance (ACIR) A pass validated the battery’s function for integration into the final product, whereas a failure prevented a faulty battery from reaching the customer. Using the BatteryFlex platform increased the accuracy of the client’s voltage and current measurements of their outsourced Lithium-Ion batteries. BatteryFlex Features High-Resolution, High-Speed Measurements PXI Platform Playback and Record Load Cell Scenarios Data Logging, Analysis, and Storage Output to any format report or datasheet Thermal Monitoring and Measuring Safety Interlocks and Shutdown Customized Battery Test Without the Risk Our BatteryFlex platform allows for the accurate test and measurement of our customer’s outsourced E-Bike batteries, ensuring a faster speed-of-test and improved quality assurance. Our automated fixture identifies the individual capacity of each battery cell undergoing simultaneous testing to determine pass or fail according to the customer’s specifications. Through active partnership, we have reduced our customer’s E-Bike battery warranty return rate by 12%. We also delivered a proof of concept that was developed into a complete test fixture and deployed at our customer’s headquarters within a 10-week timeline. Datasheet and BatteryFlex Overview BatteryFlex System Specifications BatteryFlex Datasheet available for download
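To illustrate one of the measurements above: DC internal resistance (DCIR) is commonly derived from a two-point load test as the ratio of the voltage change to the current change, DCIR = ΔV/ΔI. A minimal sketch with illustrative numbers, not real test data:

```python
# Two-point DCIR calculation: apply two load currents, measure the settled
# terminal voltages, and take the slope DeltaV / DeltaI.
# Pack values below are illustrative, not real E-Bike test data.

def dcir_ohms(v1, i1, v2, i2):
    """DC internal resistance from two (voltage, current) load points."""
    return (v1 - v2) / (i2 - i1)

# Example: a 48 V pack sags from 48.2 V at 0.5 A load to 47.9 V at 3.0 A load
r = dcir_ohms(48.2, 0.5, 47.9, 3.0)   # ~0.12 ohms
```

A pack whose DCIR exceeds the customer's specification limit would fail and be rejected before integration into the final product.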

  • Saving $400M Through Condition-Based Maintenance Approach

    Discover how a utility provider is saving millions per year with an asset condition monitoring solution built on NI Single-Board RIO and LabVIEW FPGA. Electrical substation with control panels and industrial circuit breakers. Project Summary An electrical utility provider, managing tens of thousands of circuit breakers, faced critical operational blind spots due to their reactive maintenance approach. They had no means of acquiring or analyzing the day-to-day operational data needed to implement preventative maintenance strategies and improve overall operational efficiency. System Features & Components Asset health monitoring system capable of operating in extreme ambient conditions across geographically extensive service areas Comprehensive circuit breaker event data acquisition, analysis and alarming Dual form factor design: portable units for scheduled maintenance and permanently installed systems for continuous monitoring Cost-optimized price point per unit to make widespread condition-based monitoring economically viable across tens of thousands of circuit breakers Outcomes The collaboration delivered transformational results with projected savings of $400 million over 20 years and a six-year payback period. Operational efficiency improved through a reduction in reactive maintenance events, enhanced service reliability, improved data reporting and a decrease in the number and duration of service outages. The electrical utility provider transformed their reactive and rigid service schedule into a data-driven, condition-based maintenance approach. 
Technology at-a-glance Circuit Breaker Analyzer (CBA) - Portable Unit: NI CompactRIO-9053 NI LabVIEW Real-Time & FPGA MSO RTOS daemon software Lens Client™ software for laptop-based configuration and analysis   Circuit Breaker OnLine Monitor (CBOLM) - Permanent Unit: NI Single-Board RIO-9651 (SOM) Cyth CircaFlex-580 custom daughterboard NI LabVIEW Real-Time & FPGA MSO RTOS daemon software Lens Enterprise™ Server for centralized data management Critical Grid Infrastructure Circuit breakers are critical parts of the electrical infrastructure, serving as crucial power delivery devices that continuously transfer electricity across the network. They instantaneously respond to current levels outside of pre-determined thresholds, making them vital system management devices for protecting grid infrastructure and enabling operators to intervene when circuit breaker maintenance is necessary. Proactive Asset Monitoring On a mission to mitigate the risks associated with circuit breaker maintenance, one utility provider knew they needed more data insights to reduce instances of unexpected service outages and circuit breaker events. Considering that circuit breakers in the electrical grid are expected to have a long operating life and high availability, it was critical for this utility provider to identify a solution that would enable them to leverage their service and maintenance crews optimally, while mitigating any risk to crews posed by unnecessary site visits. Their existing solution for addressing circuit breaker maintenance needs was to evaluate and diagnose issues during planned maintenance events or other onsite crew operations, resulting in the inability to capture the data from unanticipated breaker events during typical, day-to-day operations. Traditional maintenance approaches can result in operational blind spots, preventing comprehensive equipment performance tracking and leaving utility providers vulnerable to unexpected failures. 
A lack of data insights made it difficult to proactively stabilize the workload for service personnel, and challenging to recommend preventative maintenance services for their assets. To hit their grid availability and cost reduction goals, this utility provider needed a solution that would enable them to proactively manage their infrastructural assets. Building in Predictive Maintenance The types of deterioration that precede, and cause, most circuit breaker failures are detectable ahead of an actual failure. The ideal solution for safeguarding the utility provider's infrastructure and workforce would be to install smart electrical grid technologies for the measurement and analysis of all circuit breaker events. They needed a continuous asset health monitoring system that was reliable enough to continuously monitor the circuit breaker equipment and robust enough to handle the extreme weather and conditions throughout their extensive service area. With tens of thousands of circuit breakers across their service area, they needed to optimize the price point per installed monitoring system to make widespread condition monitoring economically viable. For this utility provider, evaluation and diagnosis of circuit breaker issues were limited to planned maintenance events or onsite operations, leaving them without visibility into normal and fault-condition breaker operations. This lack of insight made workload stabilization difficult for service personnel and prevented effective implementation of preventative maintenance. To maximize asset availability and optimally manage the workload of service crews, the utility provider needed to continuously monitor their circuit breakers for activity outside of normal operating thresholds. Data insights into circuit breaker activity would enable the utility provider to proactively prioritize, and schedule, maintenance based on the urgency and criticality of the issues identified. 
This electrical utility’s regulatory requirements mandated annual testing of every circuit breaker in operation. Compliance with regulations and specifications is critical for public safety and ensuring reliable access to electricity. To ensure regulatory compliance, a circuit breaker must exhibit timely and adequate responses to electrical events. The performance of these circuit breakers is then provided to regulatory bodies in the form of data that characterizes the response of a circuit breaker. Considering that circuit breaker operations can occur at any time, compliance for a particular circuit breaker could be confirmed by observing how the breaker responded to an electrical event. With a solution for remotely and continuously logging data insights, this utility provider could dramatically reduce their cost of compliance without ever having to send a maintenance crew to the site. Seeking expert guidance, this utility provider turned to MSO Technologies for help delivering a solution capable of continuously monitoring their assets, providing data insights for predicting potential circuit breaker failures and ensuring regulatory compliance. The utility provider wanted two form factors of the same system to provide the insights they needed while they scaled out and deployed permanent monitoring solutions to their tens of thousands of circuit breakers. Circuit Breaker Analyzer (CBA): a portable, ruggedized unit taken onsite for testing during planned maintenance events. Hardware: NI CompactRIO-9053. Software: NI LabVIEW Real-Time & FPGA, MSO RTOS daemon. Circuit Breaker OnLine Monitor (CBOLM): permanently installed units providing event-triggered data monitoring. Hardware: NI Single-Board RIO-9651 (SOM), Cyth CircaFlex-580. Software: NI LabVIEW Real-Time & FPGA, MSO RTOS daemon. Table 1.
Descriptions of two form factors of circuit breaker monitoring systems One form factor would be a portable unit for assessing circuit breaker health during typical, reactive or programmed maintenance events. The data gathered onsite would then be immediately used to identify issues and aid personnel in returning the asset to service. The second form factor would be a permanently installed system that would continuously monitor, capture breaker events, and publish the data to a remote server for immediate analysis and notification if warranted.   Connecting Circuit Breakers to the IIoT MSO Technologies is an expert in delivering operational technologies and solutions to the energy industry that help seamlessly connect assets and business systems to the Industrial Internet of Things (IIoT). MSO turned to Cyth Systems for help because of their successes delivering highly responsive deployed measurement systems into extreme and challenging environments. Cyth designed and built the field signal interface for high-speed data acquisition. MSO’s software managed event trigger detection, data aggregation, and data publishing for subsequent analysis.  To address all the circuit breaker event measurement use cases for the utility provider, MSO and Cyth defined requirements for two physical form factors of system running the same software stack. Circuit Breaker Analyzer (CBA) Portable unit that utility maintenance teams take onsite to perform necessary measurements during scheduled maintenance events Using Lens Client™ software on a laptop to configure event thresholds and consume pre-trigger and post-trigger data published by the CBA for immediate analysis, charting, and upload to the central Lens Enterprise™ Server. 
NI CompactRIO hardware in a portable, ruggedized enclosure Standard connector enables connectivity with any brand of circuit breaker CBA Diagram - Components and I/O Circuit Breaker OnLine Monitor (CBOLM) Permanently installed enterprise IIoT solution that interfaces with both the centralized server and its clients Continuously measures key circuit breaker health indicators and selectively logs data based on pre-determined thresholds sbRIO platform chosen for its compatibility with the RIO software stack, smaller physical and energy footprints, cost point at deployment volumes, and measurement performance in harsh and fluctuating outdoor environments Datalogging is triggered by circuit breaker events Pre-trigger and post-trigger data is logged to Lens Enterprise™ Server Standard connector enables connectivity with any brand of circuit breaker CBOLM installed at utility provider's substation The CBA and the CBOLM are both based on the NI RIO platform, enabling re-use of the same software stacks and codebases across both system types. MSO developed the solution for interfacing between the utility provider’s IT infrastructure and the NI RIO-based data acquisition and analysis systems. They created a daemon to run on the NI Linux Real-Time Operating System, responsible for event threshold detection, securely logging event data and publishing to the remote server. Cyth developed a custom CircaFlex daughterboard to mate with the NI sbRIO-9651. The CircaFlex board enabled Cyth to incorporate the signal conditioning and connectivity necessary for ensuring data integrity and system reliability. The data acquired by the CBA and CBOLM is eventually pushed to a central database. The cRIO-9053 in the CBA unit acquires data that is then passed to a PC running MSO’s Lens Client™ software, which analyzes and charts the data for user review and transfers it to the central database. The NI sbRIO-9651 in the CBOLM continuously reads the signal trace inputs.
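The event-triggered capture described above, logging the data from immediately before and after a breaker event, is classically implemented with a continuously overwritten ring buffer: the pre-trigger samples are already in memory when the event fires. The Python sketch below illustrates the concept; the buffer sizes and threshold are placeholders, not the CBOLM's actual parameters.

```python
# Conceptual pre/post-trigger capture via a ring buffer. A real system does
# this on the FPGA against live signals; sizes and threshold here are
# illustrative placeholders.

from collections import deque

PRE, POST = 5, 5              # samples to keep before/after the trigger

def capture_event(samples, threshold):
    """Return the pre+post window around the first sample exceeding threshold."""
    ring = deque(maxlen=PRE)  # continuously overwritten pre-trigger history
    it = iter(samples)
    for s in it:
        if abs(s) > threshold:                       # breaker event detected
            post = [s] + [next(it) for _ in range(POST - 1)]
            return list(ring) + post                 # pre-trigger + post-trigger
        ring.append(s)
    return None                                      # no event in this stream

trace = [0, 0, 1, 0, 0, 0, 9, 8, 7, 2, 1, 0, 0]
window = capture_event(trace, threshold=5)
# window -> [0, 1, 0, 0, 0, 9, 8, 7, 2, 1]: five samples of history plus
# the event and its aftermath, ready to publish to the server.
```

Only these trigger windows, rather than the continuous raw stream, need to be published upstream, which keeps bandwidth and storage manageable across tens of thousands of units.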
When a circuit breaker event occurs, datalogging is triggered by the sbRIO to ensure that the data immediately before and after the breaker event is captured and sent via MQTT to the central server. The published data is inserted into the central database, analyzed, and notifications to maintenance personnel are made, if required. This data is then available for on-demand and remote viewing, so maintenance experts can retroactively view and analyze circuit breaker behavior and events to decide on the optimal path to resolution. The compatibility of MSO’s code with the RIO hardware on both the CBA and CBOLM, combined with Cyth’s field-tested software architecture, helped accelerate the development of both solutions and will help ensure system sustainability long into the future. The implementation of MSO’s circuit breaker monitoring solution delivered significant advantages across multiple operational dimensions: Hardware Customization with Software Flexibility: Commercial-off-the-shelf (COTS) hardware customized through software down to the transistor level provided exceptional measurement quality without sacrificing reliability Code Portability: Existing cRIO code was seamlessly transitioned to the sbRIO platform, reducing development time and validation requirements Optimized Connectivity: The 320-pin connector of the sbRIO-9651 system-on-module (SOM) provided superior integration capabilities for long-term deployment Scalable Architecture: This “semi-custom” approach delivered an ideal balance of performance and cost-effectiveness for large-scale deployment across thousands of units Comprehensive Integration: NI software libraries enabled rapid I/O integration and communications protocol support, while Real-Time Operating System capabilities ensured optimal, reliable performance Ultimately, MSO’s enterprise software solution enabled the utility provider to: Constantly acquire and analyze key indicators in circuit breaker events Incorporate numerous, custom 
trigger definitions to enable continuous detection and reporting of asset health and maintenance concerns Design in safeguards, like short-term onboard storage, to help protect against data loss when a system gets disconnected from the IIoT Deliver accurate and timely alerts through real-time analysis and reporting capabilities Improved Operational Efficiency Over the expected 20-year operating life of the CBA and CBOLM systems, MSO’s end-customer, the utility provider, stands to save ~$400 million, with a nominal payback period of six years. A large portion of cost savings can be attributed to increased service reliability, enhancing grid stability and performance. The installation of CBOLM units across the utility service area is expected to: extend the life of their existing circuit breaker population by one to three years dramatically reduce outages experienced by customers reduce the need for unplanned, reactive maintenance events Performing maintenance at the first sign of abnormal, or out-of-tolerance, circuit breaker behavior can help reduce further damage or deterioration. The CBOLM systems provide an unprecedented level of data insights into circuit breaker health, which enables the utility provider’s maintenance teams to prioritize the assets with the greatest need and intervene before a damaging or catastrophic failure can occur. The availability of proactive, data-driven, condition-based service recommendations fundamentally reshaped service scheduling, enabling the utility provider to prioritize maintenance based on actual equipment condition rather than rigid, predetermined intervals.
Reducing the frequency of unplanned maintenance activities has enabled this utility provider to lower its cost of asset maintenance and, most importantly, limit the risks to its maintenance personnel. Since the CBOLM system captures and logs data any time a circuit breaker event occurs, it dramatically reduces the number of maintenance events needed to ensure regulatory compliance of the circuit breakers. Maintenance teams have access to circuit breaker operating data via the centralized server and can therefore report on the regulatory compliance of breakers without needing to travel to site. This enhanced approach to monitoring and managing assets delivers improved operational efficiency, enhanced regulatory compliance, and a designed-in path to proactive maintenance, with the promise of substantial long-term cost savings. By transforming complex equipment monitoring needs into actionable intelligence, MSO and Cyth Systems empowered the utility provider to set new industry standards for predictive maintenance and grid management.

  • Precision Control System Advances Global Health

    Learn how a pharma process startup is exceeding vaccine quality metrics with their disruptive microfluidics technology built on the NI cRIO platform. Project Summary A pharma process startup needed a robust control system to integrate their disruptive pharmaceutical manufacturing IP into a full-scale production line. Cyth delivered a cRIO-based control system with a scalable software architecture for enabling future volume deployments. System Features & Components Enable reduction in cost per dose of vaccine through scale-out of biopharmaceutical production in flexible, localized deployment models Mitigate risks to project completion, due to cash flow constraints, through rapid prototyping and extensive parallel testing Develop and deploy a cRIO-based system that can be optimized for future deployments on NI's sbRIO platform Support the solution's regulatory compliance through highly-vetted system BOM, detailed project documentation and regular weekly collaborations Outcomes The beta system exceeded initial purity and homogeneity metrics, demonstrating superior vaccine quality and compelling a multinational pharmaceutical manufacturer to integrate this startup’s technology into their full-scale production line. The precision, reliability and scalability of the solution are helping to further global health initiatives by driving vaccine production costs down and enabling highly-localized manufacturing. Technology at-a-glance Hardware NI cRIO-9066 NI C Series modules Peristaltic pumps Mass flow controllers Stepper motors Customized industrial enclosure and electromechanicals Software NI LabVIEW NI LabVIEW Real-Time NI LabVIEW FPGA Reduce Cost with Scalable Technologies Following the COVID-19 pandemic, the need for equal, global access to high-quality vaccines has become evident. In many parts of the developing world, vaccines are difficult to access and prohibitively expensive. 
To achieve a 70% vaccination rate, low-income countries would need to increase their health expenditure by 30-60%, while high-income countries would only need to increase theirs by 0.8%. (1) To help reduce the cost of vaccines in the developing world, and further global health equity, one biopharmaceutical startup is determined to leverage their rapidly scalable microfluidics technology to: Increase the shelf life of injectable drugs through precise control in manufacturing Enable local production of vaccines in areas with limited infrastructure Facilitate the implementation of precise and repeatable vaccine production into continuous manufacturing workflows To prove the quality and efficacy of the mRNA vaccines manufactured with their novel microfluidics technology, they needed to provide their pharmaceutical manufacturing end-customer with a beta system capable of: Precisely and deterministically controlling fluid flow Occupying a relatively small footprint, compared to typical manufacturing “skids” Seamlessly integrating into a continuous manufacturing workflow To meet this demand, this startup turned to Cyth Systems to bring their vision to life in a few months, ensuring they could deliver an industrialized, robust system to a global biopharmaceutical manufacturer. Flexible, Deterministic Process Control Biopharmaceutical production tasks require a delicate orchestration of variables, including temperature, pressure, mass and flow rate. Slight, unexpected variations in any of these variables could compromise the quality and shelf life of the biopharmaceuticals produced, which could result in decreased efficacy of the drug for patients or high costs to manufacturers if regulation compliance is breached. 
To ensure the delivery of high-quality and cost-effective vaccines to patients, it’s necessary to: Optimize the scalability of bioprocesses from drug development through manufacture Mitigate the total cost of goods by minimizing batch failures and maximizing yield Maximize the ROI of capital assets through continuous manufacturing processes with minimal downtime To overcome these challenges, this startup needed a high-performance and versatile platform. They needed instrumentation capable of deterministic, high-reliability process control, while being flexible enough to adapt to changes in interconnected processes in the manufacturing line. Even though they had clear requirements for the platform they would integrate with their IP, their limited experience with advanced control system design put them at risk of missing delivery deadlines or running out of cash before a final solution was completed. Scalable and Flexible Architecture To implement an advanced control system and deliver a high-reliability product to their end customer in a few months, this biopharmaceutical startup partnered with Cyth Systems, an expert in industrial process automation and embedded control systems design. The solution was built on NI’s CompactRIO (cRIO) platform, a flexible, high-performance control system ideal for complex biopharmaceutical applications. With such high levels of precision and control needed, the platform’s real-time operating system (RTOS) and field-programmable gate array (FPGA), were critical. 
NI cRIO-9066 chassis RTOS key features: Deterministic performance: predictable and consistent execution of tasks with minimal jitter Real-time control: precise timing and low-latency responses Reliability: stable platform for mission-critical applications, reducing the risk of system failures Security: native support for Security-Enhanced Linux FPGA key features: High-speed performance: control loop rates can exceed 100 kHz, enabling rapid response for time-critical processes Custom timing and triggering: precise, reliable control of system operations through the implementation of advanced timing and triggering directly on the hardware Parallel processing: inherent parallelism of the FPGA enhances overall system efficiency through simultaneous execution of multiple tasks Flexibility and customization: software implementation of custom logic, signal processing and control algorithms Cyth worked closely with the startup throughout the system development process. Their objectives were twofold: To meet the startup’s immediate beta-testing requirements. To establish a clear, scalable path to long-term product development goals by implementing a scalable and flexible software architecture. Custom user interface alongside cRIO with custom breakout boards and wiring, all developed and built by Cyth. The core aspects of the solution included: Real-time control: High levels of time determinism were essential to ensure precision operation and seamless orchestration of processes in the production line. Integrated inline sensing: Sensors for temperature, pressure, flow rate, and mass were integrated inline to ensure continuous, closed-loop monitoring of the production process. Iterative Development: Regular project meetings were held to incorporate feedback, allowing for ongoing refinement of the system to meet evolving needs. Scalable Architecture: The system was designed with scalability in mind, ensuring it could grow alongside the startup’s expanding production capabilities.
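As a rough illustration of the kind of fixed-rate, closed-loop regulation the FPGA performs, the sketch below runs a discrete PI controller against a simulated first-order flow process. The gains, time constants, and 100 kHz loop rate are hypothetical stand-ins, not the startup's actual algorithm (which is implemented in LabVIEW FPGA):

```python
# Hypothetical discrete PI control loop of the kind deployed on the
# cRIO FPGA. On hardware this executes at a fixed rate (e.g. 100 kHz);
# here we simulate a first-order flow process settling to a setpoint.
def pi_step(error, integral, kp, ki, dt):
    integral += error * dt                  # accumulate integral term
    return kp * error + ki * integral, integral

setpoint = 10.0      # target flow rate (arbitrary units)
flow = 0.0           # simulated process variable
integral = 0.0
dt = 1e-5            # 100 kHz loop period
tau = 0.01           # plant time constant: 10 ms (illustrative)

for _ in range(200_000):                    # 2 s of simulated time
    u, integral = pi_step(setpoint - flow, integral,
                          kp=0.5, ki=50.0, dt=dt)
    flow += (u - flow) * dt / tau           # first-order plant response
```

Running the same difference equations in FPGA fabric rather than on a CPU is what gives the loop its deterministic, jitter-free timing.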
Additionally, the cRIO security features facilitated secure, remote access to the system, enabling Cyth to help the biopharma startup incorporate adjustments and optimizations to the system during Site Acceptance Testing. The final solution was a package of hardware and software: Hardware: cRIO-9066 NI C Series modules Peristaltic pumps Mass flow controllers Stepper motors Custom, folded aluminum enclosure with printed vinyl labels Software: NI LabVIEW NI LabVIEW Real-Time NI LabVIEW FPGA NI C Series Modules Localized Manufacturing Cyth transformed the biopharma startup’s requirements into a robust and flexible platform for producing vaccines. The beta system delivered to the biopharmaceutical manufacturer was integrated into a small-scale production line. The vaccines produced exceeded the initial metrics for purity and homogeneity. Recently, this multinational biopharmaceutical manufacturer decided to integrate this pharma process startup’s technology into their full-scale production line because of its exceptional performance and ability to adapt to future production requirements. Specifically, they highlighted: Excellent quality of manufactured mRNA vaccines High levels of precision in process control Expandability of I/O for more complex, or higher throughput production in the future High reliability of system throughout continuous manufacturing testing This novel microfluidics solution enables the production of superior quality vaccines in highly localized settings. The high precision, reliability and scalability of this solution are poised to dramatically reduce the cost of vaccine production. As this technology is integrated into full-scale manufacturing, it brings the biopharmaceutical startup closer to realizing its vision of eliminating global vaccine inequity by increasing access to life-saving biologics worldwide. Citations United Nations Development Programme. (2021, April 18). Global Dashboard for vaccine equity: Data Futures Exchange . 
Data Futures Exchange. https://data.undp.org/insights/vaccine-equity

  • Precision frequency controller enhances radiation targeting accuracy in cancer treatment

Discover how a leading medical device manufacturer developing next-generation tomotherapy equipment used NI and Cyth technology to build a high-precision X-ray pulse controller. Tomotherapy equipment in a clinical environment Project Summary Cyth developed a precision timing and control system using sbRIO, CircaFlex, and LabVIEW to automate radiation pulse generation and improve positional accuracy for tumor targeting while minimizing collateral damage to healthy tissue. System Features & Components Deterministic control down to 250μs loop rates with beam power pulses up to 400μs improving radiation targeting precision. 40 kHz pulse rate handling and analog readback from pulsing subsystem. Parallel control of three stepper motors for frequency, power, and focus adjustment. Outcomes Accelerated the product’s prototyping and design validation phase by 4-6 months. Minimized system integration costs by providing onsite bring-up support, including documentation, training, and calibration. Technology-at-a-glance NI sbRIO-9606 running 20MHz FPGA control loop CircaFlex for I/O system integration and control loop design LabVIEW control and automation framework Precision stepper motors Tomotherapy as Medical Technology Tomotherapy is a cancer therapy modality that directs radiation doses directly to tumor sites intending to minimize exposure to healthy tissue. During operation, the surgical team performs a 3D CT scan to image the cancerous sites, transmitting data wirelessly to the tomotherapy device which orchestrates the delivery of pulsed radiation, typically in the X-ray band of the electromagnetic spectrum. A multi-leaf collimator acts in unison with the pulsing stage to permit or block radiation beams based on the imaging data. The overall effect is to provide precise, personalized treatment to the patient. A medical equipment company sought to develop a new tomotherapy device that pushed the envelope of pulsed radiation control and localization.
In the early phases of the engineering design cycle, they needed to prototype and refine a mixed I/O system capable of microsecond-level pulse control. They also needed to validate the performance of this innovative medical device relating back to the overall effectiveness of treatment and patient recovery outcomes. Engineering Challenge The customer faced a complex real-time control challenge. They needed to coordinate the intensity-modulated radiation therapy (IMRT) pulser delivering the X-ray energy with positional feedback control based on CT imagery and the surgeon’s touch. These system requirements translated to microsecond-level synchronization across multiple parallel control loops managing: Pulsed radiation power Stepper motor positioning Other system components These requirements exceeded the capabilities of standard programmable automation controllers (PACs), while developing custom circuitry would have consumed significant schedule time and budget resources. They evaluated using a system-on-chip (SoC), but integrating the electromechanical components of the system would have been a challenge, and they did not have the in-house FPGA development expertise. The development team needed a solution that could bridge these gaps to provide the high-performance control capabilities of FPGAs or custom hardware while keeping the project on track. Closed feedback loop running up to 20MHz capable of 400μs beam pulses Pulse processing: 40 kHz pulse rate handling from pulsing subsystem Automation control framework capable of triggering and analog readback digitization 3-axis stepper motor control Cyth Solution Control System Design: After refining the project requirements, the Cyth engineering team designed a control system capable of delivering the high-speed I/O and programmable control required for microsecond-level precision.
The core hardware architecture included: NI Single-Board RIO (sbRIO-9606) containing a Xilinx FPGA and a CPU capable of 400MHz real-time processing CircaFlex mezzanine board with signal conditioning and connectivity to analog, digital, and stepper motor I/O Communication interfaces to other system components Single-Board RIO, CircaFlex, and LabVIEW simplify prototyping and enable rapid iteration during development phases. The FPGA on the sbRIO, programmed in LabVIEW, enabled the primary control loop to run up to 20MHz, while the CircaFlex extended the sbRIO’s I/O capabilities through high-accuracy analog readback from the beam pulser and other system components. To achieve the required positional accuracy, the solution digitized and analyzed a high-speed pulse train used for triggering capability and feedback control for three stepper motors that direct frequency, power, and focus parameters. CircaFlex provides I/O and breakout connectivity for embedded system design Software Development: Built on the LabVIEW system design platform, the automated frequency controller (AFC) was extensible from the start. Working first to prototype the controller, the Cyth team used CircaFlex to quickly interface with various system I/O and leveraged their experience with automation frameworks to refine the algorithm, closing the multivariable feedback loop on the FPGA. Software features included: Control paradigm defined in software and compiled to the sbRIO’s onboard FPGA Hardware-triggered safety interlocks Real-time system monitoring and user interface Diagnostic capabilities for system bring-up and calibration NI RIO architecture, linking versatile I/O, FPGA processing, and real-time compute. Mechanicals and System Integration: Once the system components and architecture were sufficiently validated, the Cyth team designed and built an enclosure with connectors for the various I/O in the system.
As such, they were able to deliver a functional box that abstracted away the complexities of the controller so that the product team could continue integrating and validating the AFC into their end product. Automation Frequency Controller (AFC) integrated into connectorized enclosure It should be noted that the flexible nature of the LabVIEW-based architecture supported further software design iterations all the way down to the FPGA should the need arise based on system validation in subsequent engineering phases. System Delivery and Bring-Up Design Validation and Deployment: Following a 10-week design and build period, the Cyth team successfully delivered a working system during a 2-day on-site visit focused on downstream system integration and usability. The Cyth team continued to support bring-up of the final product, including documentation, operator training, and calibration.

  • National Instruments NI-TClk Technology for Timing and Synchronization of Modular Instruments

    Overview Many test and measurement applications call for the timing and synchronization of multiple instruments because of the limited number of stimulus/response channels on a single instrument and/or the need for mixed-signal stimulus/response channels. For example, an oscilloscope may have up to four channels and a signal generator up to two channels. Applications ranging from mixed-signal test in the electronic industry to laser spectroscopy in the sciences require timing and synchronization (T&S) for higher-count channels and/or the need to correlate digital input and output channels with analog input and output channels. Modular Instruments Timing and Synchronization in Applications In the electronics industry, mixed-signal test is an important aspect of testing devices and system on chip (SOC) technology. With the convergence of audio, video, and data in consumer electronics and communications, the need for test of such technology from baseband to RF requires precise T&S. Mixed-signal devices, in essence, contain multiple digital and analog channels. These channels are often tested at the same time in an ATE system to minimize test time and increase throughput. Additionally, the analog channels are tested with coherent sampling instrumentation systems. Coherent sampling systems call for the synchronization of heterogeneous clocks in analog-to-digital conversion (ADC) and digital-to-analog conversion (DAC) test. This synchronization is highly desired to minimize spectral leakage in frequency-domain measurements [1]. The following LabVIEW graph illustrates the effect of noncoherent sampling vs coherent sampling. The white trace corresponds to noncoherent clocking whereby a fractional number of cycles of the analog sine wave are captured. The spectrum leakage in the FFT leads to "skirts" in the spectrum. With the same sampling rate, a coherent sampling system leads to the red trace. 
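The contrast between the two traces can be reproduced numerically. Coherent sampling means fin = (M/N)·fs, i.e. an integer number M of sine cycles fits exactly in the N-sample record (with M and N coprime so every sample lands on a distinct phase). The NumPy sketch below, with illustrative frequencies, shows a coherent tone concentrating in a single FFT bin while a nearby noncoherent tone leaks into neighboring bins:

```python
import numpy as np

fs = 100e6               # 100 MS/s sample clock (illustrative)
N = 1024                 # record length
M = 37                   # cycles captured; coprime with N
f_coherent = M / N * fs  # ~3.613 MHz: exactly M cycles in N samples
f_noncoherent = 3.6e6    # nearby frequency: fractional cycle count

t = np.arange(N) / fs
leakage = {}
for name, f in (("coherent", f_coherent), ("noncoherent", f_noncoherent)):
    spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * f * t)))
    # Ratio of the second-largest bin to the peak: near zero when all
    # the energy sits in one bin, large when leakage "skirts" appear.
    leakage[name] = np.sort(spectrum)[-2] / spectrum.max()
```

With these numbers the coherent tone's next-largest bin is at the level of floating-point noise, while the noncoherent tone's adjacent bin carries well over a tenth of the peak energy, which is exactly the "skirt" the white trace exhibits.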
One of the key benefits of coherent sampling is the reduction of test times because of shorter signal acquisition times. Shorter acquisition times are attained by not having to capture extra signal cycles to which to apply a digital window for elimination of the spectral leakage. In principle, an ATE system designed to meet the flexible needs of the wide variety of devices in the marketplace should deliver instrumentation with clocks that are different but derived from a master reference clock for coherent sampling. Additionally, this system should be able to deliver arbitrary clock frequencies derived from a master reference clock. In communications, analog and digital baseband I/Q signal generation and acquisition require phase offset accuracy and control [2]. A digital pattern generator/analyzer is synchronized with an arbitrary waveform generator and a digitizer to address digital and analog I/Q signal generation and acquisition. The accuracy and control requirements of phase and gain offset between each channel can be as low as 0.003% and 0.1%, respectively, for signals with bandwidths that approach 5 MHz in 3G W-CDMA schemes, for example. In future 4G communication schemes, such as multiple-input multiple-output (MIMO) antenna systems, requirements for multiple-channel baseband, IF, and RF signal generation and acquisition with tight synchronization will be critical. Digital beam forming, an emerging technology, is playing into multiple applications from 4G MIMO communications to radar applications in defense and aerospace industries. Digital beam forming requires multichannel phase-coherent digitizing systems with digital downconversion engines. In the semiconductor industry, functional digital test can consume up to 1000 digital pins. Typical integrated circuits (ICs) on the market can take somewhat less than 200 pins of digital I/O. 
In such an application, multiple digital pattern generators and analyzers are synchronized with the requisite pin-to-pin skew and jitter to address high-pin-count ICs. In consumer electronics, component digital video signal generation and acquisition may require up to five distinct signals: the three primary video signals, H-Sync, and V-Sync. With T&S, arbitrary waveform generators and digitizers can be synchronized to generate and acquire high-definition video signals respectively, with pixel rates that can approach 165 MHz. CMOS imaging sensors, a technology expected to become mainstream with the prevalence of camera phones and digital cameras, is an example of mixed-signal technology whereby an arbitrary waveform generator, digitizer, and digital pattern analyzer are synchronized for design validation and verification of the chip or chip set. In the physical sciences, high-channel-count digitizing systems are used in plasma-fusion, laser-scattering experiments, and photon/particle detection and tracking in particle and astrophysics. In these examples, high-channel-count digitizing systems are used for 2D or 3D reconstruction of temporal and spatial phenomena. Such applications call for simultaneous sampling of multiple channels, extending from a few channels to over several hundred. In medical diagnostic systems, 3D digital-imaging systems are fast replacing analog systems since the advent of cost-effective 12 and 14-bit 50 MHz ADCs. Such systems typically scale from a hundred to over a thousand channels. In nondestructive test, 3D ultrasonic imaging is realized with multichannel systems that include 50 MHz digitizers. Optical coherence tomography (OCT), a relatively new imaging method compared to ultrasonic imaging, can require several digitizer channels as well to interface to multiple photodiodes for coherent sampling. 
The National Instruments Platforms for Modular Instrumentation Today, National Instruments hardware platforms for modular instrumentation are PXI [3] and PCI. Both platforms are modular in nature and use the PCI bus to interface between the PC and the instrument. Introduced in 1997, PXI is an open standard with many vendors who offer a wide range of PXI modules from image acquisition to RF vector signal analyzers. PXI has been gaining rapid acceptance because of its relatively small package, portability, high throughput using the PCI bus, and lower costs, which are made possible through use of standard commercial technologies spawned by the large PC industry. Electrically, PXI extends the CompactPCI standard by adding local buses and synchronization features. For synchronized measurements, key elements built into PXI are the reference clock, the trigger bus, and the star trigger bus [3]. Building Blocks for Synchronization To achieve synchronization across multiple devices, you need to examine the distribution of clocks and triggers. There are two main schemes for synchronization, but before we examine the schemes we need to define the following terms. Sample Clock, Reference Clock, Triggers, and Master and Slave Devices Sample clock is a signal that controls the timing of the analog-to-digital and digital-to-analog conversions performed by the ADC and DAC on digitizers and signal generators respectively. The sample clock is also the signal that controls the rate at which digital waveforms are acquired or generated on digital pattern generators/analyzers. The sample clock is most often a periodic signal, derived from a crystal oscillator on the device. Various crystal oscillator technologies include voltage-controlled crystal oscillators (VCXOs), temperature-controlled crystal oscillators (TCXOs), and oven-controlled crystal oscillators (OCXOs). Reference clock – Many instruments contain phase-lock loops (PLLs). 
A PLL can lock the frequency of its output to a reference clock at its input. In instruments, a common frequency is 10 MHz, although many instruments allow a variety of reference-clock frequencies. The output of the PLL is typically the sample clock. Using a PLL, the sample clock frequency can be locked to the reference clock frequency. Therefore the absolute frequency accuracy of the sample clock will be identical to the frequency accuracy of the reference clock. Triggers – Trigger signals control data acquisition at the highest level. External events or triggers are the main methodologies for initiating an acquisition and generation. Triggers come in different forms, including analog, digital, and software. Master and Slave Devices – When creating synchronized measurement systems, you typically designate one device as a master and one or more other devices as slaves. The master device is the device that generates a signal or signals used to control all the measurement devices in the system. The slave devices receive control signals from the master device. The goal of synchronization is to generate and/or receive analog and digital signals precisely among multiple hardware devices. One class of T&S is referred to as homogeneous timing and synchronization – two identical devices with identical settings generating and/or acquiring signals with a precise phase relationship between each sample clock, starting at the same instant in time. The following example illustrates homogeneous synchronization: Two digitizers acquire data at 200 MS/s with a precise phase relationship between each sample clock, triggered at the same instant in time, with identical vertical gain settings, AC/DC coupling settings, input impedance settings, DC offset settings, and analog filter settings. An important observation from the previous example is the relevance of many settings to homogeneous synchronization.
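Because the PLL multiplies the reference, a derived sample clock inherits the reference clock's fractional frequency accuracy exactly. The sketch below illustrates the arithmetic; the divider values are illustrative, not those of any particular NI instrument:

```python
# Hypothetical PLL frequency planning: the sample clock equals the
# reference multiplied by an integer feedback divider n and divided by
# an output divider r, so its accuracy tracks the reference one-to-one.
f_ref = 10e6  # 10 MHz shared reference clock

def pll_output(f_ref, n, r):
    return f_ref * n / r

f_digitizer = pll_output(f_ref, n=20, r=1)  # 200 MS/s sample clock
f_awg = pll_output(f_ref, n=10, r=1)        # 100 MS/s sample clock

# A 1 ppm error on the reference maps to the same 1 ppm on each
# derived sample clock:
err = 1e-6
f_actual = pll_output(f_ref * (1 + err), n=20, r=1)
```

This is why a single high-quality reference (e.g. an OCXO) can set the absolute accuracy of every instrument in the system at once.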
The delays of gain stages and analog filters on the front end of a digitizer lead to a time delay from the front end connector to the ADC, for example. Heterogeneous synchronization can imply many different scenarios. The following examples illustrate heterogeneous synchronization: Two digitizers acquire data at 200 and 100 MS/s respectively with a precise phase relationship between each sample clock, triggered at the same time, with identical vertical gain settings, AC/DC coupling settings, input impedance settings, DC offset settings, and analog filter settings. An arbitrary waveform generator and digitizer sampling at 100 MS/s with a precise phase relationship between each sample clock and with a set time delay in start of operation, upon reception of an incoming trigger signal. A digitizer, digital pattern generator/analyzer, and arbitrary waveform generator sampling at 50, 200, and 100 MS/s, respectively with a precise phase relationship between each sample clock, and with a defined time delay in start of operation, upon reception of an incoming trigger signal. The preceding examples clearly show that heterogeneous T&S implies a wide range of possibilities to address the application needs. Separate settings on each device can lead to delays of data/signals being sampled at the same instant in time. The key is calibration of the synchronized system, which will be discussed later in this paper. Synchronization Scheme 1 – Synchronization with a Sample Clock The master device can control operation of the measurement system by exporting both trigger signals and a sample clock to the slave devices. For example, a system composed of multiple digitizers and signal generators has a common sample clock from an appointed master device. As illustrated in Figure 3, the master sample clock directly controls ADC and DAC timing on all devices.
For example, National Instruments dynamic signal analyzers such as the NI 4472 and NI 4461 (24-bit 104 kS/s and 208 kS/s respectively) are synchronized using this technique for applications in sound and vibration measurements. This scheme is the purest form of phase-coherent sampling; multiple devices are fed the same sample clock. Thus the same accuracy, drift, and jitter of the sample clock are seen by every device. The disadvantage of this scheme is that it does not address all possible phase-coherent heterogeneous clocking needs. Synchronization Scheme 2 – Synchronization with a Reference Clock Synchronization can be implemented by sharing triggers and reference clocks between multiple measurement devices. In this scheme, the reference clock can be supplied by the master device if it has an onboard reference clock, or the reference clock can be supplied by a dedicated high-precision clock source. The advantage of this scheme is that you can derive heterogeneous sample clocks from a single reference clock to which all the sample clocks are phase-locked. The trade-off is that the phase-coherent sampling on each device is not as pure as the direct sample clock approach, because each device clock enters the picture, so device clock jitter must be considered. The method usually employed with this scheme to synchronize and generate sampling clocks is a PLL. Left: Synchronization with a Reference Clock, Right: High-speed sample clocks are synchronized using a PLL. Issues with Synchronization Distributing clocks and triggers to achieve high-speed synchronized devices is beset by nontrivial issues. Latencies and timing uncertainties involved in orchestrating multiple-measurement devices are significant challenges in synchronization, especially for high-speed measurement systems. These issues, often overlooked during the initial system design, limit the speed and accuracy of synchronized systems.
Two main issues that arise in the distribution of clocks and triggers are skew and jitter. Sample Clock Synchronization Mixed-signal test by its nature requires different sampling rates on each instrument, because analog waveform I/O and digital waveform I/O necessitate different sampling rates. But they need to be synchronized, and more importantly data needs to be sampled on the correct sample clock edge on each instrument. When sample clocks on disparate instruments are integer multiples of the 10 MHz reference clock, all instruments will have sample clocks that are synchronous to each other – the rising edge of all sample clocks will be coincident with the 10 MHz clock edge. When sample clocks are not integer multiples, such as 25 MHz, there is no guarantee that the sample clocks are in phase, despite being phase-locked to the 10 MHz reference clock, as shown in Figure 6. Standard techniques are used to solve this problem by resetting all of the PLLs at the same time, leading to sample clocks of the same frequency being in phase, as shown in Figure 7. Even though all sample clocks are in phase at this point, the solution is still not complete. Perfect synchronization implies that data clocked from device to device corresponds to within a single sample clock cycle. The key to perfect synchronization is triggering, which will be discussed later. Left: 25 MHz Sample Clocks Not Aligned; Right: PLL Synchronization with Reset. Clock Skew and Jitter The distribution of the sample clock or the reference clock requires careful planning. For example, a synchronized measurement system calls for simultaneous sampling of 20 channels at 200 MS/s. This requirement implies distributing a clock to 10 two-channel digitizers. For a sample clock skew budget of 0.5% of the 5 ns sample period, the skew must be under 25 ps. Such a system certainly looks very challenging. Fortunately, skew limitations can be dealt with by calibrating the skew to each measurement device; you can compensate for the skew in the sampled data.
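The alignment behavior described above can be checked arithmetically: after a common PLL reset, edges of a sample clock and the reference coincide at the rate given by the greatest common divisor of the two frequencies. A short illustrative sketch:

```python
from math import gcd

# After a simultaneous PLL reset all clocks start edge-aligned; a sample
# clock edge then coincides with a reference edge only at the gcd of the
# two frequencies.
def alignment_period_ns(f_sample_hz, f_ref_hz):
    f_align = gcd(f_sample_hz, f_ref_hz)  # common edge rate in Hz
    return 1e9 / f_align

# 200 MHz is an integer multiple of 10 MHz: every reference edge
# (100 ns apart) lines up with a sample clock edge.
p200 = alignment_period_ns(200_000_000, 10_000_000)

# 25 MHz is not an integer multiple: edges coincide only every 200 ns
# (every other reference edge), so without a common reset the phase
# relationship between two 25 MHz sample clocks is ambiguous.
p25 = alignment_period_ns(25_000_000, 10_000_000)
```

The ambiguity for the 25 MHz case is exactly why resetting all PLLs at the same instant is needed to force same-frequency sample clocks into phase.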
The real issue is the clock frequency. Distributing either a 200 MHz direct sample clock or a 10 MHz reference clock introduces jitter into the system. The physical properties of the distribution system play a significant role in the accuracy of the distributed clocks; if the clock paths are susceptible to high-frequency electrical noise, clock jitter becomes a significant problem. Producing a platform for distributing high-frequency sample clocks becomes expensive in terms of manufacture, test, and calibration. Thus synchronization through lower-frequency reference clocks is the preferred method in many high-frequency systems. Figure 8 shows a typical VCXO PLL implemented on National Instruments SMC-based modular instruments. The loop bandwidth is kept at a minimum to reject the jitter coming from the reference clock, while the VCXO on the device has jitter of less than 1 ps rms. Such a system effectively realizes a low-jitter synchronized system. A very useful property of the National Instruments PLL design is the use of a phase DAC. Using a phase DAC, you can phase-align the output of the VCXO with respect to the incoming reference clock. Nominally the VCXO output is in phase with the reference clock; however, you may need to skew the VCXO output slightly to place the output out of phase by a small margin. This feature is important for aligning sample clocks on multiple devices when the reference clock fed to each device has a small skew due to propagation delays. For example, in the NI PXI-1042 PXI chassis, the distribution of the 10 MHz reference clock has a slot-to-slot skew of 250 ps maximum with a maximum of 1 ps rms jitter. A slot-to-slot skew of 250 ps, while satisfactory for most applications, may not be adequate for very high-speed applications where phase accuracy is important. To overcome this skew, the phase DAC outputs can be adjusted to calibrate for the skew.
On the NI PXI-5422 200 MS/s arbitrary waveform generator and the NI PXI-5124 200 MS/s digitizer, the sample clock phase/delay adjustment resolution is 5 ps, giving the user significant flexibility in synchronizing multiple devices. Trigger Skew and Distribution With sample clock synchronization addressed, the other main issue is the distribution of the trigger to initiate simultaneous operation. The trigger can come from a digital event or from an analog signal that meets trigger conditions. Typically in multichannel systems, one of the devices is made the master and the rest are designated as slaves. In this scenario, the master is programmed to distribute the trigger signal to all slaves in the system, including itself. Two issues that arise here are trigger delay and skew. A trigger delay from the master to all the slaves and skew between each slave device are inevitable, but both can be measured and calibrated. Addressing the delay and skew is a two-part process: Automate the measurement of the trigger delay between the master and each slave and compensate for it. Ensure that the skew between slaves is small enough that the trigger is seen on the same clock edge on all devices. The distribution of the trigger signal across multiple devices requires passing the trigger signal into the clock domain of the sample clock such that the trigger is seen at the right instant in time on each device. With sample clock rates above 100 MS/s, skew becomes a major obstacle to accurate trigger distribution. A system consisting of ten 200 MS/s devices, for example, requires the trigger to be received at each device within a 5 ns window. This places a significant burden on the platform for delivering timing and synchronization (T&S) signals beyond 100 MHz. The trigger signals must be sent in a slower clock domain than that of the sample clock, or you must create a nonbused means of sending the trigger signal (such as a point-to-point connection).
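The trigger-window arithmetic follows directly from the sample period; a minimal sketch using the ten-digitizer example from the text:

```python
def trigger_window_ns(sample_rate_hz: float) -> float:
    """One sample period in nanoseconds: the window within which the
    trigger must reach every device to be seen on the same clock edge."""
    return 1e9 / sample_rate_hz

# Ten 200 MS/s devices: the trigger must arrive everywhere within 5 ns,
# which is why triggers are instead carried in a slower clock domain.
assert trigger_window_ns(200e6) == 5.0
print(f"trigger window at 200 MS/s: {trigger_window_ns(200e6)} ns")
```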
The costs of such a platform become prohibitive for mainstream use. Another distribution channel is needed; the trigger signal needs to be distributed reliably using a slow clock domain and transferred to the high-speed sample clock domain. A logical choice is to synchronize the trigger signal distribution with the 10 MHz reference clock. However, this cannot ensure that two boards will see the trigger assertion in the same sample clock cycle when the sample clocks are not integer multiples of the 10 MHz reference clock. To illustrate this point, assume two devices have the simple circuit [4] shown in Figure 9 for trigger transfer from the 10 MHz reference clock domain to the sample clock domain. Left: 10 MHz Reference Clock Domain to Sample Clock Domain Trigger Transfer, Right: Effect of Metastability on Triggers SMC Modular Instrumentation and NI-TClk In 2003, NI introduced the first series of PXI digitizers, arbitrary waveform generators, and digital pattern generators/analyzers based on the Synchronization and Memory Core (SMC) foundation [5]. One of the key technologies implemented on the SMC was NI-TClk (pronounced T-Clock) technology for T&S applications. NI-TClk NI has developed a method for synchronization whereby an additional clock domain is employed to enable alignment of sample clocks and the distribution and reception of triggers. The objectives of NI-TClk technology are twofold: NI-TClk aligns the sample clocks that may not necessarily be aligned initially despite being phase-locked to the 10 MHz reference clock. NI-TClk enables accurate triggering of synchronized devices. NI-TClk synchronization is flexible and wide ranging; it can address the following use cases: Extension of synchronization from a single PXI chassis to several PXI chassis to address large channel-count systems using the NI PXI-6653 Slot 2 system timing and control module.
Homogeneous and heterogeneous synchronization – devices running at the same or different sample rates, using internal or external sample clocks. NI-TClk synchronization can be used with both Schemes 1 and 2, as described previously. Illustration of multichassis synchronization that uses the NI PXI-6653 system timing and control module, whereby the 10 MHz reference clock and triggers are distributed from a master chassis to all slave chassis, with NI MXI-4 controlling all slave chassis. The purpose of NI-TClk synchronization is to have devices respond to triggers at the same time. The "same time" means on the same sample period, with very tight alignment of the sample clocks. NI-TClk synchronization is accomplished by having each device generate a trigger clock (TClk) that is derived from the sample clock. Triggers are synchronized to a TClk pulse. A device that receives a trigger from an external source or generates it internally sends the signal to all devices, including itself, on a falling edge of TClk. All devices react to the trigger on the following rising edge of TClk. The TClk frequency is much lower than the sample clock and the PXI 10 MHz reference clock to accommodate the NI PXI-1045 18-slot chassis, where the propagation delay from Slot 1 to Slot 18 may extend to several nanoseconds. If the application calls for multiple chassis, where the propagation delay can be higher than the normal interchassis delay, you can set the TClk frequency accordingly. The issue of "instantaneous" data acquisition also arises: if a trigger condition is met and 10 digitizers are required to be triggered, latency results from the synchronization of the trigger to TClk. This issue is addressed with pretrigger and posttrigger samples on the device sample memory buffer. All NI-TClk supported devices are programmed to accommodate the overhead time that arises from synchronization of the trigger to the TClk.
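Because the trigger is resynchronized to the slower TClk, the worst-case latency is simply the ratio of the sample rate to the TClk rate; a quick sketch of that arithmetic:

```python
def max_tclk_latency_samples(sample_rate_hz: float, tclk_hz: float) -> int:
    """Worst-case trigger latency, in sample periods, from waiting for
    the next TClk edge after a trigger event."""
    return int(sample_rate_hz // tclk_hz)

# A 200 MS/s sample clock with a 5 MHz TClk can delay acquisition by
# up to 40 samples; NI-TClk devices pad the memory buffer to absorb this.
assert max_tclk_latency_samples(200e6, 5e6) == 40
```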
For example, 10 digitizers are programmed to acquire 10,000 samples simultaneously. The sample rate is 200 MS/s (sample period of 5 ns), from which the derived TClk frequency is programmed to be 5 MHz (period of 200 ns). This implies that the delay in acquisition resulting from TClk synchronization of the trigger could be as high as 40 samples. NI-TClk supported devices are programmed to automatically pad the memory buffer for the lag between the trigger event and the start of acquisition, and the NI-TClk driver software automatically adjusts the timestamps on all digitizers to reflect the start of acquisition with respect to the trigger event. Overview of NI-TClk Operation with an Internal (PXI) Reference Clock or User-Supplied Reference Clock The devices are synchronized in the following manner. Refer to Figure 12 for the timing diagram that illustrates sample clock alignment and Figure 13 for trigger distribution and reception. Each device is programmed with a sample clock rate and set to receive the TClk trigger. NI-TClk software automatically calculates the TClk frequency based on the sample clocks and number of devices involved, and TClks are generated on each device, derived from the sample clocks of the devices. The PXI 10 MHz reference clock (in PCI, the onboard reference clock of one of the devices is used) is distributed to all devices to phase-lock the sample clocks on all devices. Each device sample clock is phase-locked to the 10 MHz reference clock but is not necessarily in phase with the others at this stage. A common clock signal called the Sync Pulse Clock, whose frequency is similar to the reference clock frequency, is distributed through the PXI trigger bus (over the RTSI bus for PCI boards) to all devices. Here the 10 MHz reference clock plays the role of the Sync Pulse Clock in addition to being the reference clock.
A Sync Pulse is generated from one of the devices while the Sync Pulse Clock (10 MHz reference clock) is logically high and is sent through the PXI trigger bus (over the RTSI bus for PCI boards). Each device is initiated to look for the first rising edge of the Sync Pulse Clock upon receiving the Sync Pulse. When the first rising edge of the Sync Pulse Clock is detected, each device is programmed to measure the time between this edge and the first rising edge of the device TClk. The time between these two edges is measured on all devices. The TClk measurements of all devices are compared to one reference TClk measurement (the NI-TClk driver software automatically selects one of the devices), and all device sample clocks and TClks are aligned automatically by adjusting the phase DAC outputs on all devices. With the sample clocks on all devices aligned, the trigger signal is distributed from the appointed master to all other devices through the TClk. The trigger signal is emitted with the falling edge of the master device TClk, and all devices are programmed to initiate generation or acquisition with the next rising edge of TClk. This signal is also distributed through the PXI trigger bus (over the RTSI bus for PCI boards). See Figure 13. Two properties of NI-TClk synchronization are critical to the success of the method: The distribution of the Sync Pulse is critical; the Sync Pulse must arrive at each device such that each device looks for the same rising edge of the Sync Pulse Clock in making the TClk measurement. The skew cannot exceed the period of the Sync Pulse Clock, an issue easily resolved with the Sync Pulse Clock period being 100 ns. NI-TClk synchronization can therefore easily extend from synchronization within one chassis to several chassis, as the standard delay of 50 Ω cable is on the order of 2 ns per foot. The accuracy of the sample clock alignment is as good as the skew of the Sync Pulse Clock (reference clock).
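The alignment step above can be pictured as a simple calculation: each device reports the time from the Sync Pulse Clock edge to its next TClk edge, and every device is shifted so the measurements match a chosen reference. The values below are hypothetical, and the step is only a model of the phase-DAC adjustment, not the NI driver interface:

```python
# Sketch of the TClk alignment step: each device measures the time from
# the first Sync Pulse Clock rising edge to its next TClk rising edge,
# and every device is then shifted (via its phase DAC) so the
# measurements match a chosen reference device.

def tclk_phase_corrections(measured_ns, ref_index=0):
    """Phase shifts (ns) that bring every device's TClk, and hence its
    sample clock, into alignment with the reference device."""
    ref = measured_ns[ref_index]
    return [ref - t for t in measured_ns]

# Three devices whose TClk edges trail the sync edge by 40.0, 40.5, and
# 39.5 ns: devices 1 and 2 are shifted by -0.5 and +0.5 ns respectively.
print(tclk_phase_corrections([40.0, 40.5, 39.5]))  # [0.0, -0.5, 0.5]
```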
Looking at Figure 12, you can see that the reference clock received on the two devices is skewed. The TClk measurements on both devices assume that the Sync Pulse Clock is aligned on both devices; the difference between the two TClk measurements is used to shift the sample clocks to align them. As will be seen in the following section, two levels of performance can be achieved with current technology: out-of-the-box performance and calibrated performance. Left: Timing Diagram of Using TClk to Align Sample Clocks, Right: Timing Diagram of Trigger Distribution Using NI-TClk Overview of NI-TClk Operation with a User-Supplied External Sample Clock In this scheme, NI-TClk synchronization will not align the sample clock on each device, because you are externally supplying the sample clock, bypassing the PLL circuitry. NI-TClk synchronization guarantees the start/stop trigger distribution such that each device starts and stops acquisition/generation on the same sample clock edge. NI-TClk does this using the same method as above: a TClk derived from the sample clock is used to distribute the trigger signals. Here, the burden of accurate sample clock alignment is placed on the sample clock you supply. To ensure the best performance, supply a low-jitter sample clock (on the order of 1 ps rms or less) for sample rates above 100 MS/s, with equal-length cables from the clock source to each device in the system. Refer to Figure 13 for an illustration of trigger distribution and reception. Each device is programmed to receive the TClk trigger and the external sample clock. NI-TClk automatically calculates the TClk frequency based on the sample clocks and number of devices involved. Then, TClk signals are generated on each device, derived from the device sample clock.
The trigger signal is distributed from the appointed master to all other devices using NI-TClk; the trigger signal is emitted with the falling edge of the master TClk, and all devices are programmed to initiate generation or acquisition with the next rising edge of TClk. This signal is also distributed through the PXI trigger bus (over the RTSI bus for PCI boards). Refer to Figure 13 for an illustration. Performance of NI-TClk Technology Out-of-the-Box Performance Robust synchronization of multiple devices can be achieved by simply inserting the devices into the PXI chassis and running the devices with NI-TClk software (refer to Figure 14 for an illustration). The key software components consist of three VIs/functions that require no parameters to be set. NI-TClk synchronization can deliver synchronized devices with skews of up to 1 ns between each device in an NI PXI-1042 chassis. The typical skews observed range from 200 to 500 ps. The channel-to-channel jitter between devices depends on the intrinsic system jitter of the device. For example, the NI PXI-5421 100 MS/s 16-bit AWG has a total system jitter of 2 ps rms. NI-TClk synchronized PXI-5421 devices exhibit typical channel-to-channel jitter of under 5 ps rms. With the NI PXI-5122 100 MS/s 14-bit digitizer, the channel-to-channel jitter is typically under 10 ps rms. Left: Out-of-the-Box Performance of NI-TClk Synchronization of Two 100 MS/s Digitizers, Right: Channel-to-Channel Jitter Measurements of NI-TClk Synchronized PXI-5421 Arbitrary Waveform Generators The LabVIEW front panel in Figure 15 shows a measurement of the skew between two NI PXI-5122 devices in an NI PXI-1042 chassis. The skew is approximately 523 ps in this measurement setup. Each digitizer is set to sample the same 5 MHz square waveform at 100 MS/s. The signal is split and fed into each digitizer with equal-length cables. The channel-to-channel jitter is approximately 6 ps rms.
The statistics are compiled from 49,998 zero crossings of the square waveform. The Gaussian distribution of the histogram indicates that the jitter stems from random noise rather than from deterministic noise sources in the system. Figure 16 shows a measurement of the channel-to-channel jitter of two NI-TClk synchronized PXI-5421 arbitrary waveform generators. Each device was programmed to generate a 10 MHz square waveform at 100 MS/s. The measurement was performed on a Tektronix high-performance jitter measurement Communications Signal Analyzer (CSA) 8200 platform with the 80E04 TDR module. The histogram data in Figure 16 reflects a channel-to-channel jitter of under 3 ps rms. The median of the reported histogram is not the skew between the channels; it is the delay from the trigger at the zero crossing of one square waveform to the next rising edge of the measured square waveform (i.e., one channel is used to trigger the measurement of the zero crossing of the second channel). The measurements are compiled in a histogram that reflects the channel-to-channel jitter. Calibrated NI-TClk Synchronization As mentioned previously, typical skews range from 200 ps to 500 ps. This skew may not be satisfactory for applications where the phase accuracy between channels requires a higher level of performance. In this case, manual calibration is required. Manual calibration can lower skews to less than 30 ps between devices. In Figure 17, a LabVIEW front panel illustrates the skew between the NI PXI-5122 100 MS/s digitizer and the NI PXI-5124 200 MS/s digitizer. The skew was found to be on the order of 15 ps with channel-to-channel jitter of 12 ps rms. The statistics are compiled from 10,000 zero crossings of the square waveform.
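Skew and jitter statistics of this kind can be reproduced from zero-crossing timestamps: the mean per-crossing delay between channels is the skew, and its standard deviation is the rms jitter. A minimal sketch with hypothetical crossing times (not the measured data above):

```python
import statistics

def skew_and_jitter(crossings_a, crossings_b):
    """Per-crossing delay between two channels: the mean is the
    channel-to-channel skew, the standard deviation is the rms jitter."""
    deltas = [b - a for a, b in zip(crossings_a, crossings_b)]
    return statistics.mean(deltas), statistics.pstdev(deltas)

# Hypothetical zero-crossing times in ns; channel B lags A by ~0.5 ns.
a = [0.0, 100.0, 200.0, 300.0]
b = [0.5, 100.6, 200.4, 300.5]
skew_ns, jitter_ns = skew_and_jitter(a, b)
print(f"skew = {skew_ns:.2f} ns, jitter = {jitter_ns * 1e3:.0f} ps rms")
```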
Left: Calibrated NI-TClk Synchronization of Two Digitizers – NI PXI-5122 at 100 MS/s and an NI PXI-5124 at 200 MS/s – Typical Skew on the Order of 15 ps with Channel-to-Channel Jitter of 12 ps rms, Right: Magnified View of the Falling Edge of a 10 MHz Square Waveform from Manually Calibrated NI-TClk Synchronized NI PXI-5421 Arbitrary Waveform Generators – Skew on the Order of 20 ps. The previous figure shows a measurement of the skew between two manually calibrated NI-TClk synchronized PXI-5421 arbitrary waveform generators using the CSA 8200. Notice that the skew is on the order of 20 ps. The waveform generated from the two devices is a 10 MHz square waveform. Manual calibration involves adjusting the sample clock on each device with respect to the others using the phase-adjustment DACs in the PLL circuitry (refer to Figure 8). In synchronizing two arbitrary waveform generators, for example, the synchronized outputs can be viewed on a high-speed oscilloscope and the sample clock on one AWG can be moved relative to the other using the phase-adjustment DAC. Through this manual process, the skew between multiple arbitrary waveform generators can be reduced from hundreds of picoseconds to under 30 ps. In synchronizing two digitizers, a low-phase-noise signal is fed into each digitizer with equal-length cables. The skew can be measured in software, and the sample clock of one digitizer can be adjusted relative to the other to minimize the skew. The same methods are used in synchronizing digital waveform generator/analyzers. The sample clock adjustment can be achieved with high resolution. On the 100 MS/s devices, such as the PXI-5122, PXI-5421, and PXI-6552, the sample clock delay adjustment resolution is 10 ps and can be adjusted over ±1 sample clock period of 10 ns. On the 200 MS/s devices, such as the PXI-5422 and the PXI-5124, the adjustment resolution is 5 ps and can be adjusted over ±1 sample clock period of 5 ns.
Thus, the skew between devices can be manually calibrated with high accuracy.
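The manual calibration described above amounts to converting a measured skew into phase-DAC steps at the device's adjustment resolution, clamped to its ±1 sample period range. A simplified sketch (the step model is an assumption for illustration, not the NI driver interface):

```python
def phase_dac_steps(skew_ps: float, resolution_ps: float, range_ps: float) -> int:
    """Phase-DAC steps that cancel a measured skew, clamped to the
    device's ±1 sample period adjustment range."""
    correction = max(-range_ps, min(range_ps, -skew_ps))
    return round(correction / resolution_ps)

# 100 MS/s device: 10 ps resolution, ±10 ns (10,000 ps) range.
# A measured +523 ps skew is cancelled by about -52 steps.
assert phase_dac_steps(523, 10, 10_000) == -52
# 200 MS/s device: 5 ps resolution, ±5 ns range.
assert phase_dac_steps(523, 5, 5_000) == -105
```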

  • Real-Time Defects Mapping on Integrated Circuits Using NI PXI & LabVIEW

An example of a PCBA that requires specific fault testing Project Summary Creating a system to localize failure mechanisms causing abnormal electrical behavior, including those linked to complex parameters (such as frequencies, amplitudes, and digital values contained in registers), in integrated circuits (ICs). Solution & Results Improving a conventional fault-mapping system using NI PXI hardware and the NI LabVIEW FPGA Module. Industry Electronics, Manufacturing Technology at-a-glance NI PXI-1036 NI PXI-8102 controller NI PXI-7852R LabVIEW FPGA Module Help with Finding Faults Fault localization is complex due to decreased individual pattern sizes, increased metallization levels, and decreased voltage supplies. We needed to localize a defect measuring less than a few micrometers in a component of several square millimeters. There are several ways to do this, including using global fault isolation methods. One method uses a laser to scan an IC surface while measuring current or voltage variations induced by the laser’s photoelectric or thermal effects. With the thermal laser (λ ≈ 1.3 μm), the beam locally heats the component to change its electrical behavior. An analog system monitors some parameters (currents or voltages) during scanning. Dedicated software running on a PC then creates a map representing the circuit’s heat sensitivity. Faults are generally localized by comparing the map obtained for a reference circuit with the one resulting from a faulty circuit. We used a Hamamatsu Phemos 1000 that can create maps with 1,024 x 1,024 pixel resolution. Left: Mapping Acquisition Using a Laser-Scanning Phemos 1000 Microscope, Right: Software Developed Using LabVIEW FPGA Conventional Method Limitations With the standard Optical Beam Induced Resistive-Change (OBIRCH) laser thermal stimulation method, we can only measure voltage or current changes under local heating.
We extended this method by mapping complex variables such as frequencies, amplitudes, and digital values stored in registers. Hardware System Setup We developed and validated our solution by analyzing a failure in a component that manages cell phone energy (battery power and voltage regulation) and conversions (audio, radio frequency, and supervision). This circuit contains an A/D converter (ADC) to measure various currents and voltages during phone operation. On failing components, conversion results shifted by several least-significant bits. We used an NI PXI-1036 chassis equipped with an NI PXI-8102 controller and an NI PXI-7852R field-programmable gate array (FPGA) module. This NI system is inserted between the device interface board and the fault isolation equipment (Phemos 1000). This assembly handles component startup and ADC control. It initiates conversions and collects the results via the serial peripheral interface (SPI) bus. It performs scale conversion and transmits data to the fault localization equipment. The laser scans the chip in 72 seconds to build an image of 1024 x 1024 pixels. Each point must be acquired and processed in less than 65 μs (the pixel clock period). We chose NI hardware because it fully met our requirements. The NI products are low cost, fast enough to process each pixel in less than 65 μs, and programmable with the LabVIEW FPGA Module. Software System Setup We created an autonomous system without requiring expertise in complex programming languages. We used LabVIEW FPGA to program the system because it provides the developer with all the needed layers: drivers, APIs, function libraries, graphical interfaces, and compilation and synthesis chains. We downloaded and customized a free SPI controller from IPNet. This block can communicate with various SPI peripherals. We simplified it by removing unnecessary options and created a cell optimized for our needs.
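The per-pixel timing budget follows directly from the scan parameters above; a quick check using the numbers from the text:

```python
# The laser builds a 1024 x 1024 map in 72 s, so the per-pixel FPGA
# work (SPI-triggered conversion, scaling, analog output) must fit
# within the stated 65 us pixel clock period.

SCAN_TIME_S = 72.0
PIXELS = 1024 * 1024

mean_pixel_period_us = SCAN_TIME_S / PIXELS * 1e6
print(f"mean pixel period: {mean_pixel_period_us:.1f} us")

# The mean period (~68.7 us) slightly exceeds the 65 us processing
# budget, leaving margin for scan overhead between pixels.
assert mean_pixel_period_us > 65.0
```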
We initiated A/D conversions in the FPGA algorithm, retrieved the results, performed scaling, and exported the data to the fault isolation equipment (Hamamatsu Phemos 1000). During map construction, the Phemos 1000 is autonomous; it controls the scanning laser, makes voltage and current measurements, and builds laser-excitation sensitivity maps. An external signal can be monitored using an analog input of the equipment. We connected one of the PXI-7852R module’s analog outputs to this input. The Phemos 1000 and PXI chassis can operate asynchronously or synchronously. We validated both methods. The asynchronous method is simple to implement, but the pixel processing must complete in less than 65 μs. The synchronous mode is more complex but allows a longer processing time. In our tests, processing was fast enough to use the asynchronous mode. Original Authors: Sébastien CANY, ST-ERICSSON Edited by Cyth Systems

  • Double Decker Hybrid Powertrain Monitored Using Circaflex Embedded Controls

Vantage Power's Hybrid Double Decker Bus featuring Cyth’s Hybrid Management System (HMS). The Challenge Retrofitting the drive trains of double-decker buses to increase the fuel efficiency of London’s public transportation. The Solution Using Cyth’s embedded control system Circaflex paired with the NI RIO SOM, we designed a communication and monitoring system to improve hybrid buses’ regenerative braking and efficiency. The Cyth Story Vantage Power’s diesel hybrid drive train technology was retrofitted onto the existing double-decker buses of London’s public transportation fleet. Cyth’s embedded control platform, Circaflex, provides scalability for our customer’s I/O requirements as the hybrid monitoring system (HMS) communicates with the diesel and hybrid motor controllers and the vehicle’s brake system. Packaged into our HMS was the NI RIO SOM, which provides highly deterministic and safety-critical functions. Inputs such as the driver's gas and brake pedals and the balance between diesel engine power and hybrid engine power are processed in high-speed repeating calculations. The Hybrid Monitoring System (HMS) is programmed in LabVIEW software to take advantage of a real-time processor and a user-programmable FPGA. Left: The HMS’s rugged weather-proof and vibration-proof enclosure. Center: The Circaflex provides scalable I/O for the HMS’s communication and monitoring requirements. Right: Input and output connectors located on the enclosure exterior, for example, 4 CAN bus connectors for high-speed data communication. The system architecture of the Vantage Power hybrid powertrain and Cyth HMS system. Delivering the Outcome Overall, Vantage Power’s double-decker hybrid powertrain system incorporated Circaflex and NI sbRIO SOM boards for 40% greater fuel efficiency across all vehicles retrofitted in the London fleet. Our Circaflex HMS provides the communication and monitoring capabilities required of a hybrid drive train.
The large number of subcomponents working together in the hybrid power train – the hybrid battery and motor, diesel engine, regenerative braking system, generator, driver inputs, and more – shows the value of a control system with scalable I/O as well as high-speed data acquisition. The NI hardware and software platform enables a deterministic system for the repetitive calculations of the bus's critical functions, further improving the functionality and passenger safety of London’s hybrid fleet. Technical Specifications 1 x Circaflex 315 1 x Mezzanine Board 1 x NI sbRIO-9651 SOM 1 x Custom Weather-proof Enclosure Circaflex Modules 1 x Inertial Measurement Module 1 x GPS Module 1 x 16 ch 24V Industrial Digital Input Module (Sinking & Sourcing) 1 x 16 ch 24V Industrial Digital Output Module (Sinking & Sourcing) 1 x 8 ch Analog Voltage Input Module, 100 kS/s, 16-bit 1 x 8 ch Analog Current Input Module, 100 kS/s, 16-bit 1 x 8 ch Analog Voltage Output Module, 100 kS/s, 16-bit 1 x Strain Gauge 1 x K-Type Thermocouple I/O Connectors 4 x CANbus 2 x Input DOS 1 x Industrial Digital Input 2 x Analog Input

  • Embedded Monitoring System for Measuring Natural Gas Emissions

Remote natural gas emission sensing instrument. The Challenge Developing a remote sensing instrument for real-time detection and quantification of fugitive natural gas emissions that must also adapt to evolving customer requirements driven by emerging industry regulations. The Solution Using the timing and synchronization capabilities of the NI PXI platform, the integrated high-throughput I/O of a FlexRIO digitizer, and a LabVIEW-programmable FPGA to create the signal-processing embedded system in a sophisticated differential absorption lidar product. Introduction The significant growth in the production, usage, and commercialization of natural gas is placing unprecedented demands on the nation’s pipeline system. The Pipeline and Hazardous Materials Safety Administration (PHMSA) develops and enforces regulations for the safe operation of the nation’s 2.6 million mile pipeline transportation system (U.S. Department of Transportation, 2016). Through PHMSA programs, serious pipeline incidents have decreased by 39 percent since 2009, according to the Department of Transportation (DoT). Recent incidents such as the 2010 San Bruno, California pipeline explosion and the 2015 Aliso Canyon gas leak are only two of more than 250 serious pipeline incidents since 2009. Left: Real-world methane plumes discovered by Methane Monitor, Right: Spectral features of the most common atmospheric gases (above), with methane shown on an expanded scale (below). Natural gas consists primarily of methane. Methane is the second most prevalent greenhouse gas emitted in the United States and accounted for about 11 percent of all US greenhouse gas emissions from human activities in 2014. Methane is emitted naturally and by human activities such as leakage from natural gas systems. The US Environmental Protection Agency says that the comparative impact of methane on climate change is more than 25 times greater than that of carbon dioxide over a 100-year period.
Continued natural gas pipeline incidents and leaks, the associated impacts, and oil and gas industry regulations drive the need to promptly detect, classify, and resolve fugitive methane emissions. Under funding from the PHMSA and Ball Aerospace, Ball used more than 50 years of remote sensing expertise to develop a system called Methane Monitor. Methane Monitor identifies methane emissions on the ground from a fixed-wing aircraft. Unlike existing methods of aerial leak survey, Methane Monitor operates from a single-engine, fixed-wing aircraft at lower cost than sensors mounted on helicopters. It images the full plume of methane gas as a more precise method of monitoring leaks, it can notify facility operators immediately of large emission sources, and it provides full reports within hours of the end of the flight. Delivering these advantages placed large demands on high-throughput signal acquisition, synchronization, and processing. Lidar Background In light detection and ranging (lidar) systems, a laser source emits a pulse of light. The pulse interacts with targets such as the ground or structures. Some of these interactions result in backscattered photons, which are collected and recorded as a function of time. This time-of-flight data directly corresponds with the range at which the scattering occurred, allowing generation of a 3D model of the illuminated topology. DIAL Background Lidar range measurements are inherently part of differential absorption lidar (DIAL) measurements. DIAL operates at two laser wavelengths: one on-resonance and one off-resonance of a molecule of interest. Since the on-resonance wavelength is more strongly absorbed by the molecule, the difference between the two signals correlates to the amount of the molecule in the laser’s path. Thus, DIAL systems can measure the range and quantity of target molecules in the atmosphere (U.S. Department of Commerce, 2016).
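The on/off-resonance comparison can be written as the textbook single-path DIAL relation. The sketch below uses hypothetical cross-section and return-power values for illustration; it is not the Methane Monitor algorithm:

```python
import math

def dial_number_density(p_on: float, p_off: float,
                        delta_sigma_cm2: float, range_m: float) -> float:
    """Path-averaged molecule number density (cm^-3) from the standard
    DIAL relation n = ln(P_off / P_on) / (2 * delta_sigma * R), where
    delta_sigma is the on/off differential absorption cross section and
    R is the one-way range (the factor 2 accounts for the round trip)."""
    range_cm = range_m * 100.0
    return math.log(p_off / p_on) / (2.0 * delta_sigma_cm2 * range_cm)

# Hypothetical: over a 500 m path, the on-resonance return is 5% weaker
# than the off-resonance return for a 1e-20 cm^2 differential cross section.
n = dial_number_density(p_on=math.exp(-0.05), p_off=1.0,
                        delta_sigma_cm2=1e-20, range_m=500.0)
print(f"number density: {n:.3e} cm^-3")
```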
Challenges DIAL systems look at sharp absorption lines in the spectrum, and Methane Monitor targets the methane molecule (CH4). We designed Methane Monitor to distinguish methane’s resonance features from those of other molecules that might confuse the measurement. These measurements require a signal-to-noise ratio approximately 500 times better than what’s needed to establish range alone. Methane Monitor system hardware. The environment imposes challenges because return signals are subject to changes in ground reflectivity. Imperfections in the laser impose challenges because the pulse energy and wavelengths of the two pulses vary independently across firings. Hence, Methane Monitor calibrates every measurement for background reflectivity and normalizes the received energy to the transmitted energy. Methane Monitor also measures a calibrated methane sample before each target measurement. We can use the calibrated methane measurement to correct shot-to-shot instabilities in laser wavelength by reverse calculating the absorption constant. Methane Monitor performs the background, reference, and receive measurements each time the laser fires. The on-resonance and off-resonance pulses are separated non-deterministically by a few hundred nanoseconds. The range depends on the customer’s survey objectives and the aircraft’s altitude and is generally 500 m to 1 km above ground level (AGL). Timing and Synchronization Methane Monitor’s timing and synchronization centers on the PXI-6683H module, which supplies a GPS-disciplined system reference clock to the laser and PXI embedded systems. The system reference clock is available to all PXI Express peripherals. The PXIe-6341 X Series DAQ uses reference clock synchronization to synchronize analog commands and telemetry. A PXIe-7965R FlexRIO FPGA module runs the custom digitizer and DIAL algorithms. The FPGA block diagram is synchronized to the system reference clock out of the box.
The PXI-6683H also generates asynchronous counter-reset signals for the FPGA through PXI trigger lines. Counter values are packaged with each measurement and are used to verify, geo-locate, and interpolate the measurements against data obtained from a position and orientation system (POS) and steering mirror controller.

Custom Triggering
Pulses from each serialized signal are acquired precisely around the peak A/D converter count using level-triggered circular buffers. The serialization, custom triggering, and custom acquisition reduce the data throughput. Timestamps are assigned to each peak for the lidar range measurement.

DIAL Analysis
The FPGA performs several quality checks on the data. For example, it verifies that ground pulses were received, and it sets various flags based on pulse parameters. The FPGA reshapes each pulse to correct deterministic electrical effects. It executes Methane Monitor's methane concentration algorithm every time the laser fires and streams telemetry to a LabVIEW application running on a PXIe-8135 controller. The LabVIEW application provides the operator with an instantaneous view of the captured pulses, measurements, performance, system health, and more. It also delivers the final data product to Ball Aerospace's lidar visualization software, which overlays the range and concentration measurements on the context camera image. All data is logged to an NI 8260 1.2 TB PXI SSD. We used DIAdem software to post-process Methane Monitor's data for quality assurance and continuous improvement.

Benefits and Impacts
Over 100 hours of flight time have been logged, and the methane detection threshold has been determined as a function of wind speed. We have detected methane flow rates as low as 50 standard cubic feet/hour (SCFH). We can configure Methane Monitor's sensing swath width up to 200 meters wide. The system has a spatial resolution and geo-location accuracy of better than 2 meters each.
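The lidar range measurement assigns a timestamp to each received peak; converting a transmit/return timestamp pair to range is a plain time-of-flight calculation with a factor of two for the round trip. A minimal sketch (hypothetical names, not the FPGA implementation):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_timestamps(t_transmit_s, t_return_s):
    """Convert transmit/return peak timestamps (seconds) to one-way range in meters.

    The division by 2 accounts for the round trip of the backscattered pulse.
    """
    return C * (t_return_s - t_transmit_s) / 2.0
```

For an aircraft at roughly 1 km above ground level, the round-trip delay is on the order of 6.7 microseconds, consistent with the altitudes quoted above.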
Methane measurements are color-coded and superimposed on co-boresighted context images to provide a real-time view of methane emissions to the operator. Original Authors: Steve Karcher, Ball Aerospace & Technologies Edited by Cyth Systems

  • PXI Instrumentation for Partial Discharge Monitoring of Hydro Generators

    Using PXI hardware for a monitoring & analysis system of hydroelectric power plant generators. The Challenge Cepel needed to implement an online partial discharge (PD) monitoring and analysis system for hydroelectric power plant generators to aid the predictive diagnosis of stator electric insulation. The Solution Cepel developed a modular instrumentation system (IMA-DP) for online monitoring of PD in hydro generators using PXI hardware instrumentation from NI and signal processing algorithms implemented with LabVIEW. The company has adopted the system as an effective predictive maintenance tool. Introduction The Brazilian Electric Energy Research Center (Cepel), considered the largest electric energy research center in the Southern Hemisphere, has engaged in research and development for more than 40 years. Our efforts are focused on generation, transmission, and distribution of electric power. The Department of Transmission Lines and Electrical Equipment supports the maintenance engineering teams of various companies, trains professionals, and develops new technologies for predictive diagnosis and equipment prognosis. Generators are among the most prominent pieces of electrical system equipment, and continuous monitoring of them is recommended. In the 1990s, we began developing the Asset Oriented Monitoring System (SOMA) for online monitoring of mechanical, thermal, and operational parameters in rotary machines. We adopted the PXI platform due to the ruggedness, flexibility, and modularity requirements of the DAQ hardware. However, to keep up with other state-of-the-art equipment, there was still a demand for monitoring the stator insulation of generators. Left: PXI used to measure partial discharge of hydroelectric generators. Center: LabVIEW User Interface, Right: Measured Signals Electrical insulation has always been the weak spot of any high-voltage equipment.
In addition to failures in maintenance and possible contingent events, the equipment's own normal duty cycle causes insulation materials subject to vibration and thermal cycling to age and lose their dielectric properties. It is therefore necessary to assess the health of the insulation in such equipment with a view to the continuity of power supply and the reduction of failures. Partial discharge (PD) monitoring is an effective way to assess insulation integrity in high-voltage electrical equipment. In relation to other techniques traditionally recommended for monitoring insulation in rotary machines, PD measurement presents the highest sensitivity, permits the localization of defects, and is the only technique that can monitor generators online. Theoretically, PD measurement could have detected an estimated 89 percent of the insulation failures that occurred. To meet the demand for online monitoring of stator insulation in rotary machines, we needed to implement an effective online PD monitoring system that was also compatible with the previously adopted generator monitoring hardware platform, using the PXI instrumentation standard and taking advantage of its flexibility and modularity. Measurement Features PDs are localized dielectric breakdowns of a small portion of a solid or fluid electrical insulation system under high-voltage stress, which do not bridge the space between two conductors. They occur in regions of gaseous inclusions that represent imperfections in the dielectric (Figure 1). A PD can indicate defects that may evolve into insulation failures with serious consequences. PD signals are pulses with frequency components that can reach hundreds of MHz, stochastic by nature, with variable amplitude, and heavily immersed in noise (Figure 2). The acquired pulses must have their amplitudes recorded and be correlated with the phase of the voltage cycle.
The products of the measurement are two-dimensional histograms, in which the repetition rate of the pulses is grouped as a function of their amplitudes and the phase angle (Figure 3). Block Diagram of a Digital PD Measurement System IMA-DP Digital PD measurement systems comprise digital signal processing (DSP) units implemented in an FPGA and in conventional processors, with some additional signal conditioning circuits and A/D converters (Figure 4). We developed the PD Analysis and Instrumentation System (IMA-DP) to measure PD in the HF band (<30 MHz) according to the technical standards IEC 60034-27 and IEEE Standard 1434-2014. We could develop the system proposed here due to the availability of suitable modular hardware components in NI's PXI platform. The key to development was selecting the NI PXIe-5122 digitizer, which features a 20 V dynamic range, 14-bit resolution, 100 MHz sampling rate, and 40 MHz of bandwidth. The module also includes circuits for impedance matching, selectable anti-aliasing analog filters, amplifiers, and attenuators. We added a PXIe-2593 NI Switch module (Figure 4a) to the input of the PXIe-5122 (Figure 4b-e), which can expand the system for sequential measurement of up to 16 channels, significantly reducing monitoring costs per channel. The PXIe-5122 digitizer sends the acquired raw data to the FPGA of the PXIe-7965R module (Figure 4f) for heavy real-time processing. Data transfer uses the PXI Express bus over a peer-to-peer connection. Final processing and consolidation of results happen in the PXIe-8135 embedded controller (Figure 4g). Figure 5 shows the complete measurement path. Figure 6 shows the built hardware and process flow diagram. Original Authors: André Tomaz de Carvalho, Eletrobras Cepel Edited by Cyth Systems
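The phase-resolved histograms described above, grouping pulse counts by amplitude and voltage-cycle phase, can be sketched in a few lines. This is an illustrative example, not IMA-DP code; it assumes each detected pulse has already been reduced to a (phase, amplitude) pair, and the bin counts are arbitrary choices.

```python
import numpy as np

def prpd_histogram(phases_deg, amplitudes, n_phase_bins=360, n_amp_bins=64, amp_max=1.0):
    """Accumulate a phase-resolved partial discharge (PRPD) pattern.

    Each detected pulse contributes one count at its (voltage-cycle phase,
    amplitude) coordinate. Returns the 2D count array plus the bin edges.
    """
    hist, phase_edges, amp_edges = np.histogram2d(
        np.asarray(phases_deg) % 360.0,   # wrap phase into [0, 360)
        np.asarray(amplitudes),
        bins=[n_phase_bins, n_amp_bins],
        range=[[0.0, 360.0], [0.0, amp_max]],
    )
    return hist, phase_edges, amp_edges
```

Plotted as an image, such a histogram is the classic PRPD pattern used to classify discharge sources.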

  • Bullet Train Test using LabVIEW & TestStand Software

    Bullet Train Test. The Challenge High-speed and commuter trains are extremely complex systems with thousands of components. The validation process is time-consuming, highly complex, and expensive. Siemens Mobility Rail Solutions sought to develop a cost-efficient way to test high-speed and commuter trains. The Solution Siemens used NI hardware, TestStand software, VeriStand software, and the LabVIEW FPGA Module to build a fully functional digital twin of a whole train, including a functional simulation of all the wiring. Train control, traction, and brake controllers are integrated as real devices, but the system is easily extendable to any other controller. Introduction In 1879, Werner von Siemens invented the first electric locomotive. Today, Siemens Mobility Rail Solutions focuses on rail infrastructure and rolling stock. As a part of the rolling stock segment, we develop solutions for high-speed and commuter rail customers and are a global player in 60 countries around the world. The rail and transportation industry has grown since the industrial revolution, with many regional differences. Each train is a customer-specific solution that must be integrated into an operator-specific environment. For regulated homologation processes, many requirements must be validated and approved. The train itself is a rare and expensive asset, so we need solutions that address the validation and homologation requirements with a more cost-efficient test environment. There have been different train simulation solutions in the past, but they usually implemented only the behavior of the interfaces to test a single dedicated controller. This application realizes the complete logic behind the interfaces and creates a realistic behavior of the train, including its whole electrical construction. Compared to earlier solutions with fixed, dedicated test environments, this application is a highly scalable solution.
An iron train is a physical, mocked-up train that emulates all the inputs and outputs of a real train. We could run the digital twin of a train on a laptop as well as at the level of an iron train, which makes this application the first of its kind. We designed it for use with the many different variations of trains requested by our customers. This strategic investment is the backbone of all future Siemens Mobility projects in the high-speed and commuter rail market. New Train Control Architecture In 2011, the German rail company Deutsche Bahn AG ordered 170 high-speed trains, with an option for up to 300 trains, for more than $7.4 billion from Siemens Mobility. These trains, known as ICx, are completely different from conventional designs, reflecting the changing demands of our markets. We based the architecture on a power car concept and a flexible train configuration. Left: Live Feed from Train Driver's Controls Front Panel, Right: Variant Schema of the Main Configurations. This concept provides high flexibility in terms of train length and passenger count. We invented a new car-based train control architecture: each single car of the train has an encapsulated train control system. Every new architecture introduces changes such as new communication bus technology, new automation systems, and new interfaces. To address the changes and the corresponding risks of new technology, we need adequate quality assurance. The Application Using PXI real-time devices, NI EtherCAT chassis, VeriStand, and a variety of I/O modules, we created a test bench that simulates the complete train functionality of 40 different subsystems. We took advantage of the NI product portfolio for a complete set of I/O interfaces, a stable run time, and tooling setup, which is difficult in many industrial applications. There was no other solution better suited to our various interfaces and requirements in terms of modularity. The main feature of our solution is the functional simulation.
The core of this functional simulation is the representation of the electrical schema of the train in the simulation environment, which can be visualized and manipulated in real time. We import this electrical schema into VeriStand as an electro-mechanical logic model, which, combined with models that represent the functional logic of the physics and complex elements, like simulated controllers (for example, a door controller), enables us to build a complete digital twin of a train. Overall, we created a model library with 350 intelligent elements, like controller models, and 150 electrical models, like connectors, switches, and relays. Out of this library, a VeriStand project generator automatically creates a complex hardware-in-the-loop (HIL) system with more than 58,000 simulated components. Vigorous testing is performed on high-speed rail in Germany before it is deployed to passengers. Finally, the test bench consists of 16 test racks: 12 cars of the train and four additional racks with test bench functionality such as fault injection. Each car test rack includes a complete train control system, communication buses, two brake controllers, and, in the power car types, the traction controller. We could integrate real components like this because the high reliability and determinism of well-integrated FPGA technology helped us simulate all necessary I/O interfaces (for example, 192 speed sensors). Each single rack is like a complete HIL environment. We realized the simulation hardware with 12 PXI systems and 42 NI-9144 EtherCAT chassis. We use PXI-6683H timing and synchronization modules to synchronize the PXI systems. We also use several different analog and digital CompactRIO modules and PXIe-2727 programmable resistor modules as interfaces to the train controller devices. Our test bench implements CAN bus, PROFINET bus, and other TCP-based protocols.
We could automatically generate nearly 70 percent of the model logic and VeriStand mapping from engineering and construction data sets. We imported the electrical schema data set out of an ECAD system. We generated the bus interfaces from digital interface specifications of the individual communication participants. This leads to a 100X cost reduction compared to an iron train. Original Authors: Matthias Reinholdt, Siemens AG Mobility Division Edited by Cyth Systems
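At the heart of the electro-mechanical logic model described above is a simple question: given the current state of every switch and relay contact, which circuit nodes are energized? A toy sketch of that idea follows. The data model and names are hypothetical and vastly simpler than the VeriStand implementation, but they show the graph-reachability core of such a wiring simulation.

```python
from collections import defaultdict, deque

def energized_nodes(edges, closed, supply="+24V"):
    """Return the set of circuit nodes connected to the supply via closed contacts.

    edges  : {contact_name: (node_a, node_b)} -- wiring of switch/relay contacts
    closed : set of contact names that are currently conducting
    """
    # Build an undirected graph containing only the conducting contacts.
    graph = defaultdict(list)
    for name, (a, b) in edges.items():
        if name in closed:
            graph[a].append(b)
            graph[b].append(a)
    # Breadth-first search outward from the supply node.
    seen, queue = {supply}, deque([supply])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

Re-evaluating this reachability each simulation step, and feeding the result back into relay coil models that open or close further contacts, is the basic loop behind simulating a train's wiring.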

  • Probing of Large-Array, Fine-Pitch Microbumps for 3D ICs

    Fully Automatic Test System to Evaluate Microbump Probing The Challenge Performing die tests prior to stacking 3D ICs to achieve sufficient compound stack yield by probing the interconnect microbumps for pre-bond test access. The Solution Building a unique, fully automatic system to characterize prototype probe cards for large-array, fine-pitch microbumps for 3D ICs on our advanced test wafers using NI PXI instruments and the Semiconductor Test System (STS). 3D-Stacked ICs to Conquer the World The research on 3D stacked IC (3D-SIC) technology has advanced to the point that virtually all semiconductor companies have now released or announced 3D-SIC products, or are developing such products in stealth mode. In 3D-SIC packages, multiple chip dies are stacked vertically, which results in a dense integration, possibly involving heterogeneous technologies, in an ultra-small footprint with considerable benefits for performance, power, and cost. One challenge in stacking ICs is to retain a high compound yield and not include faulty dies. This requires testing the dies before stacking them, for example, through the interconnect microbumps. But engineers have long considered it impossible to probe these microbumps because the arrays are too large (≥1,000) and the pitches too small (≤40 µm). We developed a solution: a fully automated system to characterize prototype probe cards for large-array, fine-pitch microbumps on advanced test wafers using the Semiconductor Test System (STS) from NI. State-of-the-art microbumps have the following specifications (see Figure 2): Landing bump: Cu, diameter 25 µm Top bump: Cu/Ni/Sn, diameter 15 µm Imec is the world-leading R&D center for nano-electronics and digital technology, headquartered near Leuven, Belgium, and with 3,500 researchers. We use state-of-the-art infrastructure, including our 200 mm and 300 mm wafer fabs, to perform research for a multitude of industries, including eight of the top 10 semiconductor companies.
Our research program on 3D system integration is an imec Industrial Affiliation Program in which our own staff work alongside engineers from our industrial partners, key suppliers, and leading academic partners toward radical innovation and pre-competitive development. State-of-the-Art Microbumps, Cu/Ni/Sn Imec has contributed to the field of 3D-SICs for over a decade through research into: Through-silicon vias (TSVs) that allow making electrical connections to a silicon substrate’s back side Dense microbump interconnects between stacked dies Wafer thinning, bonding and debonding Various (die-to-die, die-to-wafer, and wafer-to-wafer) stacking approaches We have also studied architecture, design, manufacturing, test, reliability, and thermal aspects of 3D-SICs through simulations and actual measurements on numerous test chips. Challenges in Probing 3D-SIC Microbumps Due to its many high-precision steps, semiconductor manufacturing is prone to defects. Therefore, every IC needs to undergo electrical tests to weed out defective parts and guarantee product quality. This is also true for 3D-SICs, which typically contain complex die designs in advanced technology nodes, and therefore need to be tested through today’s most advanced test and design-for-test approaches. In addition, a number of test challenges are unique to the 3D-SIC stacking process itself. One of these is testing dies prior to stacking, which is essential to obtain acceptable compound stack yields and not lose good dies in a stack with one faulty die. The non-bottom dies of the stacks have their functional access exclusively through large arrays of fine-pitch microbumps, which are too dense for conventional probe technology. A common approach to obtain pre-bond test access is to equip these dies with dedicated pre-bond probe pads [1][2][3]. Unfortunately, this approach comes with drawbacks such as an increased silicon area and test application time, and a reduced interconnect performance. 
To avoid the many drawbacks associated with dedicated pre-bond probe pads, imec and key partners set out to enable probing directly on the microbumps, a task previously thought impossible. Demonstrating Feasibility of Microbump Probing With the Help of NI To address these challenges, we teamed up with leading probe card supplier Cascade Microtech (Oregon, USA), who provided us with prototypes of their advanced Pyramid® Rocking Beam Interposer (RBI) probe cards (see Figure 4a). These probe cards contain an IC-design-specific probe core which includes a thin film with MEMS-type probe tips (see Figure 4b). Cascade's high-density probe cores support >1,200 core I/Os, which is sufficient for WIO1. The RBI probe tips require less than 1 gf/tip to make proper electrical contact. The heel of the tip makes physical contact with the wafer (see Figure 4c), such that the probe mark is typically only 6 µm × 1 µm (see Figure 7). To prove the feasibility of microbump probing with these probe cards, we built a unique, fully automatic test system (see Figure 5) consisting of (1) a dual CM300 probe station from Cascade Microtech (Germany), (2) a hard-docking STS test head with PXI test instruments from NI, (3) a test head manipulator from Reid-Ashman (Utah, USA), and (4) test program and data analysis software based on LabVIEW and developed at imec. The NI STS test head is a T2 model that contains two PXI racks with test instruments. Rack 1 holds instruments for parametric and functional tests. Rack 2 is dedicated to microbump probing. It contains a PXI-4072 digital multimeter (DMM) connected to an ultra-wide switch matrix (SMX1–9) consisting of nine concatenated PXIe-2535 modules of 136 output columns each. This allows us to connect each of the four channels of the DMM under software control to any of the 9×136=1,224 SMX output columns.
Figure 6 shows that the system supports two-point and four-point (Kelvin) resistance measurements between any pair of probe tips (for daisy chains) as well as between a single probe tip and all other probe tips ganged (for characterization of that single probe tip when all probe tips are shorted through the probed wafer). Results and Conclusion On 300 mm test wafers (which we designed and manufactured in-house and contain microbumps with various metallurgies, pitches, diameters, and sizes), we successfully demonstrated the following: All WIO1 probe tips do land on the corresponding microbumps (see Figure 7). The actual contact resistance between probe tip and microbump is Rc ~ 0.1 Ohm. However, the resistance of the trace through the thin film membrane on the probe core is often included in the measurement: Rc ~ 5 Ohm (see Figure 8). Probe marks on Cu are small, while probe marks on Sn are larger, but can be removed through reflow. We demonstrated experimentally that there were no stacking interconnect yield differences between all four cases of bottom/top microbumps probed/not-probed [5]. Through cost modeling, we demonstrated that, for single-site testing, the Pyramid® Probe cards, although expensive, financially outperform pre-bond testing through dedicated probe pads [5]. Original Authors: Ferenc Fodor, IMEC Edited by Cyth Systems
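Routing the DMM to any of the 9×136=1,224 columns requires mapping a global column index onto a specific PXIe-2535 module and its local column. The bookkeeping can be sketched as below; this is illustrative only (the actual routing is done through NI-SWITCH session calls, and the 1-based indexing convention is an assumption).

```python
N_MODULES = 9
COLS_PER_MODULE = 136  # output columns per PXIe-2535 module

def locate_column(global_col):
    """Map a 1-based global switch-matrix column (1..1224) to (module, local column)."""
    if not 1 <= global_col <= N_MODULES * COLS_PER_MODULE:
        raise ValueError("column out of range")
    # divmod on the zero-based index splits it into module and local offsets.
    module, local = divmod(global_col - 1, COLS_PER_MODULE)
    return module + 1, local + 1
```

For example, global column 137 is the first column of the second module, and column 1,224 is the last column of the ninth.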

  • Hyundai Improves Production Test Time using PXI, LabVIEW, and TestStand

    Hyundai Kefico ECU Functional Test Fixture The Challenge We needed to sustainably meet manufacturing test deadlines for increasingly complex powertrain electronic control units (ECUs) with over 200 pins and 20,000 test steps, while ensuring test times comply with throughput needs and the cost of test is reduced to remain competitive in the market. The Solution Using the PXI, LabVIEW, and TestStand platforms to build a standard architecture, we achieved flexible test system configurations for all powertrain ECU types and reusable test scripts and procedures that guarantee test coverage alignment from R&D to manufacturing while allowing global, standard test deployment and operation. Introduction Automotive technology is accelerating faster than ever before. Trends like powertrain electrification, wide adoption of advanced safety systems, and enhanced driving and comfort functionalities significantly increase the amount of software needed. As a result, electronic control units (ECUs) are more complex and in higher demand. One of the most important of these is the powertrain ECU. Beyond ensuring proper operation of the powertrain that moves the vehicle, these ECUs impact the environmental performance of the vehicle, its economy, and the driving experience, which are factors buyers seriously consider. Hyundai Kefico, a subsidiary company of Hyundai, has provided powertrain automotive electronics since 1972. Like other automotive suppliers that want to remain competitive on the market, our engineers at Hyundai Kefico faced increased test demands and tighter emission regulations while also managing budget and timeline challenges. When our powertrain ECUs reached 200 pins and the functional test needed to ensure quality stretched to 20,000 test steps for an increased variety of ECU types, it became clear that we could not use traditional test engineering approaches to keep up with the pace of vehicle electronics. We needed a change.
The NI PXI hardware platform can test complex ECUs with upwards of 200 pins. A New Approach In the past, an ECU functional tester required that we design sensor/actuator emulators, vehicle communication modules, test execution engines and applications, test procedures, and test result management tools for each type of ECU. In other words, we developed a new tester for every new ECU, with minimal reuse of test engineering assets and a negative impact on the cost of test. To solve this problem, we started with the development process and created the Common Platform Tester (CP-Tester) and the standardized ECU Functional Tester development process (Figure 1). We based the CP-Tester on standardized test assets called CP-Standard, which define sensor/actuator emulation, vehicle communication, test execution (test engine), operator interface (test application), and test result management. System Success The CP-Tester has a few key components that streamline the test development process. R&D or product engineers can use a test scripting modeling tool called CP-Editor to configure each test step and parameter by choosing from over 200 prebuilt functions to develop test sequences. They can map these test steps to the appropriate hardware I/O and reconfigure them for different ECU types. The CP-Server is another component that engineers can use to effectively manage test result data to improve upon new test requirements. Our engineers realize three benefits from the CP-Tester: shorter tester development times because of its adaptability to various types of powertrain ECUs; efficient use of test engineering assets because it can reuse and reconfigure test steps from R&D to manufacturing; and more value from manufacturing test data due to data handling and traceability in a standard format. We chose the NI PXI platform because it is better suited to deal with the complexity of our powertrain ECUs.
Benefits of NI PXI solutions include:
High and flexible channel counts (over 200 pins) with different layouts
I/O configuration with source and measurement capabilities
Ability to connect dummy loads (resistance and inductance) to properly test ECUs
Wide variety of switching options and ease of use with NI-SWITCH to increase I/O flexibility
Ability to customize I/O through FPGA to implement special sensor communication protocols such as SENT (Single Edge Nibble Transmission, SAE J2716)
Most turnkey ECU testers on the market require 10–12 months to adopt new test plans for new products, and they still require significant interaction with the vendors and high costs. Given the importance of a short development time, we took advantage of NI's automated test solutions to become independent and develop our own flexible standard tester within three months. This resulted in an 80 percent reduction in development time, while giving us the ability to add functionality like CAN with flexible data rate (CAN FD) in the future as product requirements evolve. At the company level, given the higher demands for ECUs, the NI PXI timing and synchronization features improved our test time by 15 percent and cut the test system cost by 30 percent, which has helped us be more competitive in the market. In addition, we can procure and assemble the CP-Tester at any of our manufacturing sites around the globe thanks to NI's global presence. For the first 17 CP-Testers, we achieved a 45 percent better project ROI and saved over $1M compared to our previous solution. Original Authors: Minsuk Ko, Hyundai Kefico Edited by Cyth Systems
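The CP-Editor/CP-Standard approach described above is essentially data-driven testing: each step names one of the prebuilt functions and carries its parameters and pass/fail limits, and a common engine dispatches them. A minimal sketch of such an engine (hypothetical names and data model, not Hyundai Kefico's implementation):

```python
def run_sequence(steps, functions):
    """Execute data-driven test steps and check each measurement against limits.

    steps     : list of dicts like {"func": "measure_voltage",
                "params": {...}, "low": 4.75, "high": 5.25}
    functions : registry mapping step function names to callables
                (standing in for the "prebuilt functions")
    Returns a list of (step index, measured value, passed) tuples.
    """
    results = []
    for i, step in enumerate(steps):
        # Dispatch by name so the same engine serves every ECU type;
        # only the step data changes between products.
        value = functions[step["func"]](**step.get("params", {}))
        passed = step["low"] <= value <= step["high"]
        results.append((i, value, passed))
    return results
```

Because the steps are plain data, the same sequences can move from R&D to manufacturing unchanged, which is the reuse benefit the article describes.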

  • Control System for Hotbar Bonder in X-ray Sensors using LabVIEW & CompactRIO

    Custom soldering fixture. The Challenge A manufacturer of custom soldering fixtures approached us with the need for a system to control independently moving hot bars used to solder LCD screens. The Solution Using the NI CompactRIO and LabVIEW software platforms, we designed and built a control system capable of user-programmable motion in 8 axes alongside PID loops for temperature and force to provide our customer with the complex controls required for machine functionality. The Cyth Process The Hotbar Bonder is a fixture for the custom soldering of LCD screens. Cyth's engineering team designed and installed the fixture's controls cabinet, featuring a CompactRIO 8-slot chassis and a 4-slot expansion chassis. The fixture's two hotbar soldering irons move independently and are controlled using LabVIEW stepper motor control architectures. The hotbars solder 200+ pins simultaneously, using programmatically applied PID loops for heat and force measurements. Left: NI cRIO-9038 8-slot chassis featured in the Hotbar fixture control system, Right: NI 9512 C Series Motor Drive Interface Module Order of Operations An operator loads the LCD bed onto a trolley that moves along a fixed track. A camera instructs the operator as to the LCD bed's alignment. The operator inputs the soldering instructions and recipe into a LabVIEW user interface, which auto-calculates the fixture's actions required to solder the LCD's entire area. Control System Features Soldering tips contain zero-crossing solid-state relays that apply 120 V, controlled by the cRIO. The solid-state relays are pulse-width modulated (PWM) at a 30% duty cycle within a closed-loop temperature PID to hit the required temperatures for the soldering recipes, with a thermocouple providing temperature readings.
Recipe editing within the LabVIEW user interface (UI) enables the operator to sequence where the hotbar moves in x- and y-axis positioning, set the required force and heat, and loop the instructions for the bed's entirety. The controls cabinet houses eight third-party stepper drives – one for each axis of movement. The NI 9512 C Series Motor Drive Interface Module instructs the stepper motor drive when to move, and the drive amplifies the voltage to the motor. It contains a 15-pin DSUB for the drive step command signals, and a 20-pin MDR connector provides incremental encoder feedback inputs. Multiple 24 V power supplies ensure the required fixture current. An industrial PC provides an operating system that displays the camera feed in real time for operator viewing during the alignment and printing functions. Relays ensure high-voltage and high-current safety. Delivering the Outcome The Cyth control system has enabled our customer to bring their hotbar bonder to market. The NI CompactRIO and LabVIEW software platforms enable us to provide deterministic controls in a real-time system that consistently executes tasks within specific time constraints. The user programmability of the fixture's software interface also greatly enables the end customer. From development to deployment, Cyth has assisted our customer with a turnkey controls system for LCD hotbar bonding.
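The closed-loop temperature control described above, a PID whose output sets the solid-state relay's PWM duty cycle, can be sketched as follows. The gains, update period, and class names are illustrative assumptions, not the fixture's actual LabVIEW implementation.

```python
class PID:
    """Minimal PID controller whose output is clamped to a 0..1 SSR duty cycle."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint_c, measured_c):
        """One control step: thermocouple reading in, relay duty cycle out."""
        error = setpoint_c - measured_c
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        duty = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to a valid PWM duty cycle for the zero-crossing solid-state relay.
        return min(1.0, max(0.0, duty))
```

Each loop iteration, the thermocouple reading is fed in and the returned duty cycle is applied to the relay for the next PWM period, closing the loop around the soldering tip temperature.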

  • Multichannel Frequency Synthesizer ATE System

    National Instruments automated test fixture featuring PXI hardware. The Challenge Design, develop, and deploy a flexible and precise automated test equipment (ATE) system for a 6-channel tunable and a 4-channel fixed-frequency synthesizer. The Solution Using the LabVIEW graphical system design environment with NI RF hardware to develop a flexible and high-speed ATE system that uses the latest technology and saves time and money. Technology NI PXIe-6537 module for 2-channel TTL pulse generation NI PXIe-5162 4-channel oscilloscope NI PXIe-2543 module PXI-2596 module NI PXIe-5652 signal generator for test path calibration NI PXIe-5450 signal generator for DUT reference frequency (75 MHz) PXI-5691 amplifier for splitter loss compensation USB-5680 power meter NI PXIe-5663 vector signal analyzer (external LO mode) with QuickSyn Problem Background and Solution Our customer designs and manufactures high-performance RF signal sources using frequency synthesis techniques for generating an output frequency, which support a wide range of commercial and industrial RF applications. Manual testing of the customer's device under test (DUT) includes 10 measurements at three frequencies (the start, middle, and end frequency of the synthesizer's tunable bandwidth) and requires 5–8 hours of a professional engineer's time. The DUT also supports pulse modulation through two input TTL channels. Individual test design, manual assembly, system calibration, and reporting are the most time-consuming procedures for DUT engineers. Our company has developed a 12-channel frequency synthesizer ATE system with testing capabilities from 10 MHz to 6.6 GHz.
List of measurements includes: output signal frequency range; maximum frequency deviation from nominal value; output power; frequency settling time; amplitude modulation depth; amplitude-frequency response (flatness) in the tunable bandwidth; delay instability of an output RF pulse versus the input synchronization pulse; rising/falling edge delays of an RF pulse versus the input IF pulse rising/falling edges; radio pulse rise and fall time; RF pulse amplitude flatness; radio pulse amplitude instabilities generated in 0.5 s; phase noise at offsets from the carrier of 1 kHz to 5 MHz; output signal amplitude noise; and spurious emissions, harmonics, and subharmonics. The customer's synthesizer phase noise was sufficiently low at -120 dBc/Hz at 800 MHz. With our current configuration, users can achieve residual FM specifications, low non-harmonic spurs, and excellent SSB phase noise down to -135 dBc/Hz (800 MHz, 10 kHz offset). Conclusion It took our team 4.5 months to organize the project, design the ATE system architecture, and develop, program, and install the system at the customer site. Using our ATE system, the customer can decrease testing time by up to 30X and measure 25 parameters across 10 channels and 400 frequency steps (10 MHz to 6.6 GHz). Original Authors: Davit Zargaryan, 10X Engineering LLC Edited by Cyth Systems

  • Circaflex & NI Single-Board RIO Power Syringe Lubrication Inspection Demo

Circaflex and NI Single-Board RIO control the syringe lubrication inspection demonstration. The Challenge A pharmaceutical test and validation company approached us, requiring a tradeshow demonstration capable of showcasing their test and measurement process for the inspection of silicone lubricant utilized in self-administering syringes.   The Syringe Lubrication Inspection Solution We paired the NI Single-Board RIO 9651 (sbRIO) SOM with our Circaflex embedded control board to showcase the control and monitoring of a machine vision solution that captures images for syringe lubrication inspection, improving measured quality assurance.   The Story EpiPens are devices used to administer medication to an individual experiencing a severe allergic reaction, also known as anaphylaxis. Because the EpiPen blocks the body's response to an allergen, the importance of the device administering itself correctly in critical situations could not be higher. The product's success depends on its ability to administer a predetermined drug dosage every time.   Our clients ensure that this occurs through the test and measurement of the silicone lubrication located in the interior of the self-administering syringes. Their system captures camera images of each syringe for inspection and analysis to meet strict FDA medical standards and detect defective syringes along the line. They asked us to create a fully capable demonstration system that they could use to showcase their processes. Cyth's Circaflex embedded control board is used to control inputs and outputs (pulse and steps) of the demonstration's LabVIEW motor control architectures. The Process An operator places a syringe in the rotating holder, located in the system housing. The system's gripper holds the syringe in a vertical position while the first stepper motor rotates the syringe at a predefined rate. A custom LED array casts and reflects light from the syringe towards a camera.
The light reflected off the syringe is then gathered by our camera to recreate a two-dimensional image of the lubrication located in the syringe's interior. A programmable logic controller (PLC) strobes the lighting in tandem with the camera's capture sequence. The deterministic nature of Circaflex and the sbRIO enabled synchronization of the camera and lighting for a predefined exposure time, ensuring consistent lighting and improved imaging consistency. Using software to stitch together a high-definition image, we can accurately quantify the coating of the syringe's interior lubricant. All of this is controlled and synchronized using the NI sbRIO-9651 SOM and the Cyth Circaflex platform. The LabVIEW motor control architectures measuring inputs and outputs (pulse and steps) are controlled by the pairing of the NI Single-Board RIO 9651 (sbRIO) SOM with our Circaflex embedded control board for high-speed data acquisition and measurement. The system's enclosure houses the stepper motors, camera, and hardware required to image the syringe.   Delivering the Outcome Throughout the Power Syringe Lubrication Inspection project, our sales and engineering teams collaborated closely with the client to ensure their timeline and project requirements were met. We provided the client with a high-quality inspection system with additional tradeshow demonstration features that fulfilled their needs and met their budgetary requirements. This included the system's ability to scan and render a 2D image of a syringe's interior lubricant for comparison and analysis, and to present this data live using their test and measurement software. Our improved system and high-quality inspection processes now ensure that our customer can showcase their improved silicone lubricant inspection technology.   
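The stitching idea described above can be sketched in a few lines. This is an illustrative Python/NumPy simplification of the concept (the function name, frame sizes, and approach are ours, not the actual inspection software): as the syringe rotates, a narrow vertical strip is taken from each camera frame, and the strips are laid side by side to build an "unwrapped" 2D image of the interior coating.

```python
import numpy as np

# Illustrative sketch only: unwrap a rotating cylindrical surface into a
# flat 2D image by taking a one-pixel-wide center strip from each frame.
def unwrap_rotation(frames):
    """frames: list of 2D grayscale arrays, one per rotation increment."""
    strips = []
    for frame in frames:
        center = frame.shape[1] // 2          # center column of this frame
        strips.append(frame[:, center:center + 1])
    return np.hstack(strips)                  # strips side by side

# 36 synthetic frames (one per 10 degrees of rotation), each 100x64 pixels
frames = [np.random.rand(100, 64) for _ in range(36)]
unwrapped = unwrap_rotation(frames)
print(unwrapped.shape)   # one column per rotation increment
```

A real system would use a wider strip, correct for lens and cylinder geometry, and blend overlapping regions, but the basic unwrap-and-stack structure is the same.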
Technical Specifications 2 x Applied Motion NEMA 17 Integrated Drive + Motor with Encoder 1 x Applied Motion NEMA 23 Integrated Drive + Motor with Encoder 1 x 20-Megapixel CMOS Global Shutter Camera 1 x Telecentric HP Illuminator (beam diameter 60 mm), White 1 x RC Series LED Strobe Controller 1 x NI sbRIO-9651 SOM (System on Module) 1 x Circaflex 315

  • Power Line Monitoring Using PowerFlex

PowerFlex - Power Line Monitoring & Measurement System PowerFlex is a ready-to-use platform for energy monitoring and measurement of signals directly from the power line. Typical Applications: Continuously monitor up to 16 mains or auxiliary signals 24 x 7 x 365 Trigger events from any input based on standard or custom trigger definitions Record event data on all channels, and perform calculations and measurements Store data nearly indefinitely on the device Ethernet, fiber-optic, or wireless secure networking The Challenge An industrial controls provider approached us with the need for a system to provide 24/7 power grid monitoring in extreme weather conditions. The Solution Using Cyth Circaflex and NI Single-Board RIO hardware, we designed a rugged monitoring system that continuously monitors the voltages across circuit breakers used in power grid applications. The Cyth Process Substations are the locations in a grid where power is stepped down from high transmission voltages to levels that are safe for homes and businesses. To maintain fail-safes in this process, circuit breakers are installed to prevent overcurrent or short-circuit failures. These circuit breakers must be tested at substations annually according to state and federal law, and this task has traditionally been performed by on-site work crews. An industrial control provider approached us with the need to mitigate the risk of work crews visiting substations in person by creating a system that could monitor and test these breakers remotely. Monitoring circuit breakers remotely requires a system to run 24/7. Our engineering team began by designing hardware that would run headless while continuously monitoring live events in real time. This hardware, the NI Single-Board RIO paired with our Circaflex control system, provided the I/O capabilities required of the system.
The NI sbRIO offers a user-programmable field-programmable gate array (FPGA) which, when compiled, acts similarly to a hard-coded, custom silicon chip. FPGAs are extremely deterministic and robust, and using the FPGA in this application ensured that all information on short-circuit events was captured. Complementing the capabilities of the NI sbRIO, our engineering team designed the Circaflex 580, an embedded signal conditioning board. We applied signal conditioning to step down the high voltages present at the substation (50-150 kV) to a signal level the sbRIO was capable of reading. When the breaker is tripped, the unit performs real-time analysis of the voltage waveforms using proprietary algorithms to ensure the breaker is functioning within specifications. The Circaflex and NI sbRIO controller provide the unit with the conditioning, tracking, and data storage capabilities required to monitor the voltages present at the substation. Predesigned I/O is suitable as-is for most energy monitoring applications: 12 AC voltage measurements 3 current transducers (CTs) 8 kV isolation 10k samples/sec synchronous data Customizable voltage and current ranges Powered by AC line power Unlimited Customization Options Unlimited sensor inputs Signal conditioning External devices or cameras Communication / networking Customization not required, but... fully customizable if necessary
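The event-capture behavior described above, recording the waveform around a breaker trip, can be illustrated with a simple software sketch. This Python snippet is ours and is only conceptually similar to what the FPGA does in hardware: a ring buffer holds pre-trigger history, and when a sample exceeds a threshold, the surrounding window is captured. All names and threshold values are illustrative, not the actual PowerFlex implementation.

```python
from collections import deque

# Conceptual sketch of a software event trigger with pre-trigger history.
def capture_event(samples, threshold, pre=5, post=5):
    """Scan a sample stream; on the first excursion beyond the threshold,
    return the surrounding window (pre-trigger + trigger + post-trigger)."""
    history = deque(maxlen=pre)   # ring buffer of recent samples
    it = iter(samples)
    for s in it:
        if abs(s) > threshold:    # trigger condition
            window = list(history) + [s]
            for _ in range(post): # grab post-trigger samples
                try:
                    window.append(next(it))
                except StopIteration:
                    break
            return window
        history.append(s)
    return None                   # no event found

# A quiet line signal with one short-circuit-like excursion
signal = [0.1, 0.2, 0.1, 0.3, 0.2, 9.5, 0.4, 0.2, 0.1, 0.0, 0.1]
print(capture_event(signal, threshold=5.0, pre=3, post=3))
```

In the real system this logic runs on the FPGA at the full sample rate, so no samples are lost while the trigger condition is evaluated.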

  • An Overview of the PXI Platform

Powered by software, PXI is a rugged PC-based platform for measurement and automation systems. PXI combines PCI electrical-bus features with the modular, Eurocard packaging of CompactPCI and then adds specialized synchronization buses and key software features. PXI is both a high-performance and low-cost deployment platform for applications such as manufacturing test, military and aerospace, machine monitoring, automotive, and industrial test. Developed in 1997 and launched in 1998, PXI is an open industry standard governed by the PXI Systems Alliance (PXISA), a group of more than 70 companies chartered to promote the PXI standard, ensure interoperability, and maintain the PXI specification across its mechanical, electrical, and software architectures. Figure 1: The PXISA defines requirements to ensure interoperability between vendors, and leaves flexibility for vendor-defined functionality. PXI systems are composed of three main hardware components: chassis, controller, and peripheral modules. The hardware systems are driven by software, often with individual portions of LabVIEW, C/C++, .NET, or Python code being organized by test management software (for example, TestStand). Figure 2: A PXI system includes a chassis, controller, instrumentation, and software. PXI Platform Chassis The PXI chassis is the backbone of a PXI system and is comparable to the mechanical enclosure and motherboard of a desktop PC. It provides power, cooling, and a communication bus to the system, and supports multiple instrumentation modules within the same enclosure. PXI uses commercial PC-based PCI and PCI Express bus technology while combining rugged CompactPCI modular packaging, as well as key timing and synchronization features. Chassis range in size from four to 18 slots to fit the needs of any application, whether the system is intended to be portable, benchtop, rack-mount, or embedded. Figure 3: NI PXI chassis vary in size from four to 18 slots. 
PCI and PCI Express Communication The PCI bus gained adoption as a mainstream computer bus in the mid-1990s as a parallel bus with a theoretical maximum of 132 MB/s shared bandwidth. PCI Express was introduced in 2003 as an improvement to the PCI standard. The new standard replaced the shared bus used for PCI with a switched, point-to-point architecture, which gives each device its own direct access to the bus. Unlike PCI, which divides bandwidth between all devices on the bus, PCI Express provides each device with its own dedicated data pipeline. Data is sent serially in packets through pairs of transmit-and-receive signals called lanes, which enable 250 MB/s theoretical bandwidth per direction, per lane for PCI Express 1.0. Since the introduction of PCI Express, the standard has continued to evolve to allow faster data rates while maintaining backward compatibility. PCI Express 2.0 doubles the per-lane theoretical bandwidth to 500 MB/s per direction, and PCI Express 3.0 doubles this again to 1 GB/s per direction, per lane. Multiple lanes can also be grouped together into x2 ("by two"), x4, x8, x12, and x16 lane widths to further increase bandwidth capabilities. Figure 4: PCI Express provides a high data throughput and low communication latency bus, ideal for test and measurement applications. Equivalently, the PXI Express standard evolved from the PXI standard to incorporate the PCI Express bus. This increased bandwidth allows PXI Express to meet even more application needs like high-speed digitizer data streaming to disk, high-speed digital protocol analysis, and large-channel-count DAQ systems for structural and acoustic test. Because the PXI Express backplane integrates PCI Express while still preserving compatibility with PXI modules, users benefit from increased bandwidth while maintaining backward compatibility with legacy PXI systems. PXI Express specifies PXI Express hybrid slots to deliver signals for both PCI and PCI Express. 
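The per-lane figures quoted above multiply out directly by lane width. As a quick worked example (the helper name is ours, purely for illustration):

```python
# Theoretical per-direction bandwidth per lane, in MB/s, for each PCI
# Express generation cited in the text.
PER_LANE_MB_S = {"1.0": 250, "2.0": 500, "3.0": 1000}

def link_bandwidth_mb_s(gen: str, lanes: int) -> int:
    """Theoretical one-direction bandwidth for a PCI Express link."""
    return PER_LANE_MB_S[gen] * lanes

print(link_bandwidth_mb_s("2.0", 4))   # x4 Gen2 link: 2000 MB/s
print(link_bandwidth_mb_s("3.0", 16))  # x16 Gen3 link: 16000 MB/s
```

Real achievable throughput is lower than these theoretical maxima due to packet and encoding overhead, but the scaling with generation and lane count holds.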
With PCI Express electrical lines connecting the system slot controller to the hybrid slots of the backplane, PXI Express provides a high-bandwidth path from the controller to backplane slots. Using a PCI Express-to-PCI bridge, PXI Express provides PCI signaling to all PXI and PXI Express slots to ensure compatibility with hybrid-compatible PXI modules on the backplane. In doing so, these PXI Express hybrid slots provide backward compatibility that is not available with desktop PC card-edge connectors, in which a single slot cannot support both PCI and PCI Express signaling. Timing and Synchronization One of the key advantages of a PXI system is the integrated timing and synchronization. A PXI chassis incorporates a dedicated 10 MHz system reference clock, PXI trigger bus, star trigger bus, and slot-to-slot local bus to address the need for advanced timing and synchronization. These timing signals are dedicated signals in addition to the communication architecture. The 10 MHz clock within the chassis can be exported or replaced with a higher-stability reference. This allows the sharing of the 10 MHz reference clock between multiple chassis and other instruments that can accept a 10 MHz reference. By sharing this 10 MHz reference, higher sample rate clocks can be phase-locked (via a phase-locked loop, or PLL) to the stable reference, improving the sample alignment of multiple PXI instruments. In addition to the reference clock, PXI provides eight transistor-transistor logic (TTL) lines as a trigger bus. This allows any module in the system to set a trigger that can be seen from any other module. Finally, the local bus provides a means to establish dedicated communication between adjacent modules. Building on PXI capabilities, PXI Express also provides a 100 MHz differential system clock, differential signaling, and differential star triggers. 
By using differential clocking and synchronization, PXI Express systems benefit from increased noise immunity for instrumentation clocks and the ability to transmit at higher-frequency rates. PXI Express chassis provide these more advanced timing and synchronization capabilities in addition to all the standard PXI timing and synchronization signaling. Figure 5: The timing and synchronization capabilities of PXI and PXI Express chassis provide the best-in-class integration of instrumentation and I/O modules. In addition to the signal-based methods of synchronizing PXI and PXI Express, these systems can also leverage synchronization methods using absolute time. A variety of sources including GPS, IEEE 1588, or IRIG can provide absolute time with the use of an additional timing module. These protocols transmit time information in a packet so systems can correlate their time. PXI systems have been deployed over large distances without sharing physical clocks or triggers. Instead, they rely on sources such as GPS to synchronize their measurements. Power and Cooling The I/O and instrumentation modules that populate a PXI chassis vary in their amount of required power. NI PXI Express chassis provide at least 38.25 W of power and cooling to every peripheral slot; some chassis push slot cooling capacity even further and can provide 58 W or 82 W of cooling to a single slot. This extra power and cooling make advanced capabilities of high-performance modules, such as digitizers, high-speed digital I/O, and RF modules, possible in applications that may require continuous acquisition or high-speed testing. Chassis vary in total power, so it is always a best practice to perform a system-level power budget when configuring a new system. Figure 6: The PXIe-1085 24 GB/s chassis includes high-performance, field-replaceable fans. 
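The system-level power budget recommended above is simple arithmetic: sum the per-module power draw and compare it against the chassis limits. A minimal sketch (module wattages and the chassis total here are illustrative values, not a specific NI configuration; only the 38.25 W per-slot minimum comes from the text):

```python
# Simple system-level power budget check for a PXI chassis configuration.
PER_SLOT_LIMIT_W = 38.25   # minimum per-slot power/cooling cited above

def check_power_budget(module_watts, chassis_total_w):
    """Flag modules exceeding the per-slot limit and check the chassis total."""
    over_slot = [w for w in module_watts if w > PER_SLOT_LIMIT_W]
    total = sum(module_watts)
    return {
        "total_w": total,
        "within_total": total <= chassis_total_w,
        "modules_over_slot_limit": over_slot,
    }

# Hypothetical system: four modules in a chassis rated for 791 W total
budget = check_power_budget([20.0, 35.5, 58.0, 12.0], chassis_total_w=791.0)
print(budget)
```

A module flagged over the per-slot limit is not necessarily a problem; as noted above, some chassis provide 58 W or 82 W to specific slots, so the check should be done against the datasheet of the actual chassis.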
Controller As defined by the PXI Hardware Specification, all PXI chassis contain a system controller slot located in the leftmost slot of the chassis (slot 1). Controller options include remote control modules that allow PXI system control from a desktop, workstation, server, or laptop computer as well as high-performance embedded controllers with either a Microsoft OS (Windows 7/10) or a real-time OS (LabVIEW Real-Time). PXI Embedded Controllers PXI embedded controllers eliminate the need for an external PC and provide a high-performance, yet compact in-chassis embedded computer solution for your PXI or PXI Express measurement system. These embedded controllers have extended temperature, shock, and vibration specifications and come with an extensive feature list such as the latest integrated CPUs, hard drive, memory, Ethernet, video, serial, USB, and other peripherals. By providing these peripherals on the controller’s front panel, overall system cost is minimized because you don’t need to purchase individual PXI or PXI Express cards to gain similar functionality. The controller comes pre-configured with LabVIEW Real-Time or Microsoft Windows and all the device drivers pre-installed. NI’s embedded controllers also have managed life cycles and offer vendor support to ensure test system longevity and compatibility with the PXI ecosystem. PXI embedded controllers are typically built using standard PC components in a small PXI package. Performance benchmarking done by NI R&D also ensures the development of controllers that are optimized for test and measurement applications to guarantee that code and algorithms run faster. For example, the PXIe-8880 has a 2.3 GHz eight-core Intel Xeon E5-2618L v3 processor (3.4 GHz maximum in single-core, Turbo Boost mode), up to 24 GB of DDR4 RAM, solid-state drive, two Gigabit Ethernet ports, SMB trigger, and standard PC peripherals like two USB 3.0 ports, four USB 2.0 ports, DisplayPort, and GPIB. 
When NI releases a new PXI embedded controller, it offers the controller shortly after major computer manufacturers like Dell or HP release computers featuring the same high-performance embedded mobile processor. Because NI has been in the business of releasing PXI embedded controllers for more than 15 years, the company has developed a close working relationship with key processor manufacturers such as Intel and Advanced Micro Devices (AMD). For example, NI is an associate member of the Intel Embedded Alliance, which offers access to the latest Intel product roadmaps and samples. Figure 7: The PXIe-8880 embedded controller, featuring the eight-core Intel Xeon E5 processor, is ideal for high-performance, high-throughput, and computationally intensive test and measurement applications. In addition to computing performance, I/O bandwidth plays a critical role in designing instrumentation systems. As modern test and measurement systems become more complex, there is a growing need to exchange more and more data between the instruments and the system controller. With the introduction of PCI Express and PXI Express, NI embedded controllers have met this need and now deliver up to 24 GB/s of system bandwidth to the PXI Express chassis backplane. Figure 8: NI has continued to deliver the latest and most powerful processing technology to the PXI platform for the last 20 years. Rack-Mount Controllers To provide an alternative computing and control option, NI offers external 1U rack-mount controllers. They feature high-performance multicore processors for intensive computation and multiple removable hard drives for high data storage capacity and high-speed streaming to disk. These controllers are designed to be used with MXI-Express and MXI-4 remote controllers for interfacing to PXI or PXI Express chassis. In this configuration, the PXI/PXI Express devices in the PXI system appear as local PCI/PCI Express devices in the rack-mount controller. 
Figure 9: Rack-mount controllers with MXI-Express or MXI-4 remote controllers can be used to control PXI or PXI Express chassis. PC Control of PXI Through MXI-Express technology, PXI Remote Control Modules provide a simple, transparent connection between a host machine, like a desktop PC, and the PXI chassis and instruments. During start-up, the computer recognizes all peripheral modules in the PXI system as PCI boards, allowing further interaction with these devices through the controller. PC control of PXI consists of a PCI/PCI Express board in your computer and a PXI/PXI Express module in slot one of your PXI system, connected by a copper or fiber-optic cable. Copper cables offer higher data throughput capability, but are generally shorter (1 to 10 meters), while fiber-optic cables are available in much longer options (up to 100 meters), but may have lower data throughput capability. Most PCs are immediately compatible with PXI remote control solutions. Furthermore, compatibility with MXI-Express devices is extended to even more PCs through NI's MXI-Express BIOS Compatibility Software. Laptop Control of PXI You can equivalently control a PXI Express system from a laptop computer using the PXIe-8301 remote control module from National Instruments. Laptop control of PXI Express consists of a PXI Express module in slot one of your chassis and a Thunderbolt 3™ cable connected to your laptop. Figure 11: The PXIe-8301 remote control module is ideal for ultra-portable applications. Multichassis Configurations Multichassis configurations allow two or more PXI chassis to be managed by a single master controller. As a unified system, multiple chassis can take advantage of benefits such as cross-chassis synchronization, separation of instrument types to optimize data throughput, and peer-to-peer transfers between instruments in separate chassis. The most common method of forming a multichassis system is through daisy chaining. 
A daisy-chain topology consists of one or more slave (downstream) chassis connected in series to a master (upstream) chassis that is controlled through a PC or PXI embedded controller. When using a daisy-chain topology, each slave chassis is visible to and controllable by the host machine. Figure 12: A PXIe-8364 host interface module is placed in a peripheral slot of the master chassis containing an embedded controller. While the above solution requires an additional module in a peripheral slot for daisy chaining, some PXI Remote Control Modules contain built-in daisy-chaining capability through the inclusion of two ports—one for an upstream connection and one for a downstream connection. Figure 13: A desktop PC with a PCIe-8375 is connected to a master PXI Express chassis through a PXIe-8375 remote control module. Some host interface cards contain two downstream ports, allowing for a star topology. Rather than connecting two slave chassis in series (daisy chain), the star topology connects two slave chassis in parallel, allowing each chassis to communicate directly to the host rather than through an intermediary chassis. Figure 14: The PCIe-8362 host interface card contains two MXI-Express connections, allowing two PXI Express chassis to be controlled through a desktop PC using a star topology. Peripheral Modules NI offers more than 600 PXI modules. Because PXI is an open industry standard, nearly 1,500 products are available from more than 70 different instrument vendors. In addition, since PXI is directly compatible with CompactPCI, you can use any 3U CompactPCI module in a PXI system as well. A common misconception regarding the small PXI footprint is that this space savings comes at the cost of performance. It is important to understand that the PXI platform can offer this space savings not by lowering performance but by modularizing the system. Every traditional boxed instrument requires a separate processing circuitry system, display, and physical interface. 
For PXI-based instrumentation systems, these functions are designated to specific components shared among multiple instruments. A PXI embedded controller acts like a central processing and control hub for all the different instruments in the PXI chassis. It also provides a human interface through its connectivity to external peripherals such as a video monitor, keyboard, and mouse. Figure 15: NI offers over 600 different PXI modules. Software running on the embedded controller interacts with the different PXI instruments to define the actual functionality of the test system. With these standard functions designated to an embedded controller that offers state-of-the-art performance, PXI instruments need to contain only the actual instrumentation circuitry, which provides effective performance in a small footprint. Software The development and operation of a Windows-based PXI or PXI Express system is no different from that of a standard Windows-based PC. Therefore, you do not have to rewrite existing application software or learn new programming techniques when moving between PC and PXI-based systems. Using PXI, you can reduce your development time and quickly automate your instruments by using G in LabVIEW, an intuitive graphical programming language that is the industry standard for test, or NI LabWindows™/CVI for C development. You can also use other programming languages such as those that are part of Visual Studio .NET, Visual Basic, Python, and C/C++. In addition, PXI controllers can run applications developed with test management software such as TestStand. Test management software includes not only a test executive, but also a fully featured test architecture that provides you the flexibility to customize behavior to meet specific needs like sequencing, branching/looping, report generation, and database integration. 
Test management software along with PXI modular instrumentation provides an integrated solution that can both simplify test development and reduce maintenance for long-term success. As an alternative to Windows-based systems, you can use a real-time software architecture for time-critical applications requiring deterministic loop rates and headless operation (no keyboard, mouse, or monitor). Real-time OSs help you prioritize tasks so that the most critical task always takes control of the processor, reducing jitter. You can simplify the development of real-time systems by using real-time versions of industry-standard development environments such as the LabVIEW Real-Time and LabWindows/CVI Real-Time modules. Engineers building dynamic or hardware-in-the-loop PXI test systems can use real-time testing software such as VeriStand to further reduce development time. Figure 16: TestStand manages a PXI system’s test code regardless of the programming language used.

  • Controlling a Stepper Motor Using the RIO Platform and LabVIEW

Stepper motors are widely used in applications requiring precise control of position and speed, such as robotics, CNC machines, and camera positioning systems. In this article, we'll explore how to control a stepper motor using National Instruments' Single-Board RIO (sbRIO) platform and LabVIEW, specifically utilizing LabVIEW RT and LabVIEW FPGA for real-time control and precise signal generation. What Is a Stepper Motor? A stepper motor is an electromechanical device that moves in discrete steps rather than in continuous motion like a DC motor. Each step corresponds to a fraction of a revolution, enabling precise control of angular movement. A typical stepper motor. Stepper motors are typically driven by a series of digital pulses that modify a waveform sent to electromagnets in the stepper. The common wiring setup of the electromagnets includes two main windings: A/A' (A and A-not) and B/B' (B and B-not). These windings represent the coils that, when energized in sequence, cause the motor to rotate to a designated rotary position. The diagram below illustrates a typical stepper motor wiring setup, which helps to convey this concept. A typical stepper motor wiring setup. Role of the RIO Platform, LabVIEW, and CircaFlex™ in Stepper Control National Instruments' Single-Board RIO (sbRIO) and CircaFlex™ boards offer a flexible, powerful platform for controlling stepper motors. However, they cannot directly connect to the stepper motor's windings, as they do not supply the necessary current and voltage in the correct waveform. This is where a stepper motor driver comes into play. The sbRIO, programmed using LabVIEW RT and LabVIEW FPGA, handles the logic of motor control: deciding how far to move, how fast, and in which direction. These decisions are based on various inputs, such as user commands, sensor data, or image processing algorithms. 
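The winding sequence described above can be made concrete in code. The following is an illustrative Python sketch (the actual control logic described in this article runs in LabVIEW FPGA, and the function names here are ours): it shows the classic full-step sequence, in which the A/A' and B/B' windings are energized in order to rotate the rotor step by step.

```python
# Full-step drive sequence for a two-phase stepper (illustrative sketch).
# Each tuple is (A, A', B, B'): 1 = winding energized, 0 = off.
# Stepping forward through the sequence rotates the motor one way;
# stepping backward reverses it. Real stepper drivers generate this
# sequencing (and finer microstepping) in hardware.
FULL_STEP_SEQUENCE = [
    (1, 0, 1, 0),  # A and B energized
    (0, 1, 1, 0),  # A' and B energized
    (0, 1, 0, 1),  # A' and B' energized
    (1, 0, 0, 1),  # A and B' energized
]

def winding_states(step_index: int, direction: int = 1):
    """Return the winding pattern for a given step count and direction (+1/-1)."""
    return FULL_STEP_SEQUENCE[(step_index * direction) % 4]

if __name__ == "__main__":
    for step in range(8):
        print(step, winding_states(step))
```

Calling `winding_states` with `direction=-1` walks the same sequence in reverse, which is exactly how direction control works at the winding level.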
Example of Stepper Motor Movement Calculation To control a stepper motor effectively, we need to calculate the steps required to achieve a desired movement. Suppose you want to move an object a certain distance based on an image captured by a camera. The sbRIO processes the image and converts the necessary movement from pixels to millimeters. Then, using system specifications like the ball screw's pitch and the stepper motor's resolution, you can calculate the exact number of steps required. Let's assume: The ball screw pitch is 5 turns per millimeter. The stepper motor has 2,000 steps per revolution. If the software determines that the motor needs to move 3 mm, the necessary number of steps can be calculated as follows: Steps required = Distance in mm × Turns per mm × Steps per revolution Steps required = 3 mm × 5 turns/mm × 2,000 steps/turn = 30,000 steps This example demonstrates how easily sbRIO and CircaFlex™ can convert real-world units into precise motor movements. Outputting the Steps: Velocity and Frequency Calculation Once the required number of steps is calculated, the system needs to output the steps at the correct frequency to control the motor's speed. If we want the object to move at 10 mm per second, we can calculate the required step frequency: Frequency (steps per second) = (Steps required / Distance to travel) × Speed Frequency (steps per second) = (30,000 steps / 3 mm) × 10 mm/sec = 100,000 steps/sec The software in sbRIO and CircaFlex™ can then output these steps to the motor driver at the correct frequency. Controlling Duty Cycle and Direction Each stepper motor requires a pulse signal with a defined duty cycle, typically 50%. This means that within each pulse period, the signal is high for half the period and low for the other half; at 100,000 steps/sec, for example, that is 5 microseconds high and 5 microseconds low. The sbRIO outputs these pulses through its FPGA using LabVIEW FPGA, which gives precise control over timing and signal generation. 
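The step-count and frequency calculations above can be sketched in a few lines. This Python snippet is illustrative only (the real implementation is LabVIEW code on the sbRIO, and the function names are ours); it uses the article's example values of 5 turns/mm pitch, 2,000 steps/rev, a 3 mm move, and a 10 mm/s speed.

```python
# Worked example: convert a desired linear move into a step count and a
# step frequency, using the values from the text above.
TURNS_PER_MM = 5        # ball screw pitch
STEPS_PER_REV = 2000    # stepper motor resolution

def steps_for_distance(distance_mm: float) -> int:
    """Number of step pulses needed to travel distance_mm."""
    return round(distance_mm * TURNS_PER_MM * STEPS_PER_REV)

def step_frequency(speed_mm_per_s: float) -> float:
    """Step pulse rate (steps/s) needed to move at speed_mm_per_s."""
    # steps/s = (steps per mm) * (mm per s)
    return TURNS_PER_MM * STEPS_PER_REV * speed_mm_per_s

print(steps_for_distance(3))   # 30000 steps for a 3 mm move
print(step_frequency(10))      # 100000 steps/s for 10 mm/s
```

These two numbers, 30,000 steps at 100,000 steps/s, are exactly what the FPGA pulse generator is configured with before a move begins.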
Additionally, the software determines the direction of movement, either forward or reverse, based on system logic or user inputs. The correct signal is sent to the stepper motor driver, which then outputs the appropriate waveforms to the motor windings. Acceleration and Deceleration Profiles It's crucial not to start or stop the stepper motor abruptly. Rapid acceleration or deceleration can cause missed steps or mechanical issues. Instead, an acceleration profile should be calculated to gradually increase the motor's speed. For example, if we want to reach 3 mm/sec, the software can apply a 30 mm/sec² acceleration, which brings the motor to full speed in 100 milliseconds. In this diagram, the motor accelerates over the first 100 milliseconds, maintains the target speed, and then decelerates to zero before stopping. Note that in more sophisticated systems, the acceleration does not have to be strictly constant: the acceleration itself can ramp up and down. This more advanced calculation takes into consideration a desired value of "jerk," which is the rate of change of acceleration, or the second derivative of velocity. As the amount of change in acceleration (mm/s²) per second, jerk is measured in mm/s³. Integrating sbRIO and CircaFlex™ with the Stepper Driver Once all the necessary parameters (steps, frequency, duty cycle, and direction) are calculated, the sbRIO or CircaFlex™ outputs the corresponding signals to the stepper motor driver. The driver amplifies these signals and converts them into the correct waveforms for the motor windings, enabling precise motor control. The stepper motor driver has its own variety of settings, depending on the manufacturer. Conclusion Controlling a stepper motor using the sbRIO platform and LabVIEW provides a robust, flexible solution for embedded applications requiring high-precision motor control. 
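The constant-acceleration profile discussed in the acceleration section can be sketched as a trapezoidal velocity curve: accelerate at a fixed rate, cruise at the target speed, then decelerate symmetrically. This Python sketch is illustrative only (our own function, using the article's 3 mm/s target and 30 mm/s² acceleration); on the sbRIO this profile would drive the FPGA pulse frequency in real time.

```python
# Sketch of a trapezoidal velocity profile: accelerate, cruise, decelerate.
def trapezoidal_profile(target_speed, accel, total_time, dt=0.001):
    """Return (time, velocity) samples for an accel -> cruise -> decel move."""
    ramp_time = target_speed / accel          # 3 / 30 = 0.1 s in the example
    times, velocities = [], []
    t = 0.0
    while t <= total_time:
        if t < ramp_time:                     # accelerating phase
            v = accel * t
        elif t > total_time - ramp_time:      # decelerating phase
            v = accel * (total_time - t)
        else:                                 # cruising at target speed
            v = target_speed
        times.append(t)
        velocities.append(max(v, 0.0))
        t += dt
    return times, velocities

# 3 mm/s target, 30 mm/s^2 accel, 1 second total move
ts, vs = trapezoidal_profile(target_speed=3.0, accel=30.0, total_time=1.0)
print(max(vs))   # velocity peaks at the 3 mm/s target
```

Converting each velocity sample to a step frequency (steps/mm × mm/s) and updating the pulse generator every `dt` yields the smooth ramping behavior described above; an S-curve (jerk-limited) profile would additionally ramp the acceleration itself.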
By leveraging LabVIEW RT and LabVIEW FPGA, engineers can create complex control algorithms that are executed in real time, ensuring smooth, efficient operation. The combination of sbRIO and CircaFlex™ is ideal for embedded control projects, offering powerful integration capabilities with stepper motors and other systems. Whether you're moving an object based on sensor data or controlling motion in a CNC machine, this platform offers a comprehensive, scalable solution. For more information on how to implement stepper motor control using sbRIO, CircaFlex™, and LabVIEW, or to inquire about our integration services, feel free to contact our engineering team. We're happy to assist with any embedded control project. We have the experience and skills to help you choose the correct products to operate a stepper motor, and we can also help demonstrate equipment or provide startup assistance with any equipment you purchase.

  • CompactRIO Revolutionizes 3D Printing

A 3D-Printed Craniomaxillofacial Implant and Model of Facial Reconstruction Surgery The Challenge Industrial laser-based 3D printing processes have been around for many years, but the industry must tackle many challenges such as production throughput, quality assurance, and manufacturing repeatability before 3D printing can become a robust, standardized manufacturing technology. The Solution Materialise developed the Materialise Control Platform (MCP), powered by CompactRIO and LabVIEW, as a ready-to-start software-driven, embedded controller platform specifically for laser-based 3D printing applications. Researchers and engineers can build and improve additive manufacturing processes that are ready for industrial use to support innovation, research, and new applications in the market. Materialise additive manufacturing and 3D printing facility, Leuven, Belgium. (Credit: Materialise). Revolutionizing the World of 3D Printing and Additive Manufacturing Materialise's open and flexible platforms help companies in industries such as healthcare, automotive, aerospace, art and design, and consumer goods to build innovative 3D printing applications that make the world a better and healthier place. Examples include customized implants that have helped people out of their wheelchairs, hearing aids that have enhanced social lives, and the improved designs of the cars we drive and the planes we fly in. Titanium Aerospace Part with 63% weight reduction. Challenges That Prevent More Broad-Based Adoption of Additive Manufacturing Traditional machines operate on preformed material geometries (bars, blocks, sheet metal plates, and more), but additive manufacturing starts from pulverized material (powder) or liquid material (resins). This means that in contrast with traditional manufacturing, additive manufacturing not only shapes the geometry of end products, but also defines the material properties. To melt and fuse this material, manufacturers use laser optical systems.
The power and geometrical accuracy of these systems shift as time progresses. Furthermore, as in welding, the process is susceptible to oxidation. This requires complete control of the laser power, beam position, process atmosphere, and machine temperature throughout the entire build, and for some materials also before and after the build (controlled heat-up and cool-down phases). Also, the more an additive manufacturing machine is used, the quicker its internal processes deteriorate. Users mitigate this with preventive maintenance and recalibration, which are time-consuming tasks with a rather long machine standstill as a consequence. When using additive manufacturing, users may calibrate the machine before a certain number of builds. A complete calibration easily consumes half a day of work by a skilled technician for a build that might take only a couple of days. With machines that drift so quickly and involve such complex processes, the industry remains somewhat skeptical about an emerging, revolutionary technology like additive manufacturing. Typically, users demand outstanding quality, high production throughput, and repeatability before they trust this manufacturing approach with the parts that generate revenue for them. Challenges like throughput can gradually be overcome by increasing production capacity, but with repeatability still a challenge, this leads to increased production costs and more waste. However, all of this is still less important than a potential quality issue, which directly impacts customer relationships. Users can benefit from the Materialise Control Platform to innovate and overcome these hurdles one by one. They can reduce the typical calibration downtimes and implement automatic process and quality monitoring and control. Users can also achieve closed-loop rates that have never been achieved before and produce more repeatable parts with higher quality, at higher volume, and at lower cost.
NI cRIO-9030 CompactRIO Controller Using the CompactRIO Platform We selected the CompactRIO platform as the foundation for our solution because it offers an extendable FPGA-based hardware platform with a vast selection of I/O. We extended the platform with scan head and laser interfaces. We developed and added both XY2-100 and SL2-100 scan head communication protocols as C Series interface cards. Specifically, the cRIO-9030 controllers offered great advantages while running NI Linux Real-Time. Our existing C developers, and many of our already existing libraries, could move to the CompactRIO system. The CompactRIO FPGA is crucial for data analysis and interconnecting the I/O in the additive manufacturing machine. The overall process runs at 100 kHz, a 100X improvement over traditional 3D printing machines. This loop speed is hard to keep up with using regular processors. Our own scan head modules rely on another FPGA that processes the laser and scan head signals. Every MCP-based additive manufacturing machine runs at least two processors and two FPGAs, all interconnected in the Materialise Control Platform (MCP). The modularity and openness of the CompactRIO platform make it scalable for our customers. Not everybody needs two or more scan heads or numerous I/O channels. When customers need more than eight modules, they can use an NI-9149 chassis to add another eight modules to the configuration. The high-end cRIO-9030 products include Gigabit Ethernet, IP, and USB camera support, an in-demand feature for high-end additive manufacturing machines for machine inspection. Users can monitor and intervene during the build process and reduce the high costs attached to non-destructive, post-build tests. The additive manufacturing industry is global, so we needed to certify the MCP for sales worldwide.
Having developed custom C Series modules, we validated and tested these modules ourselves, but saved significant time on all CompactRIO components due to the available global certification standards like CE, FCC, UL, and more. Original Authors: Stijn Schacht, Materialise Edited by Cyth Systems

  • Automated Test of RF Tire Sensors using NI USRP Software Defined Radio

Validation of Continental Automotive tire sensors. Project Summary Continental Automotive needed to upgrade the validation of a tire sensor with wireless communication capabilities, at a lower cost than existing test systems. Solution Developing a configurable RF communication test solution using an NI RF generator and receiver card for the ISM band (315 MHz/915 MHz). This was achieved using an NI USRP-2900 card to send and receive RF signals and LabVIEW software to create a configurable and software-defined test approach. Industry Automotive Technology at-a-glance NI USRP-2900 NI LabVIEW Testing RF Communication to Tire Sensors Continental Automotive, a premium supplier of automotive solutions, manufactures various connected sensors, keys, and other equipment that operates according to different RF protocols. Among these communication-capable sensors are some dedicated to continuous tire pressure monitoring, so the driver always knows the tire pressure and receives an alert in the event of abnormal pressure loss. As such, the Continental tire pressure monitoring system prevents a frequent cause of accidents and ensures optimal safety for the driver. It also helps to reduce CO2 emissions and fuel consumption (a tire that is under-inflated by 0.3 bar leads to an over-consumption of 1.5% on average). Left: NI USRP‑2900 is a tunable RF transceiver with full-duplex operation used to test the tire sensor's RF capabilities, Right: Cross-sectional diagram of tire sensor position and functionality. The company's quality requirements call for products to undergo many tests, which are now carried out with dedicated cards using hardware and software designed specifically for this purpose. As a result, in the event that there are changes in customer requirements or the technologies used, it is sometimes necessary to update the dedicated card's software or begin a new hardware design.
Among the most common developments are changes to the size of the frame, the type of modulation, the throughput, the data coding, or the frequency used. Such an approach is costly in terms of resources for development as well as maintenance and prohibits a standardized approach. The company, therefore, needed to find a more practical and less expensive alternative. The Advantages of Software Defined Radio In a context where hundreds of thousands of products are produced every day and where the tolerated error rate is extremely low, an application that tests equipment such as tire pressure sensors or car keys must be robust, reliable, and fast. It is therefore necessary to develop a robust and flexible solution that ultimately has the same reliability as the solution currently being used. This solution needs to be easy to use while remaining complete, so as to be able to adapt to future changes in technologies and protocols. The advantage of SDR technology is that it usually requires only two components: A card for transmitting and receiving the signal. A computer with software that can process this signal. Combining USRP-2900 and LabVIEW Equipment We therefore used the USRP-2900 from NI. It is an RF transceiver that covers the range from 70 MHz to 6 GHz with a maximum instantaneous bandwidth of 20 MHz, characteristics that are well suited to our application's requirements. With this USRP (Universal Software Radio Peripheral) device, we can not only capture RF messages from products under test but also transmit them through two channels, Tx/Rx and Rx. The USRP is driven by LabVIEW software, which carries out the main signal-processing operations for demodulating and decoding the frames coming from the sensors. We can also use LabVIEW to create an intuitive graphical user interface.
Top: LabVIEW Programming for Receiving Signal, Bottom: LabVIEW Block Diagram of Transmitted Signal Sending and Receiving RF Data Frames LabVIEW UI of RF Vector Signal Generation and Receiver Transmission during live test of the wireless tire sensor. With the LabVIEW program, frames contained in the ISM frequency band can be sent and received. We can set the frequency, the baud rate, the modulation (ASK or FSK), and the number of bytes to be received, as well as encode/decode Manchester code. The content of the frames to be sent is customizable. We can send a sequence containing a wake-up frame followed by a defined number of frames containing useful information. (Figure 5) Most of the variables are configurable, which requires a configuration and initialization step when first used. After completion, the program works automatically and doesn't require any further changes. In the event of changes in the transmission and receiving protocol, we simply have to restart the software and amend the parameters during the configuration and initialization step. Then we can use the new protocol. Conclusion The use of the USRP-2900 card and LabVIEW software was a great validation of the SDR approach for transmitting and receiving RF frames for vehicle access or tire pressure monitoring systems. Original Authors: Joram Fillol-Carlini, Continental Automotive France, Wireless Tests Edited by Cyth Systems
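As an illustration of the Manchester coding mentioned above, here is a minimal Python sketch. It uses the IEEE convention (a 0 bit is sent as high-low, a 1 bit as low-high); the convention and the actual LabVIEW implementation Continental uses may differ:

```python
def manchester_encode(bits):
    """Each data bit becomes a two-symbol transition: 0 -> (1, 0), 1 -> (0, 1)."""
    encoded = []
    for bit in bits:
        encoded.extend((0, 1) if bit else (1, 0))
    return encoded

def manchester_decode(symbols):
    """Invert the encoding; a pair with no transition is a coding error."""
    bits = []
    for i in range(0, len(symbols), 2):
        pair = (symbols[i], symbols[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            raise ValueError("invalid Manchester symbol pair")
    return bits

frame = [1, 0, 1, 1, 0, 0, 1, 0]   # example payload bits
assert manchester_decode(manchester_encode(frame)) == frame
```

Because every bit contains a transition, the receiver can recover the bit clock from the symbol stream, which is why Manchester coding is popular for low-rate RF links like these.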

  • Solar-Powered Car Using CompactRIO & LabVIEW

    *As Featured on NI.com Original Authors: Alisdair McClymont, Cambridge University Eco Racing Edited by Cyth Systems The Challenge Using remote data analysis and telemetry to reliably monitor and control the electrical systems of a solar-powered car. The Solution Using NI CompactRIO hardware as the in-car embedded controller to interface with the vehicle controller area network (CAN) bus, to implement the vehicle control algorithm programmed in NI LabVIEW software, and to send and receive data via telemetry radio. Cambridge University Eco Racing (CUER), a student team from the university’s engineering department, designs, builds and races solar-powered cars. Our goal is to win the World Solar Challenge, the world’s premier race for solar vehicles. This 3,000 km race across Australia pushes efficiency and reliability to their limits. CUER first entered the race in 2009 with our car called Endeavor, which averaged more than 70 kmph and achieved a top speed of 121 kmph. However, we finished 14th due to reliability problems. This encouraged us to seek an alternative solution built with National Instruments products. Solar Car Electrical Systems The key electrical components of a solar car are simple – a battery, solar array modules, and a motor. A battery management system (BMS) monitors the state of each battery cell. The motor connects to the high-voltage bus through a high-efficiency three-phase inverter (motor controller), and each solar array module connects to the high-voltage bus through a high-efficiency switch-mode converter known as a maximum power point tracker (MPPT). Each of these devices has a CAN interface and outputs information, such as current, voltage, speed, temperature, and error, about the relevant electrical devices. Vehicle Control The motor controller actively controls the vehicle. It can limit both the current drawn from the high-voltage bus and the motor speed. 
Therefore, the motor controller requires a “desired velocity” and an “allowed current” via CAN. This message is sent by an on-vehicle processor, which monitors the driver inputs and the states of the other electrical systems and decides the values to send to the motor controller. Initially, we used a student-made device to perform this task. Although functional, the device was not reliable and, critically, failed during the 2009 race in Australia. We decided that a more reliable solution was essential for future success. Vehicle Control Using CompactRIO National Instruments, a key sponsor of CUER, provided CompactRIO products for use on solar cars. We used a 2-port, high-speed CAN module to connect to the CAN bus so the cRIO could receive information from the motor controller, BMS, MPPTs and driver inputs. We wrote a LabVIEW program to process this information in real-time and to send control messages to the motor controller and condensed information to the driver display. We used the NI-CAN driver to quickly and easily create a database of all the CAN messages sent out by each device on the network. The program then called on the CAN Frame to Channel Conversion Library to decode and encode messages. This offered a quick, reliable way to process the information on the CAN bus. We wrote the control program in LabVIEW using an object-oriented structure for easier modularization, maintenance, and understanding of the code. We created a class for each device on the bus, including functions to decode and encode messages for that device. We used a state chart to determine the nature of the messages sent to the motor controller. 
The modes of the car include: a "normal" driving mode, where the driver's accelerator controls the current drawn by the motor controller; a "cruise control" mode, where the vehicle maintains a constant speed (we use this mode for almost all of the race); and "reversing" and "braking" modes, where the motor controller uses regenerative braking to minimize the energy lost. Using LabVIEW state charts, we can easily define the actions carried out in each mode and the requirements to transition between states. Most importantly, it is a reliable way to implement vehicle control. The last thing the team wants is for a driver's actions to result in a burnt-out motor. A distinct advantage of using LabVIEW is the ease of running several processing loops in parallel. For example, one loop can send control messages to the motor controller at a constant rate and the system immediately processes CAN messages whenever they are received. Telemetry A reliable telemetry system between the car and the chase vehicle, which follows the car throughout the race, is essential. The driver sees a limited display inside the car, so team members in the chase vehicle must monitor in-depth data and look for any faults or suboptimal performance. CompactRIO gave us a simple solution for implementing this system. We connected a telemetry radio to the serial port on the cRIO module and the control program simply sends packaged data via the serial port. A second telemetry radio in the chase vehicle receives this data and it's processed, again using LabVIEW, on a laptop. The system presents this data so that operators can quickly detect errors or other significant changes in the vehicle state. We also use the telemetry system to send the optimum cruise control speed to the solar car. We calculate the optimum speed using a complex optimization algorithm in LabVIEW, which integrates directly with the weather instruments on board the chase vehicle.
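The mode logic above can be pictured as a small transition table. This Python sketch is a hypothetical simplification of the LabVIEW state chart (the event names are invented for illustration):

```python
# (current mode, event) -> next mode. Unlisted pairs keep the current mode,
# mirroring how a state chart ignores events that are irrelevant in a state.
TRANSITIONS = {
    ("normal", "engage_cruise"): "cruise",
    ("cruise", "accelerator"): "normal",
    ("normal", "brake"): "braking",
    ("cruise", "brake"): "braking",
    ("braking", "release"): "normal",
}

def next_mode(mode, event):
    """Advance the drive-mode state machine by one event."""
    return TRANSITIONS.get((mode, event), mode)

mode = "normal"
for event in ["engage_cruise", "brake", "release"]:
    mode = next_mode(mode, event)
# normal -> cruise -> braking -> normal
```

Making every transition explicit in a single table is what gives a state chart its reliability: an unexpected driver action simply has no entry and cannot put the controller in an undefined state.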
Testing To date, using CompactRIO and LabVIEW for vehicle control and telemetry has proven 100 percent reliable. This means we can concentrate on improving the efficiency of the other systems on the car, such as modifying the algorithms run by the MPPTs and the battery voltage. Thanks to National Instruments, we look forward to a much-improved performance in the World Solar Challenge. Original Authors: Alisdair McClymont, Cambridge University Eco Racing Edited by Cyth Systems

  • Universal ECU System Using CompactRIO

*As Featured on NI.com Original Authors: Kristof Ceustermans, Karel de Grote University College, Department of Applied Engineering Edited by Cyth Systems The Challenge Developing a high-efficiency, low-emission adaptable engine control unit (ECU) to control engines running on standard gasoline as well as hydrogen, natural gas, and diesel. The CompactRIO Solution Using the NI LabVIEW FPGA Module and NI CompactRIO to simulate, test, and control our engine and perform on-the-fly control adaptations. Engine Control Units Engine control units (ECUs), sometimes called engine control modules, help us program and tune combustion engines to maximize fuel efficiency and provide a sustainable power method. Programmable ECUs are a necessity when working with any fuel source and accelerate research on cutting-edge fuels such as hydrogen. An ECU takes data such as the rotations per minute (rpm), the requested torque/throttle position, or the boost pressure in a turbocharged engine, and determines the optimum ignition time and period as well as the optimum fuel injection amount and time. We use additional parameters such as engine temperature to apply a correction to these values. For example, a cool engine needs a richer fuel mixture compared to an engine running at its normal operating temperature. Off-the-shelf programmable ECUs are not suited for our university's research because they are limited in programming capabilities, take a predefined set of input variables (sensors), and are usually optimized for motorsport applications. We designed a very flexible, programmable motor management system using the CompactRIO (cRIO) programmable automation controller because the platform is modular and expandable for additional sensors and contains a field-programmable gate array (FPGA). With the FPGA, the cRIO's high-speed data acquisition allows our engine's sensor inputs and outputs to be tracked in real time.
Left: NI cRIO-9038, Right: NI cDAQ-9178 Engine Sensor Simulation Before connecting our cRIO-based ECU system to our test bench engine, we verified the module's function using hardware-in-the-loop (HIL) simulation. To do this, we applied simulated sensors to our CompactRIO engine controller, controlled by a separate application based on LabVIEW and NI CompactDAQ, generating the necessary voltage and current normally applied by the sensors. We discovered that the inductive signals typically generated by a sensor connected to the camshaft and crankshaft are usually unpredictable 80 Vpp signals, whereas the NI C Series output modules are limited to 60 V. To better represent this signal and save time, we connected a real sensor to a gear and electric motor, and the application based on LabVIEW and CompactDAQ controlled the motor rpm. Then we fed the real signal to the CompactRIO ECU. Designing an ECU Using CompactRIO We used the LabVIEW FPGA Module to develop our ECU, and we can implement the system with CompactRIO using LabVIEW. We created tables with rpm and requested torque as input values, and used LabVIEW's interpolate-array VIs to find the appropriate actuator parameters such as spark ignition timing and fuel injection timing. We also acquired sensors such as manifold air pressure (MAP) and engine temperature and applied the correction parameters. With the CompactRIO setup, we can easily add more and nonstandard sensors for research as well as adapt for different engines and fuel types. CompactRIO uses the FPGA to acquire the crankshaft and camshaft angular position and generate the actuator signals at the correct time. In addition to the standard engine parameters, we plan to measure the cylinder pressure and use the data as closed-loop control parameters in our engine controller to maximize engine efficiency. The mixture is best ignited at the highest pressure level to generate the most power.
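The table lookup with interpolation works roughly like this Python sketch. The axis values and spark-advance numbers are illustrative placeholders, not real calibration data:

```python
def interp2(x, y, xs, ys, table):
    """Bilinear interpolation into a len(ys) x len(xs) lookup table."""
    def bracket(v, axis):
        # Find the cell containing v and the fractional position inside it.
        for i in range(len(axis) - 1):
            if axis[i] <= v <= axis[i + 1]:
                return i, (v - axis[i]) / (axis[i + 1] - axis[i])
        raise ValueError("value outside table axis")
    i, tx = bracket(x, xs)
    j, ty = bracket(y, ys)
    top = table[j][i] * (1 - tx) + table[j][i + 1] * tx
    bottom = table[j + 1][i] * (1 - tx) + table[j + 1][i + 1] * tx
    return top * (1 - ty) + bottom * ty

rpm_axis = [1000, 2000, 3000]
torque_axis = [10, 50]                 # Nm
spark_advance = [[10, 18, 24],         # degrees at 10 Nm (made-up numbers)
                 [8, 14, 20]]          # degrees at 50 Nm
advance = interp2(1500, 30, rpm_axis, torque_axis, spark_advance)
```

For 1500 rpm at 30 Nm, the result falls halfway between the four surrounding calibration points. The FPGA then converts this advance angle into an ignition event at the correct crankshaft position.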
First, we want to optimize the control of a normal, four-cylinder gasoline car engine. By leveraging the fast and reliable response time of the FPGA, we can focus our research on improving the efficiency of the engine by better controlling the combustion. Moreover, we will perform tests on our test engine under varying load conditions to further enhance our control algorithms. ECU Developments for the Future Hydrogen is an environmentally friendly fuel because it does not generate any carbon dioxide. We are working to adapt the ECU to control a hydrogen-fueled car engine. When using hydrogen as a fuel, the hydrogen/air ratio should be matched at low torque to obtain perfect combustion without any hydrogen or air surplus. However, at higher torque, the engine is best operated with a lean fuel mixture by applying a surplus of air to the engine, which is also called the lean burn principle. To reduce nitrogen oxide emissions, the engine should not operate at intermediate fuel/air mixtures. In this control strategy, we will open the throttle valve all the time and use a high air/fuel ratio. The requested torque is controlled by varying the fuel amount. However, when more torque is needed than the lean burn principle can deliver, we have to control the throttle valve instead and switch between the two control strategies. Currently, there is no commercially available engine control system other than the BMW Hydrogen 7 that can switch between those strategies. We plan to implement an ECU using CompactRIO to switch between our control schemes and deliver a commercially available system to interested third parties.
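The strategy switch described above can be summarized in a few lines. This is only a sketch of the decision itself: the torque threshold is an invented placeholder, and a real controller must also manage the transition between strategies smoothly:

```python
def hydrogen_strategy(requested_torque_nm, lean_limit_nm=120.0):
    """Pick a control strategy for a hydrogen engine (illustrative threshold).
    Up to the lean-burn limit: throttle wide open, torque set by fuel amount.
    Beyond it: switch to throttle control (the intermediate mixtures that
    produce NOx are skipped by jumping between the two strategies)."""
    if requested_torque_nm <= lean_limit_nm:
        return {"strategy": "lean_burn", "throttle": 1.0}
    return {"strategy": "throttle_control", "throttle": None}  # throttle is the actuator
```

For example, `hydrogen_strategy(80)` selects lean burn with the throttle held wide open, while a 200 Nm request switches to throttle control.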

  • CompactRIO Controls Fuel-Cell Hybrid Train

    *As Featured on NI.com Original Authors: Tim Erickson, Vehicle Projects LLC Edited by Cyth Systems The Challenge Controlling the operation of a 250-kW fuel-cell hybrid locomotive. The Solution Using an NI CompactRIO controller to monitor and control the safety and operation of a fuel-cell locomotive and controller area network (CAN) bus to communicate the engine status to the operator via a touch panel programmed with NI LabVIEW software. The prime mover of a traditional switch locomotive is a diesel engine between 1 and 2 MW driving an alternator that supplies power to the traction motors and locomotive auxiliary systems. These traditional switch locomotives require a high-power diesel engine, which typically is not fuel-efficient and has limited emission control. Subsequent design iterations of switch locomotives have transitioned to a hybrid-electric design, which reduces the overall emissions and fuel consumption because the engine can be downsized while the battery stores energy for high-power transients. However, a large source of diesel particulate pollution in urban areas still comes from diesel-powered locomotives in rail yards. To help alleviate this pollution, a North American public-private partnership is prototyping a fuel-cell hybrid switch locomotive for urban rail applications and replacing the diesel engine with a 250-kW net fuel-cell power plant, creating the world’s largest fuel-cell hybrid locomotive. Vehicle Projects LLC of Denver, Colorado, engineered the control system for the fuel cell using a CompactRIO embedded controller and LabVIEW graphical design software. Our goals are to reduce air pollution in urban rail applications, including yard switching associated with seaports, and to serve as a mobile backup power source for critical infrastructure during military base grid failures or civilian disaster relief operations. 
Fuel Cells and Hybrid Power Trains Fuel cells are electrochemical power devices that directly convert the chemical energy of a fuel into electric power. The cells produce electricity and water from hydrogen fuel and oxygen, which is the reverse process of water electrolysis. While fuel cells share principles of operation with batteries, they differ in that the electrochemically active materials, hydrogen and oxygen, are stored or available externally and continuously supplied to the device rather than stored in the electrodes. They are periodically refueled, like an engine, rather than recharged electrically. Like batteries, individual cells are grouped together into "stacks" to provide the required voltage or power. A fuel-cell hybrid power train uses a fuel-cell prime mover plus an auxiliary power/energy-storage device to carry the vehicle over power peaks in its duty cycle and recover kinetic or potential energy during braking. For steady-state operation, the continuous net power of the prime mover must equal or exceed the mean power of the duty cycle. Preliminary research has shown that a hybrid-switch locomotive can reduce capital and recurring operation costs. Figure 1. Top: NI CompactRIO 4-slot chassis. Bottom: NI CompactRIO 8-slot chassis. Designing a Control System Using CompactRIO We faced several design and integration challenges while developing the large hydrogen fuel-cell vehicle, including weight, packaging, and safety considerations. Harsh operating conditions, especially the shock loads that occurred during coupling to railcars, required highly rugged component systems. Additionally, the fuel-cell control system needed to communicate with the existing commercial vehicle controller to interpret operator demand and adjust fuel-cell power plant parameters to meet the power requirement. The CompactRIO embedded controller provided an ideal form factor to meet these specifications with the right I/O combination for this application.
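The sizing rule stated above (for steady-state operation, the continuous net power of the prime mover must equal or exceed the mean power of the duty cycle) amounts to a simple time-weighted average. The duty-cycle segments below are invented for illustration:

```python
def mean_power_kw(segments):
    """Time-weighted mean power of a duty cycle.
    segments: list of (power_kW, duration_s) tuples."""
    total_time = sum(t for _, t in segments)
    return sum(p * t for p, t in segments) / total_time

# Illustrative switching duty cycle: a hard pull, a light move, then idle.
cycle = [(400.0, 30.0), (100.0, 120.0), (0.0, 60.0)]
fits = mean_power_kw(cycle) <= 250.0   # 250 kW net fuel-cell prime mover
```

Here the mean power is about 114 kW, so the 250 kW fuel-cell plant covers the cycle while the battery carries the 400 kW transient peak.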
This programmable automation controller (PAC) managed and executed all power plant functions and continuously monitored the performance and safety of the hydrogen storage and fuel-cell power systems. Software Architecture Based on LabVIEW A CompactRIO embedded controller running the LabVIEW Real-Time and LabVIEW FPGA modules controls the fuel-cell power plant operation. The user monitors the control system via a touch panel installed in the locomotive cab. The control application consists of modular control algorithm VIs that communicate with each other and the field-programmable gate array (FPGA) I/O system using a tag-based architecture, so that we can refer to each I/O point by its assigned name within the LabVIEW application. Each tag has properties associated with it, including alarm limits, scaling (converting from voltage to engineering units), and events such as when the user wants it to log to disk. We implemented a programmable logic controller (PLC) mentality in our PAC-based system. Developing the Perfect Control Platform with LabVIEW and CompactRIO We chose LabVIEW and CompactRIO because the NI C Series modules with integrated signal conditioning helped us implement fast monitoring of the various I/O points while connecting to a wide range of specialty sensors such as flowmeters and pressure sensors. Additionally, we performed complex control algorithms beyond simple proportional-integral-derivative control at very fast loop rates. Some of our control algorithms included mathematical models that we implemented with LabVIEW, which we could not have developed using less flexible environments such as a PLC platform. Furthermore, we achieved the fast loop rates that we required because we had the ability to place some of the control algorithms on the FPGA. Technical Specifications LabVIEW 2020 NI CompactRIO 8-slot Chassis Author Information: Tim Erickson Vehicle Projects LLC
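The tag-based I/O architecture described in this case study can be sketched as a small data structure. The field names and numbers here are assumptions for illustration, not the actual LabVIEW implementation:

```python
from dataclasses import dataclass

@dataclass
class Tag:
    """One named I/O point with scaling and alarm limits."""
    name: str
    raw_volts: float = 0.0
    scale: float = 1.0                  # volts -> engineering units
    offset: float = 0.0
    alarm_low: float = float("-inf")
    alarm_high: float = float("inf")

    @property
    def value(self):
        """Scaled engineering-unit value of the I/O point."""
        return self.raw_volts * self.scale + self.offset

    @property
    def in_alarm(self):
        """True when the scaled value is outside the alarm limits."""
        return not (self.alarm_low <= self.value <= self.alarm_high)

# Hypothetical tank-pressure tag: 70 bar per volt, alarm above 350 bar.
pressure = Tag("H2_TankPressure", scale=70.0, alarm_high=350.0)
pressure.raw_volts = 4.2    # scales to 294 bar, inside limits
```

Referring to every I/O point by name, with scaling and alarm limits attached to the tag rather than scattered through the control code, is what lets the modular VIs and the FPGA layer share one consistent picture of the plant.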

  • High-Voltage Dielectric Test System for Magnetic Couplers with CompactRIO

*As Featured on NI.com Original Authors: Flavio Floriani, INTEK S.p.A. Laboratory, Sector Manager Edited by Cyth Systems The Challenge INTEK needed to invent and build a fully automatic measurement system that could test up to 50 magnetic couplers simultaneously while they are subjected to a high voltage (up to 8 kV rms) and placed in a 150 °C oven. The system also needed to record the mean time to fail and monitor the overvoltages generated when a device breaks. The Solution INTEK used the CompactRIO platform with FPGA technology and LabVIEW to develop a system with four functional levels. The system combines the power of NI hardware with the flexibility of LabVIEW. The use of a magnetic coupler, or the device under test (DUT), is similar to that of an optocoupler. It must guarantee the insulation between two points at different potentials to satisfy the safety standard (SOT-24 package). A test required in the reference standard is to estimate the mean time to fail (MTTF) by applying a 50 Hz sinusoidal high voltage and recording the time to breakdown. Once a sufficient amount of data is collected, we can use statistical techniques to parametrize a Weibull distribution and predict lifetime. When a failure occurs, the DUT looks like a short circuit (an internal discharge path shorts the two points), and when the DUT is in normal condition it looks like a small capacitor.
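Once the Weibull shape (β) and scale (η) parameters have been fitted to the recorded breakdown times, the MTTF follows from the closed form MTTF = η · Γ(1 + 1/β). A minimal sketch, with illustrative parameter values rather than INTEK's data:

```python
import math

def weibull_mttf(eta, beta):
    """Mean time to fail of a two-parameter Weibull (eta = scale, beta = shape)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# Made-up fitted parameters: scale 1000 h, shape 2 (wear-out failures).
mttf_hours = weibull_mttf(eta=1000.0, beta=2.0)
```

With β = 1 the distribution reduces to an exponential and the MTTF equals η; β > 1 models the wear-out behaviour expected from dielectric aging.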
Magnetic Couplers (Credit: https://www.magnetictech.com/magnetic-couplings/) Right: Magnetic Coupler Test LabVIEW User Interface After three years of study on how to manage the AC high voltage (up to 8 kV rms) on these small DUTs (simpler 15-position equipment has run continuously for two years and collected a lot of data), we can focus on the main issues emerging from these kinds of tests:
- How to cut the current that flows through a DUT when it fails, without generating an extra voltage (which could damage all the other DUTs)
- How to use low-voltage components to keep equipment dimensions small and save on cost
- How to manage wiring efficiently and safely merge high-voltage circuits with low-voltage circuits
Combining the results of these studies with NI technologies, we developed a system architecture that includes a cRIO-9035 controller and two NI-9205 analog input modules to read the current in each of the 50 circuits and read the high voltage. We acquired the current by reading the voltage drop on shunt resistors and acquired the voltage by reading the output of a customized high-voltage divider developed specifically for this application. The system also includes two NI-9476 digital output modules to command the 50 relays that cut off the current when a breakdown occurs. We also used an NI-9217 RTD module to acquire the temperature inside the oven through a PT100. We developed a four-level software architecture using LabVIEW. An FPGA runs the part of the code that detects the fault current and commands the cut-off relay. Because we need speed and determinism, the FPGA is perfect for this application. With some optimization, we finally read the RMS fault current (calculated on a 60 ms period) and ensured fault clearing in less than 100 ms. It is important to clear the fault quickly because the current damages the DUT, and the customer needs to analyse the DUTs after the test to study and improve the technology.
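The FPGA's fault detection (RMS current over a 60 ms window compared against a trip threshold) can be sketched in Python. The sample rate and threshold below are invented for the example; the real logic runs deterministically on the FPGA:

```python
import math

def rms(samples):
    """Root-mean-square of a sample window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def should_trip(window, trip_amps):
    """True when the windowed RMS current exceeds the fault threshold."""
    return rms(window) > trip_amps

# 60 ms of a 50 Hz sine at 10 kS/s = 600 samples = 3 full mains cycles.
fs, f, amp = 10_000, 50, 1.0
window = [amp * math.sin(2 * math.pi * f * n / fs) for n in range(600)]
# RMS of a sine over whole cycles is amplitude / sqrt(2), about 0.707 A here.
```

Averaging over whole 50 Hz cycles is what makes the RMS estimate stable; a 60 ms window holds exactly three cycles, and the trip decision plus relay actuation then fit comfortably inside the 100 ms clearing budget.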
We developed a scope-like function with a pre-trigger capability and ran it in a parallel loop. Every 100 ms, the scope checks the instantaneous maximum voltage value and records and saves the waveform if the alarm threshold is exceeded. We can easily edit all the scope parameters, such as pre-trigger time, sample rate, and more, in the front panel. We used DMA and FIFO structures to achieve high speed and share data in the project. We also monitored the oven temperature at a lower sample rate in another parallel loop and compared it with an alarm threshold. Every time a DUT fails, the FPGA VI changes the state of a shared variable in the project, and the VI running in the real-time environment detects it. By using the real-time features, we could easily create a VI that automatically acquires the time to fail and saves it directly to a file on the CompactRIO system. This makes it easy to download that file during or at the end of the test and have all the required information. The real-time VI is an auto-running executable on the system and has no user interface, which ensures stability. We created another VI inside the project and converted it to an EXE file. This VI can be run on any PC connected to the same LAN to monitor the state of the whole system (alarms, state of the DUTs, overvoltages, temperature, and more). The technicians can easily monitor the test status every day or when needed (a test session can go on for longer than two months). It can also be used during debugging. We also created a VI with the channel status only and converted it to a web page. The customer can connect directly to the system and know the status of the DUTs, but cannot interfere with the apparatus settings.
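The pre-trigger behaviour described above, continuously recording into a circular buffer so that samples from before the trigger instant can be saved, can be sketched like this. It is a simplified host-side model with hypothetical parameters; the actual implementation uses FPGA FIFOs and DMA:

```python
from collections import deque

class PreTriggerScope:
    """Continuously keep the most recent `pretrigger` samples in a circular
    buffer; on trigger, capture `posttrigger` samples and return both."""
    def __init__(self, pretrigger, posttrigger, threshold):
        self.pre = deque(maxlen=pretrigger)
        self.posttrigger = posttrigger
        self.threshold = threshold

    def run(self, samples):
        it = iter(samples)
        for x in it:
            if abs(x) > self.threshold:          # trigger condition met
                post = [x]
                for _ in range(self.posttrigger - 1):
                    post.append(next(it, 0.0))   # pad with zeros if input ends
                return list(self.pre) + post     # pre-trigger + post-trigger
            self.pre.append(x)                   # no trigger: keep buffering
        return None                              # alarm threshold never exceeded

scope = PreTriggerScope(pretrigger=4, posttrigger=3, threshold=5.0)
wave = scope.run([0, 1, 2, 1, 0, 9, 8, 7, 1, 0])
print(wave)  # [1, 2, 1, 0, 9, 8, 7]
```

The fixed-length deque is the software analogue of the circular buffer a hardware scope uses: old samples fall off automatically, so the pre-trigger history is always available at the moment the threshold is crossed.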

  • Increasing Automation to Reduce Post-Silicon Characterization by Over 60%

*As Featured on NI.com Original Authors: Vinodh J Rakesh, Cypress Semiconductor Technology India Pvt. Ltd. Edited by Cyth Systems The Challenge Reducing the turnaround time for the bench characterization of diverse System-on-Chip (SoC) products designed and developed at different Cypress centers in Asia and America. The Solution Developing an ATE-like characterization platform called CyMatrix using NI PXI Source Measure Units, PXI Matrix Switch Modules, and FlexRIO modules with built-in support for up to 4X parallelism without sacrificing the flexibility or the signal integrity of a bench characterization environment. Cypress Semiconductor Corporation (NASDAQ: CY) is a leader in advanced embedded system solutions for the world’s most innovative automotive, industrial, home automation, consumer electronic and medical product applications. Our Microcontroller and Connectivity Division (MCD) focuses on high-performance microcontroller units (MCUs), analog, wireless, and wired connectivity solutions. The portfolio includes Traveo™ automotive MCUs, PSoC® programmable MCUs, and general-purpose MCUs with ARM® Cortex®-M4/M3/M0+/R4 CPUs, analog PMIC Power Management ICs, CapSense® capacitive-sensing controllers, TrueTouch® touchscreen, Wi-Fi®, Bluetooth®, Bluetooth Low Energy and ZigBee® solutions. It also features a broad line of USB controllers, including solutions for the USB-C and USB Power Delivery (PD) standards. MCD generated revenue of about $1.4B in FY2017 while serving automotive customers like Continental, Denso, Visteon, Toyota, and BMW. Time-to-Market Pressure Driving the Need for Efficient Silicon Characterization Around 2011, driven by intense time-to-market pressure and customer requirements in the mobile touchscreen market, Cypress reduced the product development cycle time drastically by following a novel IP-based design approach. This approach decreased the time taken to tape out a chip and reduced the number of bugs we identified post-silicon.
Because of these factors, we saw a sudden spurt in the number of new products being launched. The design team was capable of taping out more products, but the characterization team wasn’t ready to characterize all the products on time. The characterization team at Cypress is responsible for guaranteeing the datasheet specifications across PVT before releasing the product for volume manufacturing. On average, each product has about 400 datasheet specifications. To guarantee fitness for volume production, we must measure each of these 400 specifications on anywhere between 6 and 50 DUTs. To meet the customer schedule requirements, we must complete the entire characterization of the product in four to eight weeks from silicon availability, depending on the complexity of the product. If we do not complete the characterization of the product within four to eight weeks, we risk delays in product launches, potentially leading to loss of customers and millions of dollars of business. Driven by the business requirement to be first and relevant in the market space, we knew we had to increase the efficiency of bench characterization. Left: Transition From a Bench Setup With Six Discrete Box Instruments to the Standardized CyMatrix Setup With 34 Instruments, Right: Architecture of the LabVIEW Characterization Process Boosting Characterization Efficiency by Increasing Parallelism, Automation, and Standardization of the Bench Characterization Setup The characterization team explored three ways to achieve higher efficiency: (1) increased parallelism, (2) increased automation, and (3) increased code reuse/standardization. We had to scale the traditional bench characterization setups, which were limited to one DUT at a time with 50 percent automation, to achieve truly parallel characterization that gives every DUT its own dedicated instruments, or at least enough instruments to allow for time-division multiplexing, with > 90% automation.
To characterize a typical PSoC DUT, we have to do current/voltage/timing measurements on 64 signal pins and 4 power pins. Therefore, we need 68 source measure units (SMUs) to perform all applicable tests. Factoring in 4X parallelism, we would need a characterization setup including 272 SMUs. Conventional production-floor automated test equipment (ATEs) offer channel counts of this order, but they often fall short on the accuracy required for bench characterization. Besides, replicating an ATE architecture for the bench is prohibitively expensive. Although a DUT has 68 pins across which the measurements must be made, the measurements need not be done simultaneously across all these pins. A careful study of all the test cases showed that we need access to, at most, 4 signal pins and 4 power pins simultaneously. Hence, we built the entire characterization setup with just 32 high-precision SMUs, which were multiplexed across all 272 pins of the 4 DUTs being tested using a switch-matrix arrangement. The test setup includes two NI 2532 PXI Matrix Switch modules, which are 512-crosspoint switch matrices. The LabVIEW-based characterization program makes the requisite connections at run time between the DUT pins and the SMU channels based on the test requirements. The architectural diagram is shown in Figure 1. Despite the use of switch matrices, we still need 32 SMUs for the complete bench characterization setup. We ruled out traditional benchtop SMUs because they occupied too much space and involved additional cabling efforts. Instead, we chose to use NI’s PXI 4-channel SMUs, and with eight of these PXI modules, we met all the setup requirements. Some products, such as the Traveo II Automotive MCUs, could also function as multiple protocol masters—DDR HSSPI, Hyperbus, SPI and SDHC. Hence, the characterization process for these DUT variants involves source synchronous measurements.
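The run-time routing described above, closing crosspoints so a small pool of SMU channels reaches whichever DUT pins a test needs, can be modelled as a simple allocator. The class and pin names below are hypothetical; the real system drives the NI PXI matrix switches from LabVIEW:

```python
class CrosspointRouter:
    """Allocate channels from a shared SMU pool to requested DUT pins,
    modelling the crosspoint connections closed at test run time."""
    def __init__(self, num_smu_channels):
        self.free = list(range(num_smu_channels))
        self.routes = {}                      # dut_pin -> SMU channel

    def connect(self, dut_pin):
        if dut_pin in self.routes:            # pin already routed: reuse
            return self.routes[dut_pin]
        if not self.free:
            raise RuntimeError("no SMU channel available")
        channel = self.free.pop(0)
        self.routes[dut_pin] = channel        # close crosspoint (pin, channel)
        return channel

    def disconnect(self, dut_pin):
        channel = self.routes.pop(dut_pin)    # open crosspoint
        self.free.append(channel)             # return channel to the pool

# 32 SMU channels shared across hundreds of pins; a test touches 8 pins at once
router = CrosspointRouter(num_smu_channels=32)
channels = [router.connect(f"dut0_pin{p}") for p in range(8)]
print(channels)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Because a test never needs more than a handful of pins at once, a 32-channel pool can serve 272 pins, which is exactly the economy the switch-matrix arrangement buys.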
In source synchronous measurements, the DUT provides the clock and the external instrument drives the data with respect to this input clock. Measurements also included shmoo’ing the data driven by the instrument with respect to the input clock at ~100 ps resolution while remaining protocol aware. Conventional pattern generators, high-speed digital I/O modules, and, in some cases, ATEs do not have good timing responses to external discontinuous clocks. The timing characterization of protocol masters is best implemented with actual protocol-aware slaves that can respond synchronously to the input clock. However, these slaves do not have the capability to shmoo the data with respect to the clock. Also, using slave devices often requires the use of external discrete components such as level shifters to ensure voltage compatibility between the master and the slaves, programmable delay chains to shmoo the delay, and so on. Such setups, including multiple slave devices and associated external components, are complex and prone to system-level issues. To circumvent the problem of using actual slave devices, we turned to the NI PXI FlexRIO product family with programmable FPGAs to emulate the slave functionality. The FlexRIO devices also allowed us to implement programmable delays with ~40 ps resolution for fine delays and ~120 ps resolution for coarse delays. The front-end FlexRIO adapter module fulfilled the functionality of the level shifter between the DUT and the emulated slave. An NI Alliance Partner, Soliton Technologies Pvt. Ltd., helped us implement the slave functionality and programmable delays on the FPGA. The transition from a single-site characterization setup to a multisite parallel characterization setup offered many benefits, so our engineering teams in Bangalore, Seattle, and Colorado immediately adopted this multisite platform.
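A shmoo of data-to-clock delay, as described above, amounts to stepping a programmable delay across a range and recording pass/fail at each point. The sketch below uses an invented pass window and a ~40 ps step for illustration only:

```python
def shmoo_delay(test_fn, start_ps, stop_ps, step_ps):
    """Sweep a programmable delay, record pass/fail at each setting, and
    return all points plus the passing window (min/max passing delay)."""
    points = [(d, test_fn(d)) for d in range(start_ps, stop_ps + 1, step_ps)]
    passing = [d for d, ok in points if ok]
    window = (min(passing), max(passing)) if passing else None
    return points, window

# Hypothetical DUT: transfers pass when data lands 200-600 ps after the clock edge
def dut_transfer_ok(delay_ps):
    return 200 <= delay_ps <= 600

points, window = shmoo_delay(dut_transfer_ok, 0, 1000, 40)  # ~40 ps fine step
print(window)  # (200, 600)
```

In the real setup, `test_fn` would program the FPGA delay chain and run a protocol-aware transfer; the recovered window gives the setup/hold margin for the source-synchronous interface.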
Before that, each center maintained setups with different instrumentation and had its own repository of code that was built and maintained only within these individual sites. The earlier approach limited code reuse and, at times, made replicating issues across sites difficult because of the lack of standardization in automation programs, characterization boards, and the instrumentation and interconnects. Standardization was immediately identified as the way forward, but it also presented some business constraints, like reusing existing instrumentation investments. To address this, we developed a framework based on LabVIEW object-oriented programming (LabVIEW OOP) concepts that provides an abstraction layer above the individual device drivers on which we built the entire characterization automation program. This allowed engineers at different sites to seamlessly switch between instruments, such as SMUs, digital multimeters (DMMs), and arbitrary waveform generators (AWGs), from different vendors by simply editing a single entry on an Excel spreadsheet. Translating Higher Characterization Efficiency into Organizational Benefits The 4X parallelism has decreased characterization times, which has allowed us to identify and fix bugs more rapidly. The newer characterization setup has directly reduced our characterization time by over 60 percent. The characterization routine for medium-complexity products takes less than 4 weeks now as compared to 10 weeks earlier. We have also reduced the characterization time for highly complex products from 20 or more weeks to under 8 weeks. Our characterization team’s headcount has not changed much since 2011. However, the same team can now characterize more than five products in a calendar year instead of the one or two prior to automation of the setup. Standardization and code reuse have helped us reduce the time we need to prepare for new silicon, avoid bugs, and improve the overall reliability of the characterization setup.
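The abstraction-layer idea described above can be sketched in Python: test code programs against a vendor-neutral interface, and one configuration entry selects the concrete driver at run time. The class names and the returned readings are hypothetical stand-ins for real instrument drivers:

```python
from abc import ABC, abstractmethod

class SMU(ABC):
    """Vendor-neutral interface the characterization code programs against."""
    @abstractmethod
    def measure_current(self, pin: str) -> float: ...

class VendorASMU(SMU):
    def measure_current(self, pin):
        return 0.00105        # stand-in for a real vendor A driver call

class VendorBSMU(SMU):
    def measure_current(self, pin):
        return 0.00104        # stand-in for a real vendor B driver call

DRIVERS = {"vendor_a": VendorASMU, "vendor_b": VendorBSMU}

def load_smu(config_entry: str) -> SMU:
    """One spreadsheet cell names the driver; the test code never changes."""
    return DRIVERS[config_entry]()

smu = load_smu("vendor_b")    # editing the entry swaps the instrument
print(smu.measure_current("VDD"))
```

This mirrors the LabVIEW OOP design: the characterization program calls only the abstract interface, so swapping a benchtop DMM for a PXI module is a configuration change, not a code change.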
Though engineers and teams from multiple sites working in different time zones contribute to the overall characterization efforts of a product, we have seen instances when bugs reported from one site were easily replicated at the other sites by the next day. This is a true testament to how standardization has helped increase our engineers’ productivity, as they spend more time on troubleshooting the reported bug instead of on reproducing bugs.

  • How CompactRIO Compares to a PLC

Introduction In the world of industrial automation and control systems, the choice between different hardware platforms can be a critical decision. Programmable Logic Controllers (PLCs) have long been the workhorse of the industry, but newer technologies like CompactRIO (cRIO) and Programmable Automation Controllers (PACs) have been gaining ground. In this article, we will explore the differences between CompactRIO and PLC systems to help you make an informed decision for your industrial automation needs. Before we delve into the comparison, it's essential to understand what CompactRIO and PLC systems are and their primary functions. Programmable Logic Controller (PLC) PLCs are specialized industrial computers designed for controlling industrial processes and machinery. They execute control functions based on logic and timing, making them well-suited for applications requiring real-time control and reliability. PLCs are programmed using ladder logic or other programming languages specifically designed for automation. PLCs have been controlling industry and processes for over 50 years, but are limited in their speed and processing power. Programmable Automation Controllers Programmable Automation Controllers (PACs) are advanced industrial control systems that combine the real-time capabilities of PLCs with the computational power and flexibility of PCs. PACs are characterized by their ability to handle complex control tasks, high-speed data processing, and custom algorithms, making them ideal for applications where standard PLCs might fall short. PACs provide a versatile platform for designing and implementing sophisticated control strategies and seamlessly integrating with a wide range of sensors and devices. Whether it's complex automation, data-intensive processing, or demanding industrial environments, PACs offer a powerful and adaptable solution for modern control and automation systems.
CompactRIO CompactRIO is a PAC hardware platform developed by National Instruments that combines a real-time microprocessor, a Real-Time Linux OS, and Field-Programmable Gate Array (FPGA) technology. This platform is known for its flexibility and is often used for applications requiring high-speed data acquisition, complex signal processing, and integration with other systems. CompactRIO systems can be programmed using LabVIEW, which makes Data Acquisition (DAQ) and computational algorithms very easy for engineers. Comparison Factors Now, let's compare CompactRIO and PLC systems across various factors to help you make an informed choice: Performance Comparison PLCs are well-known for their reliable real-time capabilities, making them suitable for many industrial applications. CompactRIO is also a deterministic and reliable real-time controller, but with a PC-class microprocessor (like an Intel Core processor) offering significant processing power and flexibility. Furthermore, the cRIO includes an FPGA, which can be programmed with algorithms and logic that execute at MHz clock rates, meaning it can close input-to-output loops in nanoseconds to microseconds. This makes cRIO an excellent choice for applications requiring high-speed data processing and advanced algorithms. Flexibility CompactRIO is highly flexible due to its FPGA, which not only allows you to implement custom signal processing and control algorithms, but also enables Reconfigurable I/O (RIO), which refers to the ability to swap modules and modify the I/O of the system without reprogramming. Programming Environment PLCs typically use ladder logic or Structured Text, while CompactRIO systems are programmed using LabVIEW or other programming languages. LabVIEW comes with hundreds of engineering and mathematical algorithms and code libraries, which makes industrial and control system applications powerful with minimal programming.
The result can be a fast, complex, and powerful software application that can do much more than a PLC. Connectivity Both CompactRIO and PLC systems offer a wide range of communication options. However, the cRIO's microprocessor can be particularly advantageous when integrating with industrial devices and standard networks. A CompactRIO can interact with databases, send emails, or write files to a hard drive or a network. It can also communicate with third-party instrumentation or external devices using serial (like RS-232/422/485), Ethernet, or other protocols. Yet cRIO is also designed to work with industrial buses such as Modbus, Fieldbus, EtherCAT, DeviceNet, and more. Lastly, with the addition of bespoke modules and interfaces, it can handle custom communication protocols like Fiber Optic or ISM-band radio. Size CompactRIO systems are compact, as the name suggests, in comparison to a computer or other larger device with similar computing power. Yet they are about the same size and shape as a PLC. Both come in various sizes, and with varying numbers of module slots, but larger ones can be bulkier. Cost PLCs are generally considered very cost-effective for simple control applications. CompactRIO, with its advanced processing capabilities, tends to be pricier and is generally better suited for complex control tasks where the cost can more easily be justified by the results. Additionally, cRIO can potentially save costs in the long run by reducing the need for additional hardware or complex workarounds. Conclusion In the CompactRIO vs. PLC debate, there's no one-size-fits-all answer. Your choice should depend on the specific requirements of your application. If you need real-time control, reliability, and simplicity, a PLC may be the right choice. However, if your application demands high-performance data processing, custom algorithms, and advanced connectivity, CompactRIO can provide the necessary flexibility and power.
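As one concrete example of an industrial bus a controller like the cRIO can speak, the sketch below builds a Modbus TCP "Read Holding Registers" request frame per the Modbus specification. It only illustrates the wire format; it is not NI's driver code, and the register address and unit ID are arbitrary example values:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.
    MBAP header: transaction id, protocol id (always 0), remaining byte
    count, unit id; PDU: function code, start address, register count."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Read 3 registers starting at 0x006B from unit 0x11
frame = modbus_read_holding_registers(transaction_id=1, unit_id=0x11,
                                      start_addr=0x006B, count=3)
print(frame.hex())  # 0001000000061103006b0003
```

In practice this frame would be sent over a TCP socket to port 502; the point here is simply that an open, byte-level protocol like Modbus is straightforward to reach from a general-purpose processor.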
Both CompactRIO and PLCs have their strengths and weaknesses, and the right choice will depend on your unique industrial automation needs.

  • NI Platform Used to Develop Unified Test Architecture for Commercial Aircraft

As Featured on NI.com Original Authors: Scott Christensen, Collins Aerospace Edited by Cyth Systems The Challenge Collins Aerospace needed to create a “cradle to grave” test architecture for electromechanical systems that was flexible enough to use for a wide range of aerospace controller and component tests across the product development cycle on new and existing programs. The Solution We standardized on the NI PXI and CompactRIO hardware platforms and LabVIEW software to provide a modular test architecture that could be easily configured, customized, and maintained. We collaborated with NI Alliance Partners Wineman Technology on the software and Sierra Peaks on the instrumentation and actuation. Aerospace Test Needs Aerospace line replaceable units (LRUs), components, and controllers require rigorous test, and aerospace companies like Collins Aerospace must test a wide range of configurations and variants of one type of part for a variety of OEM vehicle programs, from business jets to commercial airliners to military aircraft. Collins Aerospace designs many of the components used in aircraft flight systems. The actuation group designs systems that translate cockpit control commands into movement of all the leading and trailing edge control surfaces (flaps, slats). These systems are composed of slat and flap electronic control units (SFECUs), central power drive units (PDUs), and associated power transmission elements like torque tubes and gear boxes. All these system components need to be tested individually and in combination at the system and aircraft levels. This test configuration and LabVIEW user interface (UI) management screen demonstrates the variety of devices under test that can be handled with this common architecture and how an application can be configured on the subsystem level. Challenges of a Traditional Test Approach The design and test methodology from component to component is relatively similar.
However, those of us in the actuation group were operating many test stands for various types of LRU test, including development, qualification, production, and repair. Furthermore, we were operating with hydraulic loading on big test systems (both for internal use and for customers) that were time-consuming and costly to reconfigure. We were losing time re-creating architectures and procedures to run different tests across the product development cycle. For example, existing stands used hydraulic loading for mechanical LRU test. We were seeing a lot of similar rework across tests; we needed to replumb hydraulic systems and rewire whenever we wanted to reconfigure our test setup. Even for electronic test, the test stands required actual flight hardware, which made the test solutions rigid and inflexible. Automated test was minimal, and support for multiple configurations was limited. The “traditional approach” to test was costly and time-consuming. Our group faced aggressive schedule and resource constraints that simply did not allow us to adapt the fragmented existing architecture quickly enough to meet the growing requirements. Also, Collins Aerospace still needed a competitive advantage over other suppliers to reduce cost and schedule to win future programs, so we were driven to develop a new test architecture. The Benefits of a Common Test Platform Across the Design Cycle The upfront work and investment we put into our new distributed, deterministic, and dynamic (D3) architecture was a forward-looking approach that will pay off in the years to come. We saw great potential to optimize tests by standardizing on a common test architecture for all tests across the design cycle for a component.
We implemented the following test types with the D3 architecture:
- model-in-the-loop (MIL), software-in-the-loop (SIL), and hardware-in-the-loop (HIL) tests
- hardware and software validation and verification (V&V) test (fault insertion)
- life-cycle durability test
- system integration lab (iron bird) test
- aircraft-level system integration test
- high-lift system test rig (HLSTR) including aircraft-level physical system test
- system test rig (STR) including performance, endurance, and fatigue tests
- slat/flap controller rig (SFCR) including software development, fly-the-box, software functional, software regression, system, and automated production (acceptance test procedures or ATP) tests
- physical tests including single wing and “right side” emulating tests based on total loading on the left side

We achieved several goals. First, we created a single common test platform that provides a “cradle to grave” test architecture and multipurpose tester. We used the same SFECU rig across the entire design “V” for development, ATP, iron bird test, system integration test stand (SITS) test, full production electronic controller test, and full production mechanical hardware test. Second, we incorporated modular hardware that is maintainable and reconfigurable. Now we can easily expand for larger system qualifications, reconfigure for different systems, and avoid hard wiring or plumbing to connect system components. Third, we have an open software architecture that is easy to integrate. The reflective memory architecture allows us to completely control our test stands with memory reads and writes. We can use this architecture separately or integrate it into larger test systems, and we can achieve distributed control that allows for more processing power as the system grows. Collins Aerospace Cost, Time, and Labor Savings By using NI’s distributed measurement and control products, we were able to cut test reconfiguration times from weeks to a day.
Our D3 architecture is multipurpose (same load tables used for system, ATP, and iron bird; same SFECU rig used for development, ATP, iron bird, SITS, ESIM), modular (no hard wiring or plumbing to connect; software and hardware are built on designs common to multiple aircraft), easy to integrate (open software architecture; script writing in any language; proven RFM architecture that allows the test stand to run in segregated or integrated mode), maintainable (eliminates traditional harness construction through extensive use of printed wiring boards), and forward looking (our team has patentable designs). By developing a common test platform to address our HIL, V&V, system integration, and production test needs across a variety of projects and even aircraft architectures, we were able to cut our test equipment development time while positioning ourselves to better address future needs, including a more digital test lab. We’ve saved months of development time and hundreds of thousands of dollars on new platforms while operating with less test lab labor. We’ve perfected an architecture that lets an entire test lab run on a series of mobile common front ends, which eliminates the problem of aging, stationary electronics designed for a single function on a single mechanical test bed. Now we are striving to complete the integration of this new architecture into our daily technical and business systems to achieve a fully automated test lab where concerns over labor rates are a thing of the past, which frees investment for further innovation.

  • Proton Therapy Cancer Treatment Controlled using NI Single-Board RIO

*As Featured on NI.com Original Authors: Jacob McCulley, ProNova Solutions Edited by Cyth Systems The Challenge Developing a highly accurate and precise proton beam control solution to deliver a prescribed radiological dose to a specific location within a tumor. The Solution Implementing intensity-modulated pencil beam scanning in the ProNova SC360 by using Single-Board RIO solutions to meet the monitoring and control requirements to safely and effectively deliver the radiological dose to treat a tumor. About ProNova More than 1.6 million people will be diagnosed with cancer this year in the United States, with 320,000 of those cases eligible for proton therapy. However, with just 24 existing proton therapy centers, only 5 percent of eligible patients can receive this treatment. ProNova aims to make proton therapy a widely available cancer treatment option by delivering a lower-cost, more compact, and more energy-efficient proton therapy system without sacrificing clinical capabilities. About the SC360 We designed the SC360 proton therapy system to provide the flexibility required to support 1 to 5 treatment rooms, allow for different treatment room configurations, meet individual customer needs, and enable easy integration with future R&D projects. This modular approach lends itself nicely to the design of a distributed control system with NI reconfigurable I/O (RIO) technology. This technology, in conjunction with the LabVIEW Real-Time Module and the LabVIEW FPGA Module, provides the hardware flexibility and programming capabilities needed to rapidly develop advanced embedded monitoring and control solutions for the SC360 without sacrificing the performance requirements of a proton therapy system. Consequently, we used CompactRIO and Single-Board RIO solutions extensively throughout the SC360 for magnet control, vacuum control, beamline diagnostics, and dose delivery.
The SC360 offers a highly accurate and precise method for targeting tumors by using intensity modulated proton therapy (IMPT) with pencil beam scanning (PBS). This technology helps doctors treat large, non-contiguous targets with improved local control, thus sparing sensitive organs and normal tissue from unnecessary radiation exposure. This allows proton therapy to provide a dosimetric advantage in more than 80 percent of all external beam radiation treatment cases. Left: Simplified diagram of dose delivery system, Right: Dose Delivered to a Target Along Z-Axis (left) and XY-Axis (right). SC360 Dose Delivery System The Dose Delivery System, or DDS, is the SC360 subsystem that accurately and precisely delivers protons from the beamline to a specific target in the patient. We implemented IMPT with PBS in the DDS using three sbRIO-9626 embedded controllers. The individual controller responsibilities include dose monitoring, beam control, and beam position monitoring (Figure 2). A PBS treatment plan contains a set of locations, or spots, in 3D space (horizontal-X, vertical-Y, depth-Z) that are each prescribed a specific radiological dose. The spot produced by the proton beam is between 4 and 8 mm, depending on depth, and must be delivered within 1 mm of the prescribed location. Modulating the intensity of the proton beam adds a time dimension to the treatment plan by controlling the beam current to deliver each spot in ~5 ms. We used the Single-Board RIO FPGA and LabVIEW FPGA Module for each of these applications to meet the timing requirements for spot delivery and the response times required to safely remove the beam from the treatment room during spot transitions or following a safety interlock. Additionally, hard-wired signals pass between the FPGAs of each of the control components to trigger spot completion, spot advancement, and treatment faults.
Each DDS module uses LabVIEW Real-Time to receive treatment plans, process spot treatment results, and report treatment results back to the treatment room master control component. Beam Control We control the vertical and horizontal position of the proton beam from the beamline to the patient using specialized scanning magnets. The Single-Board RIO device dedicated to beam control is responsible for controlling the magnetic fields required to deflect the proton beam to the desired spot location. Additionally, this controller provides the beam intensity set point required to maintain spot durations of 5 ms. We can sample the analog I/O available on the sbRIO-9626 at 10 kHz to continuously monitor critical feedback signals (control signals, load voltages, currents, fields, temperatures, and water flow) related to vertical, horizontal, and intensity control. The beam control system safely removes the proton beam from the treatment room if any of the monitored signals fall outside set point tolerances. The beam control module is triggered to adjust the magnet fields for the next spot when the dose has been delivered for the current spot. Upon verification that the monitored signals have settled at the new set point, the treatment can continue. We can complete and verify this spot transition process in <800 µs. Dose Monitoring We monitor the amount of charge collected on two redundant dose planes located between the output of the beamline and the patient to control the dose delivered to a spot. We used the sbRIO-9626 to meet the analog I/O and digital I/O requirements for sampling the dose plane signal conditioning circuits. Additionally, we use the onboard FPGA to monitor the delivered dose at frequencies up to 1 MHz, and provide the response time required to safely remove the proton beam from the treatment room upon fulfilling the prescribed dose or in the event the delivered dose falls outside of treatment tolerances. 
This level of precise control makes it possible to deliver a radiological dose within 1 percent of the prescribed dose. The dose monitoring module also synchronizes spot advancement with other DDS modules upon the delivery of a prescribed dose. We accomplish this by 1) removing the beam from the room when the prescribed amount of dose is delivered, 2) triggering the beam control and beam position monitoring modules once the spot has been completed, 3) receiving notification from the beam control and beam positioning monitoring modules upon successful spot transition, and 4) completing the spot by verifying the delivered dose is within treatment tolerances. Once the spot transition has completed (<1 µs), the treatment plan resumes on the next spot if all components are confirmed ready for safe beam delivery. This process incrementally advances the control components through a treatment plan until a dose has been delivered to all prescribed spots. What’s Next? ProNova received FDA approval for the SC360 earlier this year (2020) and plans to start treating the first patients later this year at the Provision Center for Proton Therapy in Knoxville, Tennessee. We have planned future SC360 installations for cities across the United States, Europe, and Asia. ProNova strives to improve upon the clinical advantages of proton therapy and introduce advanced technologies that help make this treatment option a reality for more cancer patients.
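The four-step spot-advancement handshake above can be modelled as a small sequencer: for each spot, the beam is removed at the prescribed dose, the magnets transition and settle, and the delivered dose is verified against the 1 percent tolerance before the plan advances. This is a conceptual Python sketch with invented plan data, not the SC360's FPGA logic:

```python
from enum import Enum, auto

class Outcome(Enum):
    COMPLETE = auto()
    FAULT = auto()

def deliver_plan(spots, tolerance=0.01):
    """Advance through a treatment plan spot by spot: after the beam is
    removed at the prescribed dose, confirm the magnets settled on the new
    spot, then verify the delivered dose before treating the next spot."""
    treated = []
    for spot in spots:
        if not spot["settled"]:                      # magnets failed to settle
            return treated, Outcome.FAULT            # remove beam, halt plan
        error = abs(spot["delivered"] - spot["prescribed"]) / spot["prescribed"]
        if error > tolerance:                        # dose outside 1% tolerance
            return treated, Outcome.FAULT
        treated.append(spot["id"])                   # spot complete, advance
    return treated, Outcome.COMPLETE

# Hypothetical two-spot plan (doses in arbitrary units)
plan = [{"id": 1, "prescribed": 1.00, "delivered": 1.004, "settled": True},
        {"id": 2, "prescribed": 0.80, "delivered": 0.798, "settled": True}]
done, final = deliver_plan(plan)
print(done, final)  # [1, 2] Outcome.COMPLETE
```

In the real system this sequencing runs on hard-wired FPGA signals between the three sbRIO modules so that any failed check removes the beam within the stated response times.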

  • A Mobile Platform for Road Inspections Using LabVIEW

*As Featured on NI.com

Original Authors: Willians R. Mertz Villa, HOB Consultores SA. Edited by Cyth Systems

The Challenge

Improving the process of surveying information to assess the degree of deterioration of road infrastructure. The survey is currently performed manually, which disrupts daytime traffic, carries a high risk of accidents, and yields low output (10 km per work crew per day); the information must also be recorded in formats that are not subject to manipulation and can be verified with consistent results.

The Solution

Developing a mobile platform with a continuously geo-referenced, real-time video system specifically designed to collect virtual images of the condition of deterioration and maintenance of the pavement and engineering structures (at speeds up to 80 km/h) efficiently and securely.

Surveying Road Conditions Using the ROAS Equipment

Road infrastructure helps local markets develop and provides integration with spatial economic centers, generating positive effects that influence businesses' and households' production and consumption decisions. The lack of a route affects the standard of living and the productivity of the people in the area. Additionally, road deterioration increases operating costs, travel time, and investment. User satisfaction is reflected in the quality of the pavement. We must know when to intervene and how to measure deterioration, a process that today relies on manual survey methodologies, which makes results inconsistent.

Application Description

The road analyzer and survey (ROAS) vehicle is a mobile platform built around technological innovation for the management and maintenance of roads. The ROAS performs automatic measurements of geo-referenced data and provides survey information to assess service levels and surface conditions of the road using national and international standards such as ASTM D6433-11, Roads and Parking Lots Pavement Condition Index Surveys (Figure 1).
We developed a data acquisition and pre-processing system and two software modules to generate specialized reports.

Left: Road Inspection Platform Graphical User Interface. Right: Software Module for Inspection and Reporting With Images of the Pavement (MESP)

The Data Acquisition and Pre-Processing System

This module is composed of data acquisition hardware based on the NI PXI platform and a control module, which we developed using NI LabVIEW software and the NI LabVIEW Real-Time Module (Figure 2). The hardware synchronizes and acquires digital image data of the road and pavement, GPS, turning, and other device data. We synchronized this data with a distance measuring instrument (DMI) through an encoder sensor connected to the vehicle's axle wheels. Figure 3 shows the parts of the DAQ system mounted on the mobile platform.

Software Modules to Generate Specialized Reports

We generate reports based on the type of information collected by the ROAS system. We can perform many inspections and operations with the route images acquired through the route surface evaluation module (MESR), which we developed using LabVIEW (Figure 4), including:

Road safety
Asset inventory (traffic signs, traffic lights)
Current road conditions with simulations of tours through the track at different speeds
Measurements on the images, such as lane width and projection
GPS

We can perform in-cabinet visual inspections of the condition of paved roads, get a maintenance history, identify the type of failures and determine severity (crocodile cracking or longitudinal and transverse cracks), and generate reports. We can do all of this through the pavement surface evaluation module (MESP), which we developed using LabVIEW (Figure 5).
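The DMI synchronization described above, in which a wheel-axle encoder paces data capture by distance traveled rather than by time, can be sketched as follows. This is a hedged Python illustration; the pulses-per-meter resolution and capture interval are assumptions, not the ROAS system's actual figures.

```python
# Illustrative sketch of distance-based capture triggering: the DMI encoder
# emits a fixed number of pulses per meter of travel, and a frame is
# captured every fixed distance interval regardless of vehicle speed.
# Both constants below are assumed for illustration.

PULSES_PER_METER = 200      # encoder resolution (assumed)
CAPTURE_INTERVAL_M = 2.0    # one frame every 2 m of travel (assumed)

def capture_positions(pulse_count):
    """Given a raw encoder pulse count, return the distances (in meters)
    at which image captures would have been triggered."""
    total_m = pulse_count / PULSES_PER_METER
    n_frames = int(total_m // CAPTURE_INTERVAL_M)
    return [CAPTURE_INTERVAL_M * (i + 1) for i in range(n_frames)]
```

Pacing capture by distance is what keeps image spacing uniform whether the vehicle travels at 20 km/h or 80 km/h, and it is why the DMI's sub-0.1-percent error rate matters for geo-referencing.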
General Properties of the System

The system includes:

Data acquisition rates of up to 80 km/h
Panoramic digital images of up to 120° of route
Digital images of all the pavement in a lane (up to 4 m wide)
Virtual measurements on images
Virtual tours of tracks at different speeds with geo-positioning
Large storage capacity for internal and external information
Report generation and data exporting to different file formats such as Excel, Word, KML (Google Earth), JPG, and AVI
Artificial lighting using xenon strobe lights to capture images of the pavement
Geo-referenced data (GPS or DGPS)
DMI with an error rate of less than 0.1 percent

Conclusion

We developed an appropriate methodology for surveying. Our system is safe and provides reliable information and consistent results at a high production rate (200 km/day). It works on unpaved, paved, and urban roads during the day or at night. Users can adjust report generation to their own requirements. Data is verifiable, reproducible, and exportable to other platforms. The system includes GPS or DGPS information and can import images from Google Maps. It also improves the quality of the information fed into the pavement management system, supports better and timelier intervention decisions for the road, and can integrate other sensors and/or measurement equipment into the platform.

  • Human Cardiovascular Simulation Device with Circaflex, Single-Board RIO, and LabVIEW

Left: Califia Patient Simulator controlled by Circaflex, in use at a university surgical training lab. Right: Califia controls a pig's heart to emulate various clinical scenarios.

"We couldn't have found a better partner for this collaboration. Cyth took our concepts and developed a working prototype quickly and with remarkable accuracy. In addition, by using Circaflex they have made it possible to add sensors or actuators in a seemingly effortless way, facilitating expansion and evolution of our product." Richard Tallman, Ph.D., CEO, BioMed Simulation, Inc.

The Challenge

A designer and manufacturer of medical simulation equipment approached us about the need for a system to simulate the cardiovascular function of the human body.

The Solution

Using our Circaflex embedded control board and LabVIEW software, we created a life-like cardiovascular simulator that recreates a wide variety of clinical scenarios, including cardiopulmonary bypass (CPB).

The Story and The Cyth Process

BioMed Simulation, Inc., a designer and manufacturer of medical simulation equipment, approached us about a system that could simulate human cardiovascular activity. Their system, the Califia Patient Simulator, needed to recreate a wide variety of clinical scenarios, including cardiopulmonary bypass (CPB). Our engineering team recognized the benefits of a versatile system that could be reprogrammed to emulate any ailment or cardiovascular condition. To provide these benefits, we utilized Circaflex, our embedded control system.

To build the system, our team began with the Circaflex 540. Circaflex is a circuit board that interfaces the NI Single-Board RIO with all the sensor controls the Califia requires. The Circaflex and NI sbRIO monitor and control all functions of the Califia device, which is crucial to its ability to simulate various medical scenarios.

Left: Circaflex embedded control board in use in the Califia Patient Simulator.
Right: The Califia unit provides heart rate readouts to an operating room (OR) monitor.

The Califia Patient Simulator contains many features that place it at the cutting edge of medical simulation equipment. The first is its ability to realistically simulate blood flow to and from a heart. It does this with a 2-liter liquid tank controlled by an inlet valve and an outlet valve. A peristaltic pump driven by a stepper motor creates the pumping and draining actions required to accurately emulate the function of a human heart. All the motor controls are programmed using LabVIEW. The Circaflex's high-speed communication supports a custom closed feedback loop that constantly reports the status of system motors and sensors back to the control boards. The system can then make the smallest adjustments in real time to ensure the simulation's accuracy remains constant over time.

The Circaflex control board communicates with an array of devices in the Califia simulator, including:

Flow meters
Liquid level sensors
Pressure sensors
Peristaltic pumps
Flow control valves

These sensors, motors, actuators, and valves enable the Califia unit to simulate medical scenarios, including cardiopulmonary bypass (CPB). CPB is a technique in which a patient's heart and lungs are controlled and monitored during surgery to ensure proper blood circulation. In the past, medical staff and students could only gain such experience by shadowing a live operation. Thanks to Califia and the technology it brings forth, medical staff and students now have a way to work through real-world clinical scenarios themselves.

Another area in which the Califia Patient Simulator excels is its ability to provide a large variety of simulated clinical scenarios. It can accurately simulate scenarios from high blood pressure and blood clots to cardiac arrest. It achieves this through its high level of programmability.
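The closed feedback loop described above can be illustrated with a minimal proportional-control sketch. This is an assumption-laden Python illustration, not BioMed Simulation's actual control law (the real loop runs in LabVIEW on the Circaflex/sbRIO); the gain and speed limits are placeholder values.

```python
# Minimal sketch of one iteration of a closed flow-control loop: the
# measured flow from a flow meter is compared against the target flow for
# the selected clinical scenario, and the peristaltic pump speed is nudged
# proportionally. Gain and clamp limits are illustrative assumptions.

def update_pump_speed(speed, flow_measured, flow_target, gain=0.5,
                      min_speed=0.0, max_speed=100.0):
    """One control iteration: proportional correction toward target flow,
    clamped to the pump's allowed speed range."""
    error = flow_target - flow_measured
    new_speed = speed + gain * error
    return max(min_speed, min(max_speed, new_speed))
```

Running this update at a high loop rate is what lets the simulator hold a scenario's flow profile steady while continuously absorbing small disturbances, the behavior the article attributes to the Circaflex's high-speed communication.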
Our engineering team used LabVIEW software to define a set of parameters by which the Califia functions. The unit can then self-adjust from these base parameters to emulate whichever scenario the user requests. It does this with remarkable consistency and repeatability, which has earned it trust in university and medical settings alike.

Delivering the Outcome

Overall, the high-speed communication and control capabilities of the Circaflex and NI sbRIO control boards help provide the value Califia brings to the world of medical simulation. Through precise control of motors, sensors, and valves, and by enabling rapid deployment, the Circaflex platform helped Califia evolve from a proof of concept to a final design in a matter of weeks. Califia is currently supporting medical staff and students in surgical laboratory settings and is helping prepare future medical professionals for the real-world scenarios they will encounter.

Technical Specifications

1 x Circaflex 577
1 x NI Single-Board RIO 9603
1 x Peristaltic Pump
1 x Stepper Motor
1 x Air/Water Flow Sensors with 0 to 5 Vdc Output
1 x Flow Meter
1 x RTD Sensor
1 x Pressure Sensor

  • Automated Test System uses PXI to Validate Irrigation Control Panels

Left: Irrigation control panel test enclosure. Right: Two DUTs being loaded into the test enclosure.

The Challenge

A designer and manufacturer of irrigation systems approached us with the need for a system to test and validate the control panel and outdoor sensor of their residential sprinklers.

The Solution

Using hardware and software to create a full turnkey solution for automated testing, we helped improve the client's quality control process and the efficiency of their final product testing.

The Story//The Cyth Process

A global provider of irrigation products approached us with the need for a system to perform validation testing of their residential sprinkler system control panel and outdoor sensor. Their product was a two-part system that worked in tandem to automate the control of a home's sprinkler system. The outdoor sensor measured rainfall and temperature and relayed this information to an indoor control panel. The indoor control panel then used the transmitted information and the customer's scheduled presets to decide whether to irrigate the landscape. The two parts communicated over radio frequencies (RF), which increased the product's ease of use: the outdoor sensor was mounted on the roof and communicated wirelessly with the control panel located inside the home.

Product testing of the outdoor sensor and indoor control panel required validating the functionality of the two-part system. Our engineering team recognized this meant testing the control panel's physical buttons, the device's internal software, the segmented LCD screen, and the RF signal transmitter (outdoor sensor) and RF signal receiver (indoor control panel).

Figure 3. The NI PXI chassis and I/O cards contained in the test enclosure.

Our engineering team began by building a test fixture centered around a PXI hardware chassis, several I/O cards, and NI TestStand software that controlled and monitored all aspects of the device testing.
The wide range of PXI card slots provided the ability to acquire the wide range of high-speed I/O required. NI TestStand provided our team with the ability to program and automate the procedural sequence of the fixture's testing.

Control Panel Feature: Test Method

Button Membrane Switches: NI PXI-4110 with mechanical plungers
Device Power Consumption: NI PXI-6514 industrial digital I/O card with DBL voltage cable
Segmented LCD Screen: Camera with Ethernet communication
RF Signal Analyzer: NI PXI-5661 VSA (vector signal analyzer, inbound signal) using a digital signal attenuator
RF Signal Generator: NI PXI-5661 VSG (vector signal generator)

Test Fixture Procedure

Indoor Control Panel

1) The operator opens the automated test enclosure. 2) Two units are slotted into the provided nests. 3) The operator wires the units to the fixture and closes the enclosure cover. 4) The system automatically powers up the units, measures current, and validates RF data from the RF signal analyzer. 5) The system then presses button membranes 1 to 4 of the control panel and uses a camera to inspect the relayed output of each button on the LCD screen. 6) All the test data is read by TestStand and the PXI hardware and stored to the PC's memory. 7) The operator opens the enclosure cover, removes the tested units, and repeats.

Outdoor Sensor

1) The operator opens the automated enclosure. 2) Two units are slotted into the provided nests. 3) The operator wires the units to the fixture and closes the enclosure cover. 4) The system automatically powers up the units, measures current, and validates RF data from the RF signal generator. 5) The system downloads firmware to the outdoor rainfall sensor and validates the temperature data against the measured value. 6) All the test data is read by TestStand and the PXI hardware and stored to the PC's memory. 7) The operator opens the enclosure cover, removes the tested units, and repeats.

Figure 4.
Two irrigation control panel DUTs are electronically wired and awaiting test.

Delivering the Outcome

Overall, our engineering team delivered a full turnkey solution for the automated testing of our client's sprinkler system. Both the outdoor sensor and the indoor control panel were tested on the same automated test fixture. We achieved this by integrating NI TestStand software, NI PXI hardware, and an industrial PC control system to execute defined testing sequences and monitor the responses in real time. The flexible I/O modules enabled us to control several different devices, from mechanical plungers to an Ethernet camera, to test the various features of the sprinkler system (buttons, software, the segmented LCD screen, and the RF communication). We provided the client with a system that accelerated their quality control process through the in-depth testing of 300+ units a day. The robustness of the test fixture ensures consistent product validation and optimal function of our client's sprinkler systems in preparation for their use in homes nationwide.

Technical Specifications

1 x Optical Sensor (Photodetector)
1 x USB to Serial TTL Converter Cable
1 x Ethernet Camera
Mechanical Plungers
1 x Airflow Control Valve w/ Exhaust Muffler
1 x Solenoid Control Valve
1 x Compact Extruded Aluminum Air Cylinder
1 x Magnetically Actuated Switch
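The sequenced test pattern described in this article, running one named step at a time and logging every result, can be sketched as follows. In the actual fixture, NI TestStand orchestrates the steps and the PXI hardware performs the measurements; the step names and pass criteria in this Python sketch are hypothetical.

```python
# Hypothetical sketch of a sequenced DUT test: each named step is a
# callable that receives the DUT and returns pass/fail. Any exception
# (e.g., a hardware fault) fails that step. Step names and criteria
# are illustrative, not the fixture's actual sequence.

def run_test_sequence(dut, steps):
    """Run each (name, step) pair in order, collecting pass/fail results
    plus an overall verdict."""
    results = {}
    for name, step in steps:
        try:
            results[name] = bool(step(dut))
        except Exception:
            results[name] = False   # treat faults as step failures
    results["overall"] = all(results.values())
    return results

# Example usage with invented steps mirroring the described sequence:
example_steps = [
    ("power_on", lambda d: True),
    ("current_draw", lambda d: d["current_ma"] < 50),  # threshold assumed
]
```

Structuring the sequence as data (an ordered list of named steps) mirrors what makes TestStand-style automation flexible: the same harness tests the indoor panel and the outdoor sensor by swapping in a different step list.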

  • Custom RFID Tracking System for Biotech Reagent Bottles using LabVIEW

The Challenge

A medical diagnostics research and manufacturing company approached us with the need for a system to scan radio frequency identification (RFID) tags on their products and efficiently log their product inventory.

The Solution

Using a programmatically controlled conveyor, LabVIEW software, and RFID sensors, we built the customer a turnkey solution that automated the scanning and data collection of their product inventory.

The Story//The Cyth Process

Radio frequency identification (RFID) uses radio frequencies to communicate with and read signals emitted from a microchip. RFID tags automate data collection and reduce human-operator error because, unlike a barcode, they don't require a line of sight to be scanned. A product containing an RFID tag will be scanned no matter its orientation, as long as the tag passes within the scanner's read field.

Our team purchased a Dorner 5 ft. configurable-length conveyor that best matched our customer's needs, featuring a Lenze AC Tech variable frequency drive (VFD) that allowed the conveyor's speed to be programmatically controlled. Rails and brackets were added to the conveyor to position the customer's product and help ensure optimal scanning. Sick RFH6xx RFID antennas with a 9.5-inch read field were integrated into the conveyor. LabVIEW was used to program the RFID antennas to log scanned tag ID signals into a CSV file, which was then uploaded directly into the customer's inventory database via Ethernet.

Left: Three RFID scanner systems under manufacture at Cyth. Right: The Sick RFH630 RFID antennas featured in the system.

System Order of Operations: The operator begins by turning on the conveyor using the menu option on the LabVIEW user interface. Once the operational light is green, the operator places the customer product on the conveyor. A guard rail positions the product along the conveyor so that all units pass within the read field of the RFID antennas.
As the product passes the RFID antennas, the product tag IDs are acquired, read, and written into a CSV file, which is then uploaded directly into the customer's inventory database via Ethernet. Products detected to have passed on the conveyor without a successful scan due to faulty RFID tags are flagged by a proximity sensor. After the desired scans are complete, the operator turns the system off using the LabVIEW user interface.

Delivering the Outcome

Using multiple Sick RFH630 RFID antennas, a programmable conveyor, and LabVIEW software architectures, our engineering team built the customer a turnkey solution that automated the scanning and data collection of their product inventory. This has increased throughput and minimized the potential for human error in our customer's inventory scanning and documentation process. The customer ordered multiple units, which we manufactured and delivered through factory acceptance testing at their location. The project hardware and software were completed within 10 weeks, meeting the client's budget and timeline requirements.

Technical Specifications

3 x Dorner 2200 Configurable Conveyor
8 x Sick RFH630 with Antenna and Antenna Cabling
8 x Sick Proximity Sensors
3 x Asus Touchscreen Monitors
4 x Industrial PC and Mount
4 x Traffic Light
4 x Handheld RFID Scanner
4 x E-stops
4 x Keyboards and Mounting
3 x Slotted Wire Mount
3 x Fiberglass Mounting Enclosure
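The scan-and-log flow described in this article can be sketched as follows. This is an illustrative Python sketch, not the LabVIEW implementation; the CSV field names and the missed-scan rule (proximity-sensor count versus tag-read count) are assumptions about the described behavior.

```python
# Hedged sketch of the scan-and-log flow: each tag ID read by an antenna
# is appended as a timestamped CSV row, and products that passed the
# proximity sensor without a matching tag read are counted as missed
# scans (to be flagged as faulty tags). Field names are assumptions.

import csv
import io
from datetime import datetime, timezone

def log_scans(tag_reads, products_passed):
    """Write scanned tag IDs to CSV text and count missed scans.
    Returns (csv_text, missed_count)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["timestamp_utc", "tag_id"])
    for tag_id in tag_reads:
        writer.writerow([datetime.now(timezone.utc).isoformat(), tag_id])
    # Any product the proximity sensor counted but no antenna read is flagged.
    missed = max(0, products_passed - len(tag_reads))
    return buf.getvalue(), missed
```

In the deployed system the CSV output is uploaded to the customer's inventory database over Ethernet, so keeping the log format simple and append-only makes each conveyor pass independently auditable.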

bottom of page