
Search Results


  • EMERSON NI Authorized Training Partner | Cyth Systems, Inc.

    We provide classroom training courses designed to teach effective techniques for reducing development time while enhancing application performance and scalability.

  • NI Distribution - NI Software LabVIEW | Cyth Systems

    LabVIEW is a graphical programming environment engineers use to develop automated research, validation, and production test systems.

    What Is LabVIEW?
    LabVIEW is a graphical programming environment engineers use to develop automated research, validation, and production test systems.

    PRODUCT FEATURES
    Programming like you're drawing a flowchart. LabVIEW has what you need to build automated test systems, fast:
    - Thousands of available analysis functions
    - Configurable, interactive display elements
    - Drivers for automating every instrument and data acquisition hardware
    - Connectivity to other languages and industry-standard protocols

    LABVIEW APPLICATIONS
    What Can I Do with LabVIEW? For the past 35 years, LabVIEW has been engineers' tool of choice for developing automated test systems. From performing a simple voltage measurement to advancing space missions, discover how LabVIEW can advance your next project.

    LabVIEW Editions
    LabVIEW is available in three editions, which scale in features and capabilities to meet your application requirements, and is also part of the Test Workflow bundle. Consider the Test Workflow bundle for LabVIEW and more NI software. Licenses are sold as one-year subscriptions that include access to online training, degreed engineers for technical support, and software updates.

    LabVIEW Base
    Recommended for building simple test and measurement applications. Includes the standard capabilities of LabVIEW:
    - Acquire data from NI and third-party hardware and communicate using industry protocols
    - Create interactive UIs for test monitoring and control
    - Utilize standard math, probability, and statistical functions
    - Integrate code written in Python, C/C++, .NET, and MathWorks MATLAB® software
    - Save data to .csv, .tdms, or any custom-defined binary file

    Test Workflow Standard
    Recommended for applications that require hardware automation, data analysis, automated reporting, and remote access to test. Includes:
    - LabVIEW Full, with tools for advanced analysis and signal processing
    - G Web Development Software, for building web applications for test
    - DIAdem Advanced, for measurement data search, visualization, analysis, and creating automated reports
    - FlexLogger, for performing data acquisition with NI hardware without needing to do any coding

    Test Workflow Pro
    Recommended for building simple test and measurement applications. Includes the standard capabilities of LabVIEW:
    - Acquire data from NI and third-party hardware and communicate using industry protocols
    - Create interactive UIs for test monitoring and control
    - Utilize standard math, probability, and statistical functions
    - Integrate code written in Python, C/C++, .NET, and MathWorks MATLAB® software
    - Save data to .csv, .tdms, or any custom-defined binary file

    Advantages of Standardizing on LabVIEW
    LabVIEW is the key to accelerating test. After selecting NI, with LabVIEW as the foundation for test, L3 has increased its engineering performance with improvements in development time, downtime, and code reuse:
    - 9X improvement in development times
    - 50% reduction of beta test downtime
    - 80% code reuse across similar products

    What's New in LabVIEW?
    LabVIEW is the tool the test industry has relied on for decades. However, LabVIEW's innate benefits, combined with renewed investment, make the next ten years, not the last, the most exciting for users. Time-to-market, product complexity, and resource pressure force development productivity to be prioritized, giving LabVIEW users a significant advantage.

  • NI Authorized Distributor & Integrator | Cyth Systems, Inc.

    Cyth Systems supports and stocks National Instruments products and platforms. We work with engineers and buyers to design systems and fulfill and verify orders.

    Get the NI products you need without missing a beat. Technical guidance and operational proficiency that your whole team can trust. The only NI Authorized Distributor with certified System Integration experience.
    - NI LabVIEW+: Do more with NI's suite of test, measurement, and analytics software
    - PXI chassis and modules: Choose from a variety of device options, including best-in-class PXI modular instrumentation
    - Supply chain proficiency: Shipping and receiving team member

    Browse Products
    Trouble getting the right test and automation products at the best price on time?
    - Where quality and price meet: Get quality products at optimal prices with flexibility for volume orders
    - Guided product selection: Get the information you need to select the right products and accessories for the job
    - Vendor approval in hours, not days: Approved vendor setup made simple with prepared company information and forms
    - Future-proof your system: Access up-to-date life cycle data to avoid costly redesigns
    - Personalized order tracking: Avoid the supply chain black box with updated lead time and shipping information
    - Startup assistance: Put your purchase to work with free startup assistance and engineering support

    Choose from a wide variety of NI devices and software tools to get your job done. Explore NI Hardware. Explore NI Software.

    Buyers rely on Cyth every day to fulfill their orders with reliability and care:
    - Provide order details in any format
    - Guaranteed list pricing (no markups)
    - No hidden handling fees
    - Simplified vendor approval process

    "Working with Marty was wonderful! He followed up promptly and eliminated my concerns about lead times." (M.L., Buyer, Sunnyvale, CA, Aircraft Manufacturer)

    Engineers trust Cyth as a technically competent vendor of NI products:
    - Technical guidance on product selection from trained engineers
    - Recommendations on accessories and system components
    - Free startup assistance
    - Ongoing technical support

    "Josh helped me find a better system configuration for my project... and provided reference code to get my software design going in the right direction." (P.B., Test Engineer, Industrial Electronics company)

    Request a Quote or Submit Your Order Here. Submit any format or file (PDF, Excel, text, or screenshot).

  • NI Distribution - Packaged Controllers | Cyth Systems

    Packaged controllers are rugged, stand-alone systems with software, processing, and I/O for measuring, control, and monitoring. Build your system with NI.

    NI Packaged Controllers Hardware Products | NI Authorized Distributor and System Integration Partner

    Packaged Controllers
    Packaged controllers are high-performance, rugged, stand-alone systems that combine customizable software with powerful processing and I/O for any measurement, control, or monitoring application.

    Packaged Controllers for Real-Time Systems
    The CompactRIO and Industrial Controller are packaged controllers designed to help you create real-time, stand-alone applications. Both products feature a user-programmable FPGA that you can program using the LabVIEW FPGA Module.

    Rugged and Industrial Options for Harsh Conditions
    Packaged controllers include options that are intended for use in hazardous environments. The Industrial Controller is particularly recommended for use in extreme conditions.

    PLATFORM MODULES AND CONTROLLERS
    Platform modules integrate with modular hardware platforms that allow you to combine different types of modules in a custom system that leverages shared platform features. NI offers three hardware platforms (CompactDAQ, CompactRIO, and PXI), though all platforms may not be represented in this category.
    - Expansion Module for CompactRIO: Expands the processor I/O from an NI controller to meet additional application requirements. Platform: CompactRIO. Bus: Ethernet, USB-C.
    - CompactDAQ Controller: Expands the processor I/O from an NI controller to meet additional application requirements. Platform: CompactRIO. Bus: Ethernet, USB-C.
    - CompactRIO Controller: Combines a processor running NI Linux Real-Time, a programmable FPGA, and modular I/O with vision, motion, and display capabilities. Platform: CompactRIO.

  • Sales Development Engineer Cyth Systems, Inc. San Diego CA

    Sales Development Engineer | November 1st, 2025 | Jobs | Cyth Systems, Inc.

    Sales Development Engineer
    San Diego, CA, USA

    Cyth Systems is an Engineering Integration Company specializing in Automated Test Equipment (ATE), Embedded Control Systems, and the distribution of National Instruments products for over 20 years. We enjoy working with exciting customers, both small and large, from nearly every industry, performing interesting and engaging engineering projects week after week. We combine a small business family feeling with a world-class engineering team, and customers appreciate both the personal and professional treatment we bring to their projects.

    Job Description
    Cyth Systems is seeking a driven and technically skilled Sales Development Engineer (SDE) who engages with customers through our lead management process and identifies new sales opportunities. Your effort and skills will enable our sales team to focus on high-value, revenue-generating activities by streamlining our sales pipeline and proactively working with customers to solve technical issues. This is an in-office position in San Diego, CA.

    Cyth Systems is an Engineering Integration Company specializing in Automated Test Equipment, Embedded Control Systems, and Machine Vision Systems. We enjoy working with our customers, both small and large, from nearly every market. We combine a world-class engineering team, and our customers appreciate both the personal and professional treatment we bring to their projects. You will grow your business acumen and strategic sales capabilities while consulting with engineers and their leadership to make major design and equipment investments. What is especially unique about this position is working in several industries such as Life Sciences, Semiconductor, Product Manufacturing, Sporting Goods, Aerospace, Energy, and more. If you enjoy a collaborative environment where you can be part of a team who builds things and brings them to life, across a diverse range of industries and applications, come join our team at Cyth Systems!

    Essential Functions
    It is essential to the SDE role that you possess a learning mindset rooted in humility, drive, and a high IQ and EQ. You must take initiative and ownership of your role and contribute as a team player. This is a very technical role demanding that you are tech-savvy and can track, prioritize, review, schedule, and execute on customer-based tasks. You must have excellent communication skills that allow you to make interpersonal connections with engineers. The role demands excellent organizational abilities as well as strong analytical and problem-solving skills to perform the following:
    - Work comfortably and independently in a constantly evolving environment.
    - Balance your time between inbound sales demand, marketing-generated leads, customer presentations, and prospecting activities to generate new demand.
    - Develop test and automation expertise while supporting customers and their technical challenges through our sales process.
    - Collaborate with the sales team to understand their needs and optimize lead handoff processes.
    - Manage and prioritize a high volume of inbound leads and conduct initial qualifications.
    - Respond to prospect and customer inquiries in a timely manner.
    - Execute targeted outbound campaigns against tiered prospect lists, creating high-intent opportunities and supporting overall pipeline growth.
    - Provide data-driven insights to inform sales strategies and resource allocation.
    - Possess empathy and listening skills. You must understand the customer's needs and pain points through active listening, empathize with customers' challenges, and effectively address their concerns.
    - Possess a proven understanding of electronics, computers, and software programs.
    - Continuously refine processes to improve efficiency and effectiveness.
    - Read and write in English and communicate with excellent grammar.

    Required Qualifications & Skills
    This position requires:
    • Bachelor of Science in Business or STEM
    • Strong desire to pursue a career in technical sales and marketing
    • Previous sales or customer-facing experience
    • Experience with a CRM (Salesforce or Zoho)
    • Strong written and verbal communication skills

    Preferred Qualifications
    • Strong knowledge of test automation and embedded control hardware
    • Experience with National Instruments hardware and software products

    Submit your resume today.

  • Cyth Systems Incorporated Privacy Statement | Cyth Systems

    The Cyth Systems Incorporated Privacy Policy describes how we collect, use, share, and secure the personal information you provide on our website.

  • NI Distribution - NI Software Test Workflow | Cyth Systems

    Test Workflow is a bundle of NI software featuring engineering-specific tools that help test professionals accomplish anything from their day-to-day work to overcoming their most challenging obstacles.

    NI Test Workflow | NI Authorized Distributor and System Integration Partner

    What Is Test Workflow?
    Test Workflow is a bundle of select NI software featuring engineering-specific tools that help test professionals accomplish anything from their day-to-day work to overcoming their most challenging obstacles.

    Why Choose Test Workflow?
    You need to work efficiently to meet project timelines. It's not the best use of your skills to program new tools or waste time with inefficient applications that weren't designed for engineers. In Test Workflow, you get NI's high-performance software, ensuring you can maintain quality and deliver to schedule. This was designed for you.

    What Can You Do with Test Workflow?
    - Quick Measurement and Analysis: Quickly set up your test with little to no code development. View data, analyze, and create shareable reports to communicate results to your team.
    - Build Web-Connected Test Systems: Automate repetitive tests using tools for instrument control, communication, data acquisition, and logic. Connect your system to the web and monitor test status from anywhere in the world.
    - Test Anything, Measure Everything: Connect to any NI or third-party instrument. Measure temperature, strain, sound and vibration, RF signals, and more. Analyze all your data.
    - Develop Production Test Stations: Integrate code developed in any modern programming language into a sequencer for a functional test system. When scaling to production, optimize throughput with native parallel testing.
    - Research New & Emerging Technologies: Test new technologies and evaluate design concepts with data-focused tools that let you interactively query and analyze results from different test runs.

    What's Included in Test Workflow
    - LabVIEW: a graphical programming environment engineers use to develop automated research, validation, and production test systems.
    - TestStand*: test management software that helps you develop, debug, and deploy test systems and provides full visibility into the testing process and results.
    - DIAdem: data management software for measurement data aggregation, inspection, analysis, and reporting.
    - FlexLogger: application software for quick sensor configuration and data logging of mixed signals without programming.
    - G Web Development Software: helps you create web-based applications for test and measurement applications without the need for web development skills.
    - SystemLink Cloud: an NI-hosted service in a secure, scalable cloud-computing environment that teams can use to access and share data from WebVIs developed in G Web.
    - InstrumentStudio: application software that provides an integrated approach to interactive PXI measurements.
    - LabVIEW Advanced Signal Processing Toolkit*: provides interactive tools you can use to perform time-frequency, time series, and wavelet analysis.
    - LabVIEW Digital Filter Design Toolkit*: provides interactive tools for the design, analysis, and implementation of digital filters.

    Select Your Test Workflow Edition

    Test Workflow Standard
    Recommended for applications that require hardware automation, data analysis, automated reporting, and remote access to test. Includes:
    - LabVIEW Full, with tools for advanced analysis and signal processing
    - G Web Development Software, for building web applications for test
    - DIAdem Advanced, for measurement data search, visualization, analysis, and creating automated reports
    - FlexLogger, for performing data acquisition with NI hardware without needing to do any coding

    Test Workflow Pro
    Recommended for building simple test and measurement applications. Includes the standard capabilities of LabVIEW:
    - Acquire data from NI and third-party hardware and communicate using industry protocols
    - Create interactive UIs for test monitoring and control
    - Utilize standard math, probability, and statistical functions
    - Integrate code written in Python, C/C++, .NET, and MathWorks MATLAB® software
    - Save data to .csv, .tdms, or any custom-defined binary file

  • Measuring Sound Key Fundamentals Guide | Cyth Systems

    Cyth Systems | Whitepapers | Sensor Fundamentals

    Measuring Sound Key Fundamentals Guide
    This guide covers the fundamentals of sound pressure and microphones, and helps you understand how different sensor specifications impact microphone performance in your application. After you decide on a sensor, consider the hardware and software required to properly condition, acquire, and visualize microphone measurements. You can also consider any hardware packages you may need.

    What Is Sound Pressure?
    Pressure variations, whether in air or another medium, are considered sound. The human eardrum transfers pressure oscillations, or sound, into electrical signals that our brains interpret as music, speech, and noise. Microphones are designed to recreate this same process. Using a microphone, it is possible to record and analyze signals to gather information about the nature of the path a sound took from its source. For example, in noise, vibration, and harshness testing, engineers are often interested in reducing undesired sounds, such as the background noise a passenger experiences while driving in a car. These include sounds above or below the frequencies the human ear can detect, or amplitudes at specific resonant frequencies. These measurements are important to designers who need to reduce noise to meet emissions standards or to characterize a device for performance.

    Sound pressure is the most common measurement performed because of the human ability to detect it. Measured in pascals (Pa), the sound pressure level represents how a receiver perceives sound. You can also determine the sound power of a source. Measured in watts (W), the sound power level represents the total acoustic energy that is radiated in all directions. It is independent of the environment, including the room, receivers, or distance from the source. Power is a property of the source, whereas sound pressure depends on the environment: reflecting surfaces, the distance of the receiver, ambient sounds, and so on.

    Measuring Sound with Microphones
    The most common instrumentation microphones are externally polarized condenser microphones, prepolarized electret condenser microphones, and piezoelectric microphones.

    Figure 1. A microphone is a transducer that converts acoustical waves into electrical signals.

    Condenser Microphones
    A condenser microphone operates on a capacitive design. It incorporates a stretched metal diaphragm that forms one plate of a capacitor. A metal disk placed close to the diaphragm acts as a backplate. When a sound field excites the diaphragm, the capacitance between the two plates varies according to the variation in the sound pressure. A stable DC voltage is applied to the plates through a high resistance to keep electrical charges on the plate. The change in the capacitance generates an AC output proportional to the sound pressure. The charge of this capacitor is generated either by an external polarizing voltage or by the properties of the material itself, as in the case of prepolarized microphones. Externally polarized microphones need 200 V from an external power supply. Prepolarized microphones are powered by IEPE preamplifiers that require a constant current source.

    Choosing the Right Microphone
    When choosing the optimal microphone, consider the type of response field, dynamic range, frequency response, polarization type, required sensitivity, and temperature range. There are also a variety of specialty microphones for specific applications. To select and specify a microphone, the first criterion to look at is the application and what the sound and environment represent.

    Consider the Microphone Response Field
    You must choose the microphone that is best for the type of field in which you will operate it. The three types of measurement microphones are free field, pressure field, and random incidence. These microphones operate similarly at lower frequencies but differently at higher frequencies.

    The most common microphone is a free-field microphone. It measures the sound pressure from a single source directly at the microphone diaphragm, as it existed before the microphone was introduced into the sound field. Echo-free chambers or larger open areas are ideal for free-field microphones.

    Figure 4. Pressure-Field Microphone

    In many situations, the sound is not traveling from a single source. Random-incidence or diffuse-field microphones respond uniformly to sounds arriving simultaneously from all angles. They are applicable when taking sound measurements of buildings or areas with hard, reflective walls. However, for most microphones, the pressure and random-incidence responses are similar, so pressure-field microphones are often used for random-incidence measurements.

    Figure 5. Random-Incidence Microphone

    Select the Right Dynamic Range
    The main criterion for describing sound is based on the amplitude of the sound pressure fluctuations. The lowest amplitude that a healthy human ear can detect is 20 millionths of a pascal (20 μPa). Since the pressure numbers represented by pascals are generally low and not easily managed, another more commonly used scale, the decibel (dB) scale, was developed. This scale better matches the reactions of the human ear to pressure fluctuations.

    Manufacturers specify the maximum decibel level based on the design and physical characteristics of the microphone. The specified maximum dB level refers to the point where the diaphragm approaches the backplate, or where total harmonic distortion (THD) reaches a specified amount, typically 3 percent THD. The maximum decibel level that a microphone generates in a particular application depends on the voltage supplied and that microphone's sensitivity.
    Before you can calculate the maximum output for a microphone using a specific preamplifier and its corresponding peak voltage, you first need to calculate the pressure in pascals that the microphone can accept, where P is the pressure in pascals (Pa) and the voltage is the preamplifier output peak voltage. After determining the maximum pressure level that the microphone can sense at its peak voltage, you can convert this amount to decibels (dB) on a logarithmic scale, where P is the pressure in pascals and Po is the reference pressure (a constant of 0.00002 Pa); both formulas are sketched at the end of this section. This conversion provides the maximum rating that a microphone, when combined with a preamplifier, is capable of measuring.

    For the low-end noise level, or minimum amount of pressure required, you need to review the cartridge thermal noise (CTN) rating of the microphone. The CTN specification provides the lowest measurable sound pressure level that can be detected above the electrical noise inherent within the microphone. Figure 6 shows the noise level at different frequencies for a microphone when used in conjunction with a preamplifier.

    Figure 6. The inherent noise level is greatest at the upper and lower capabilities of the microphone.

    When selecting a microphone, you must confirm that the pressure levels you are testing fall between the microphone's CTN and its maximum-rated decibel level. In general, the smaller the microphone diameter, the higher the maximum decibel level. Larger-diameter microphones typically have a lower CTN, so they are recommended for low-range decibel measurements.

    Review Specifications to Evaluate Frequency Response
    After you consider the type of microphone field response and dynamic range you need, review the microphone's specification sheet to find the usable frequency range (Hz). Smaller-diameter microphones usually have a higher upper frequency capability. Conversely, larger-diameter microphones are more sensitive and better suited to detecting lower frequencies. Manufacturers place a typical tolerance of ±2 dB on the frequency specifications. When comparing microphones, check both the frequency range and the tolerance associated with that frequency range. If an application is not critical, you can improve the usable frequency range if you are willing to increase your allowable decibel tolerance. You can check with the manufacturer or look at the calibration sheet for a particular microphone to determine the actual usable frequency range for specific decibel tolerances.

    Decide on Polarization Type
    Traditional externally polarized and modern prepolarized microphones work well for most applications, but they do have some differences. Externally polarized microphones are recommended for high temperatures (120 °C to 150 °C) because the sensitivity level is more consistent in this range. Prepolarized microphones tend to be more consistent in humid conditions; sudden changes in temperature that result in condensation on internal components may short out externally polarized microphones. Because externally polarized microphones require a separate 200 V power source, you are limited to 7-conductor cabling with LEMO connectors in this setup. The newer prepolarized microphones have become more popular because they are powered by an easy-to-use, 2–20 mA constant current supply. With this design, you can use standard coaxial cables with BNC or 10-32 coaxial connectors for both current supply and signal to the readout device.
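    Written out, the two dynamic-range calculations described above take the following form. This is a reconstruction assuming the usual sensitivity-based relationship, with S the microphone sensitivity in V/Pa from its data sheet and V_peak the preamplifier output peak voltage:

    $$P_{max} = \frac{V_{peak}}{S}$$

    $$L_{max} = 20\,\log_{10}\!\left(\frac{P_{max}}{P_0}\right)\ \text{dB}, \qquad P_0 = 0.00002\ \text{Pa} = 20\ \mu\text{Pa}$$

    For example, a 50 mV/Pa microphone driven to a 5 V preamplifier peak corresponds to roughly 100 Pa, or about 134 dB.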
    Know Your Temperature Range
    Microphone sensitivity decreases as the temperature approaches the maximum specifications of the microphone. You must be aware of not only the operating temperature but also the storage temperature of the microphone. Operating or storing a microphone in extreme conditions can adversely affect it and increase its calibration needs. In many cases, the required preamplifier can be the limiting factor for operating temperature range. Although most microphones can operate to 120 °C without any loss of sensitivity, the preamplifiers required for these microphones typically operate in the 60 °C to 80 °C range.

    Use a Specialty Microphone for Particular Applications
    When temperature is a concern, a probe microphone offers an alternate solution. The probe microphone was designed for sound pressure measurements in harsh environments. It combines a microphone with a probe extension tube, which enables the user to get very close to sound sources. The probe tip sends the acoustic signal to the microphone inside the probe housing. By placing the critical components in a separate housing, this microphone type can be used in extremely high temperature applications, or where access to the sound source is too small for a typical condenser microphone.

    Applications that require a microphone to be fully submersible present their own challenges. Hydrophones were designed to detect underwater sound pressure signals. Industrial and scientific underwater testing, monitoring, and measurements are accomplished with this corrosion-resistant design. Different models are available for different sensitivities, frequencies, decibel levels, and operating depths.

    Sound level meters are designed to provide a fast and convenient way to obtain a sound pressure level reading, and contain all the components necessary to take one. This small handheld unit includes the microphone, preamplifier, power source, software, and display. It is an excellent choice for taking a dB measurement in an industrial setting, for community noise assessment, noise exposure measurements, artillery fire measurements, and many other applications. A sound level meter can be provided with a number of options, including A-weighting, real-time analyzers, and software options.

    When measurements involving the magnitude and direction of the sound need to be captured, an intensity probe is an excellent choice. By taking two phase-matched microphones and placing a spacer between them, a user can determine not only the pressure level but also the speed and direction of the propagating sound waves. Different sized spacers are available for measuring the particle velocity at different frequencies. Higher frequencies typically require a smaller spacer; larger spacers are suitable for lower frequencies and for situations where reverberation is present.

    For Near-Field Acoustic Holography (NAH) applications where three-dimensional field values are required, an array microphone setup is recommended. By taking a number of array microphones, spacing them out in a preset pattern, and combining them with the appropriate software, the complex sound pressure field can be spatially transformed to effectively map the acoustic energy flow. Array microphones are an excellent choice for large channel count acoustic testing. Transducer Electronic Data Sheets (TEDS) are a recommended option for arrays, since they enable the user to quickly and easily identify a particular microphone.
    These TEDS chips and supporting software enable the user to store the microphone's model, serial number, and calibration date, along with specifications such as sensitivity, capacitance, and impedance, which can be downloaded to help ensure accurate test results.

    Outdoor microphones have been developed to withstand rigorous environmental exposure. Airports and highways have become increasingly popular spots for noise testing and measurement, to help protect the people around them. Environmental and outdoor microphones provide different levels of protection for the internal components while maintaining their high-accuracy specifications.

    Signal Conditioning
    When preparing a microphone to be measured by a DAQ device, consider the following to make sure all your signal conditioning requirements are met:
    - Amplification to increase measurement resolution and improve signal-to-noise ratio
    - Current excitation to power the preamplifiers in IEPE sensors
    - AC coupling to remove DC offset, increase resolution, and take advantage of the full range of the input device
    - Filtering to remove external, high-frequency noise
    - Proper grounding to eliminate noise from current flow between different ground potentials
    - Dynamic range to measure the full amplitude range of the microphone
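    To make the dynamic-range arithmetic from this guide concrete, here is a small Python sketch of the same calculation. The sensitivity and peak-voltage values are illustrative examples only, not recommendations:

    ```python
    import math

    P0 = 20e-6  # reference pressure: 20 micropascals, the nominal threshold of hearing

    def max_spl_db(preamp_peak_volts, sensitivity_v_per_pa):
        """Highest sound pressure level a mic/preamp pair can pass before clipping."""
        p_max = preamp_peak_volts / sensitivity_v_per_pa   # maximum pressure in pascals
        return 20.0 * math.log10(p_max / P0)               # convert pascals to dB SPL

    # Example: a 50 mV/Pa microphone on a preamplifier with a 5 V peak output.
    print(round(max_spl_db(5.0, 0.05), 1))   # -> 134.0 dB SPL
    ```

    A real selection check would also compare the test levels against the microphone's CTN rating at the low end, as described above.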

  • FPGA Programming Fundamentals for Every LabVIEW Developer

    Cyth Systems | Whitepapers | FPGA Clocking, Datatypes, Pipelining, Synchronization, and more

    FPGA Programming Fundamentals for Every LabVIEW Developer

    Section 1: Why Use an FPGA?
    When developing on an FPGA, you are doing more than providing instructions to be executed on a pre-defined chip; you are actually customizing the FPGA chip itself. The benefit of this is that the application can achieve very high loop rates, parallelism, and responsiveness to I/O and logic states with minimal software overhead. Depending on your experience, this may be review, so you may want to skip down to some of the other sections covering pipelining, parallel loop operations, or host synchronization. But if you want the full story, read on.

    FPGAs, or Field Programmable Gate Arrays, are digital circuits that can be modified time and again for different applications or updates to existing applications. For test and measurement applications, FPGAs can be programmed to perform tasks more traditionally handled by a processor, anywhere from a simple microcontroller through a multicore CPU. While not all tasks make sense to execute on an FPGA, applications requiring inline signal processing for minimal latency, high hardware-timed reliability, and/or high-speed, deterministic control are good candidates for FPGA integration.

    System diagram showing generic timing capabilities for FPGA and CPU control loops

    Below are some of the most common use cases for FPGAs in control, monitoring, and test applications:
    - Reduce data processing time → sub-µs loop rates
    - Customizable algorithms and filtering → hardware-timed execution
    - Lower-level memory control → DMA streaming with low overhead
    - Custom protocol support and decoding → digital communications and integration
    - Complex and deterministic triggers → customized responsiveness

    Now, if you know you need any of the system or performance benefits listed above, an alternative to using an FPGA is to use an ASIC. An ASIC, or Application-Specific Integrated Circuit, is a chip that has a predefined set of functionality that can be programmatically accessed through an API. Depending on the application requirements and planned deployment volume, selecting or building an ASIC for some set of functionality may be the right design decision, but the core limitation is that once it is built, the underlying hardware cannot be modified. From the moment the ASIC is fabricated, you take it as it is.

    And here is the beauty of FPGA-based development. You can not only modify the application software running on top of the FPGA, but also modify the hardware circuit implemented on the FPGA itself. This gives development teams massive flexibility to adapt to changing application requirements, bugs, and new features over time.

    Section 2: FPGA Programming Basics
    HDLs, or Hardware Description Languages, are used to program an FPGA chip. To oversimplify what programming a circuit means: an HDL program is compiled and pushed to the FPGA target, which takes that compiled program and configures the logic gates on the chip. The result is a temporarily static instantiation of a "personality" on the FPGA that incorporates:

    FPGA fabric overview highlighting core blocks available across platforms.

    - Memory blocks – data storage in user-defined RAM
    - Logic blocks – logic, arithmetic, DSP algorithms, etc.
    - I/O blocks – connections between external circuits (e.g., sensors, processors, other FPGAs) and logic blocks
    - Interconnections – connections between multiple I/O blocks, common in any integrated hardware application

    If the developer wants to tweak some block or repurpose the FPGA entirely, they must modify the HDL source code, re-compile, and re-push the compiled code to the FPGA. Pretty neat, right? Yes, very neat, though there are some caveats:

    When compared to an ASIC, FPGAs are typically more power hungry, less performant, and higher cost for post-design deployment, though the application design cycles are typically far shorter, thereby lowering total engineering cost. In terms of abstraction, HDLs resemble assembly languages, which are quite low in the compute hierarchy. To put it another way, they are not easy to program. While there are some early AI HDL copilot tools out there, they seem to be far from maturity in truly expediting the digital design process. While there are numerous HDLs available, the two most common are VHDL and Verilog. While there are some savants out there, if you haven't been using these languages for some time or don't have a library of existing IP at your disposal, the learning curve on these languages is both steep and long. Don't forget your oxygen tank.

    This is where LabVIEW FPGA comes in. It provides programming access points at a higher level in the abstraction hierarchy, targeting developers who see the benefit of using an FPGA but don't have expertise in HDL development and validation. While LabVIEW is a full-featured graphical programming language with full IDE support, the FPGA Module extends some of its core tenets to FPGA application development, making high-performance, low-latency FPGA-based systems more accessible to a wider swath of engineering teams. Again, if you're an experienced HDL programmer, more power to you. The remainder of this article provides additional details on how FPGA-based application development can be simplified in LabVIEW, along with some tips to use along the way.

    Want to see some application examples where LabVIEW FPGA shines? Explore Case Studies

    Section 3: FPGA Clocking
    If you're developing on an FPGA, you almost certainly have some critical design considerations around closed-loop control and timing. While FPGAs typically run at lower rates than CPUs and GPUs, the level of control and parallelism they provide can enable very complex orchestration of processes. LabVIEW FPGA provides a number of tools to control timing in your application, but here we'll cover Single-Cycle Timed Loops and derived (divided) clocks.

    The Single-Cycle Timed Loop is a While Loop intended to execute all of the functionality inside it within one clock cycle of the FPGA. In this code example, all of the functionality added to the loop would need to execute within 5 ns (a 200 MHz loop interval). If you try to compile and all of the functionality cannot be executed on the FPGA within that interval, LabVIEW will throw a timing violation, suggesting that you optimize or remove functionality, or lower the clock rate.

    A Single-Cycle Timed Loop executing at 200 MHz

    Derived clocks enable developers to easily create loops of different clock domains in the same application. This gives the developer flexibility in which functionality gets executed at which rate, empowering them to parallelize processes, reserve FPGA resources for higher-speed tasks, and avoid timing violations.
    The clock configuration utility in LabVIEW FPGA allows for any combination of integer multipliers and divisors.

    FPGA Derived Clock configuration utility showing a 35 MHz clock derived from the 200 MHz parent clock

    Section 4: Numeric Data Types
    In LabVIEW FPGA applications, there are three main numeric data types: integers, fixed point, and single-precision floating point. It is non-trivial to decide which data type to use for different scenarios, so here's a simplifying outline of the choices.

    Integers – Our understanding of integers dates back to grade school and then was bolstered when we first learned to program (ANSI C for me). You can use integers when there is no need for precision beyond the decimal, but the story goes deeper than that. Integers can be a good choice for numeric representation if you have the following requirements:
    - Bit manipulations, such as masking, shifting, or inverting.
    - Packing of multiple 8- or 16-bit integers into 32- or 64-bit words. This can be helpful with data sharing as it minimizes the overhead associated with each numeric read/write.

    Choosing between calibrated fixed-point or uncalibrated integer I/O node outputs

    Fixed point – As opposed to floating-point numbers, which have a relative precision (the decimal placement can "float"), fixed-point numbers have an absolute precision (the decimal point is set).
    - Resource-efficient arithmetic
    - You're using High Throughput Math functions*
    - Default datatype with C Series analog I/O
    - Watch out for data saturation and LSB (least significant bit) underflow errors (a small numeric sketch of these effects appears at the end of this article).

    Single-precision floating point – This numeric datatype in LabVIEW FPGA provides 24-bit precision with a variable position of the decimal. Naturally, arithmetic operations on floating-point numbers are more resource intensive than integers or fixed-point numbers, though there are a couple of interesting use cases:
    - High dynamic range data paths. Many analog I/O channels have multiple ranges, where the best precision and accuracy is provided by the range whose max value is as close to the measurement or setpoint as possible. Oftentimes changing ranges means changing digits of precision, implying your FPGA design needs to be flexible across that set of ranges.
    - Prototyping algorithms and designs quickly without losing precision, worrying about resource optimization later.

    Oh, and don't forget to watch out for data type tradeoffs. Extra resources for arithmetic can scale quickly when performing operations on large arrays and other data structures. Sometimes precision is worth the extra FPGA resources and sometimes it is not. Only you as the system developer can determine that.

    Section 5: Pipelining
    Pipelining is an extremely powerful paradigm applicable across computing architectures. Pipelining is a process by which parallel execution of operations (or instructions at the chip level) can occur, thereby increasing throughput and operational clock frequency for a given amount of resource utilization. This means that, assuming there is FPGA fabric available, transforming a non-pipelined design to a pipelined design implies you can increase the clock speed for a given set of process operations. If you don't have a need to execute faster or increase throughput, pipelining may not be worth your trouble.

    Implementing a pipelined design is aided by feedback nodes in LabVIEW FPGA. Feedback nodes incorporate a data register under the hood such that data can be shared between loop iterations at each step along the process.
    Pipelined data flow using feedback nodes to share data between loop cycles. The result is a higher throughput algorithm.

    Section 6: High Throughput Math Functions
    These specialized functions in the LabVIEW FPGA API are implemented with pipelining under the hood, thereby saving significant development and debugging time compared to custom-designed functions. While they may not be as performant as alternatives composed in lower-level HDLs, they tend to work pretty well for out-of-the-box functionality.
    - The API includes trigonometric, exponential, logarithmic, and polar operations, in addition to (no pun intended) basic arithmetic operations.
    - These functions operate on fixed-point numbers.
    - Some of these functions have Throughput controls that strive to operate at the data throughput level you specify.

    High Throughput function palette in LabVIEW FPGA. These functions are implemented with pipelining under the hood, thereby helping increase out-of-the-box throughput.

    Section 7: Parallel Loop Operations
    As previously described, you may require multiple loops running on your FPGA, such as when different processes need to run at different loop rates to optimize FPGA resource utilization. There are a number of mechanisms available in LabVIEW FPGA for communicating between these various loops.

    Local variables, global variables, and register items – These mechanisms are used for communicating the latest data without buffering. Because of this, they are subject to pesky race conditions that arise when you have multiple writers and 1-N readers of that memory space. Also, because there is no buffering, they are generally not good for data streaming, which typically requires a lossless communication mode.
    - Local variables – access scope limited to a single VI
    - Global variables – access scope across multiple VIs
    - Register items – Because you can generate a reference to a register, you can re-use subVIs that access different registers (through the provided reference) given different calling conditions. This makes them more flexible in practice than local and global variables.

    FIFOs and handshake items – These "first in, first out" data structures are the bread and butter of lossless data communication because they have allocated memory to buffer data. This is useful when the producer of the data and the consumer of the data do not always run at the same rate. Because all memory blocks in the FPGA are allocated at compile time, it is possible for these data structures to run out of memory. FIFOs can have multiple writers and multiple readers accessing the same data buffer, whereas handshake items are single writer / single reader.

    Memory items – Block memory and lookup tables (LUTs) provide mechanisms for re-writeable data storage that can be accessed across your FPGA application. They are lossy and therefore a poor choice for data streaming, though widely flexible otherwise.

    We're starting to get into some rather non-trivial concepts here, with various caveats and implications, meaning care and caution must be exercised when choosing which data communication mechanism to use for different pieces of functionality across complex FPGA applications.

    Section 8: Host Synchronization
    Depending on the application, you may need to send data between an FPGA and an external processor, such as a Real-Time processor or a Windows host PC, for datalogging, further processing, or visibility in a UI. One tool to help with this data sharing is a Direct Memory Access (DMA) FIFO.
    These data structures provide an efficient mechanism for data streaming that minimizes the processor overhead for lossless data fetching.

    Direct Memory Access (DMA) FIFO in LabVIEW FPGA with a warning indicator for underflow conditions.

    In this code snippet, the FPGA is taking accelerometer data, converting that data to single precision, and writing it to a lossless DMA FIFO. What is not shown here is the corresponding FIFO Read function call that would be made asynchronously from code running external to the FPGA.

    Section 9: Compilation
    There exists a world of information and complexity on this topic, but here we'll highlight some core points to help you take your design implemented in LabVIEW FPGA and get it running on real hardware. While LabVIEW code intended to run on Windows does not necessarily require a compilation step, LabVIEW FPGA code always does. The output of a LabVIEW FPGA compilation is a bitfile, which can be referenced from calling programs and re-used across similar FPGA targets.

    When you select to build your code, LabVIEW presents you with a number of configuration and optimization options, which must be selected (or implicitly accepted in common click-through practice) before kicking off the compilation. From there, LabVIEW generates intermediate files, which are then passed to a Xilinx compiler (don't worry, you don't need to install these tools separately). Lastly, you have a few different options for where the code is actually compiled. You can do so locally on the development machine, on a networked server, or on an NI-provided cloud service. If you're just getting started with LabVIEW FPGA, compiling locally is probably the easiest option, but once you get up and running with the tool chain, the cloud compile service is pretty neat and offloads a ton of work from your machine.

    Compile options in the LabVIEW FPGA build specifications

    Section 10: Debugging
    Ever write perfect code the first time around? Me neither. Debugging is a fact of life, but you don't need me to tell you that. Because FPGA code can take a long time to compile, you don't necessarily want to go through that step every time you make a small tweak to an algorithm or want to test out a new subVI. Given this, LabVIEW FPGA offers a few different execution modes which can be very helpful in debugging. I've only ever used Simulation (Simulated I/O) when not going through a full compilation.

    FPGA execution mode options relevant for simulation and debugging.

    With that said, verifying full system functionality with I/O and memory simulated on the FPGA is generally a bad idea.
    - Host visibility: Use indicators and FIFOs to pass data from key areas up to the host to get quicker top-level visibility into different pieces of lower-level functionality in subVIs that are otherwise difficult to access. In final deployment applications, be sure to remove unnecessary indicators and data structures as they consume limited resources.
    - Performance benchmarking: Use sequence structures and tick counters to benchmark timing in critical sections of code. From experience, this is often an iterative process where you ratchet up loop rates until timing violations occur or you identify areas of the code that can be further optimized through pipelining or other tactics.
    - Xilinx Toolkit: Use the Xilinx ChipScope toolkit to probe, trigger, and view internal FPGA signals on FlexRIO targets.
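    As a companion to the host-side FIFO Read mentioned in Section 8, here is a minimal sketch using NI's nifpga Python package. The bitfile path, RIO resource name, FIFO name, and block size are placeholders for whatever your own LabVIEW FPGA project defines:

    ```python
    # Host-side sketch of the DMA FIFO read described in Section 8.
    # Bitfile, resource, and FIFO names below are placeholders, not real project names.
    from nifpga import Session

    BLOCK = 1000  # samples fetched per read

    with Session(bitfile="accel_streaming.lvbitx", resource="RIO0") as session:
        session.run()                          # start the FPGA VI compiled into the bitfile
        fifo = session.fifos["AccelDataFIFO"]  # target-to-host DMA FIFO defined on the FPGA
        fifo.start()
        for _ in range(100):                   # fetch 100 blocks, then stop
            read = fifo.read(BLOCK, timeout_ms=5000)    # blocks until BLOCK samples arrive
            average = sum(read.data) / len(read.data)   # stand-in for real logging/analysis
            print(f"block mean = {average:.3f}")
            if read.elements_remaining > 8 * BLOCK:     # host is falling behind the FPGA loop
                print(f"Backlog building: {read.elements_remaining} samples still buffered")
    ```

    The read call blocks until the requested number of elements is available, which is what makes the stream lossless as long as the host keeps up with the FPGA producer loop.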
    Conclusion
    Whether you're assessing FPGA technology fit, trying to choose a future-proofed platform, or developing a system, the concepts outlined in this article are intended to bolster your knowledge so you can make the best decisions possible for your application. While specifically focused on fundamentals applied through LabVIEW FPGA, the concepts discussed here span chipsets, programming languages, and application requirements:
    - Timing paradigms and clocking
    - Data types and structures
    - Memory and data transfer
    - Compilation and debugging

    If you're interested in learning more about design patterns associated with common processes, such as analog data streaming, custom triggers, and serial protocol decoding, you can review this article.

    Ready to enhance your FPGA development skills or want help selecting a technology? Get in touch with an FPGA expert
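    Finally, here is the small numeric sketch promised in Section 4: a plain-Python illustration of fixed-point quantization, saturation, and LSB underflow. The 16-bit word length and 5-bit integer length are arbitrary example values, not NI defaults:

    ```python
    # Illustrates the fixed-point behavior called out in Section 4: values are quantized
    # to a fixed LSB, saturate at the representable range, and underflow below one LSB.
    def to_fixed_point(value, word_length=16, integer_length=5, signed=True):
        frac_bits = word_length - integer_length
        lsb = 2.0 ** -frac_bits                              # smallest representable step
        lo = -(2.0 ** (integer_length - 1)) if signed else 0.0
        hi = 2.0 ** (integer_length - (1 if signed else 0)) - lsb
        quantized = round(value / lsb) * lsb                 # snap to the nearest LSB
        return min(max(quantized, lo), hi)                   # saturate at the range limits

    print(to_fixed_point(3.14159))   # ~3.1416: rounded to the 2**-11 LSB
    print(to_fixed_point(1000.0))    # saturates at the +15.9995 upper limit
    print(to_fixed_point(1e-6))      # 0.0: underflows, smaller than one LSB
    ```

    On the FPGA these effects happen in hardware rather than in software, but the tradeoff is the same: pick word and integer lengths wide enough for your signal range, or accept saturation and lost low-order bits.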

  • Components of Automated Testing System for PCBA

    Cyth Systems | Whitepapers | All the details you need to know to create a system for testing circuit boards

    Components of an Automated Testing System for PCBA

    Board Holding
    Test fixtures are used to securely hold the PCBs during the testing process. These fixtures ensure proper alignment and contact between the PCB and the testing equipment. Test fixtures can be custom-designed to accommodate different board sizes and configurations, allowing for efficient testing of various PCB designs.

    Interposer Board
    Instrumentation

    Printed Circuit Board Test Equipment
    Test equipment includes a wide range of tools and instruments used to perform different types of tests on PCBs. This may include oscilloscopes, multimeters, logic analyzers, and power supplies. The choice of test equipment depends on the specific requirements of your testing process and the types of tests you need to perform.

    Printed Circuit Board Test Software
    Software plays a critical role in automated testing systems. It allows you to program and control the testing process, configure test sequences and parameters, and analyze test results. The software also provides traceability and documentation of test results, enabling manufacturers to track and analyze data for quality control purposes.

    Communication Interface
    The communication interface allows the automated testing system to interact with other systems and devices, such as production control systems or data collection systems. This interface enables seamless integration of the testing process with other stages of the manufacturing process, ensuring efficient workflow and data exchange.

    PCBA Test Data Management System & Reports
    A data management system is used to store, organize, and analyze test data. This system allows manufacturers to track and analyze data for quality control purposes, identify trends or patterns, and make informed decisions based on the test results. The data management system also provides traceability and documentation of test results, ensuring compliance with regulatory requirements and customer expectations.

    By understanding the components of an automated testing system for PCBA, you can make informed decisions when selecting the right equipment and software for your testing process. It is essential to choose a system that meets your specific requirements, offers flexibility and scalability, and integrates seamlessly with your existing manufacturing process.
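    As a rough illustration of the role the test software plays (configuring test sequences and parameters, evaluating results against limits, and keeping a traceable record), here is a small hypothetical Python sketch. The step names, limits, and the stand-in measure() call are illustrative only, not part of any real product:

    ```python
    # Hypothetical sketch of a PCBA test sequence: named measurements with limits,
    # evaluated into a traceable pass/fail report for one board serial number.
    import datetime, json, random

    def measure(step_name):
        # Placeholder for a real instrument call (DMM, oscilloscope, power supply, ...).
        return random.uniform(0, 5)

    SEQUENCE = [
        {"step": "3V3 rail voltage", "low": 3.2, "high": 3.4, "units": "V"},
        {"step": "5V rail voltage",  "low": 4.9, "high": 5.1, "units": "V"},
        {"step": "Idle current",     "low": 0.0, "high": 0.2, "units": "A"},
    ]

    def run_board(serial_number):
        results = []
        for spec in SEQUENCE:
            value = measure(spec["step"])
            results.append({**spec, "value": round(value, 3),
                            "passed": spec["low"] <= value <= spec["high"]})
        return {"serial": serial_number,
                "timestamp": datetime.datetime.now().isoformat(),
                "passed": all(r["passed"] for r in results),
                "results": results}

    print(json.dumps(run_board("PCBA-000123"), indent=2))
    ```

    In a production system, the report record is what flows on to the data management system described above for trend analysis and traceability.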

  • NI Test Forum - Phoenix

    NI Test Forum - Phoenix
    May 15, 2025 | Phoenix, AZ

    As engineering demands evolve, testing needs to be faster, more accurate, and adaptable to complex workflows. This forum provides an in-depth look at how NI's latest solutions in hardware and software can streamline and optimize testing processes across various application areas. Attendees will explore advanced validation techniques that reduce development time, boost throughput, and improve data reliability, all critical for meeting today's stringent testing requirements. Come join us for any portion of the day and engage with sessions that align with your interests to make the most of your time.

    Join industry experts for hands-on and technical sessions on test automation, real-time data acquisition, and system flexibility. Explore NI platforms like PXI, CompactDAQ, and mioDAQ through live demos. Learn about the latest Aerospace and Defense advancements, including rack-based ATE and RF solutions for SatCom, radar, and electronic warfare. Gain insights into scalable test systems that handle growing datasets, deliver fast results, and drive efficiency across your organization.

    Space is limited for this complimentary event. Make sure to register early to secure your spot!
    https://events.ni.com/profile/web/index.cfm?PKwebID=0x1476692d99

  • Technical Field Sales Engineer  Cyth Systems, Inc. San Diego CA

    Technical Field Sales Engineer  | Technical Field Sales Engineer  November1st, 2025 Jobs Cyth Systems, Inc. Technical Field Sales Engineer 9939 Via Pasar, San Diego, CA, USA Job Summary Cyth Systems is looking for a Field Sales Engineer to visit with new and existing customers, understand their technology goals, and research a solution with our own engineering team, to help customers decide on a solution to their requirements. Job Description Cyth Systems is looking for a Field Sales Engineer to visit with customers, understand their technology goals, research a solution with our own engineering team, to help customers decide how to proceed on their project. Cyth Systems is an Engineering Integration Company specializing in Automated Test Equipment (ATE), Embedded Control Systems, and Machine Vision Systems for over 20 years. We enjoy working with exciting customers both small and large, from nearly every industry, doing interesting and engaging engineering projects week after week. We combine a small business family feeling with a world-class engineering team, and customers appreciate both the personal and professional treatment we bring to their projects. We believe this makes Cyth a great place to build a satisfying and long-lasting career where we get to follow our natural interests and passions every day. In this role, you represent the tip of the spear of several respected industry brands with highly differentiated products and engineering services. You will grow your business acumen and strategic sales capabilities while consulting with engineers and their leadership to make major design and equipment investments. What is especially unique about this position is working in several industries such as Life Sciences, Semiconductor, Product Manufacturing, Sporting Goods, Aerospace, Energy and more! A typical week might include visiting 6 customers in the local area, meeting with our engineering team to review customer needs, creating a proposal to show customers solutions and options for their project, and discussing the decision with the customer and their management. As you establish trust for yourself and our brands, you become the preferred vendor for your customers for future projects and repeat orders, resulting in increased sales growth for the company. If you enjoy a collaborative environment where you can be a part of a team who build things and bring them to life, across a diverse range of industries and applications, come join our team at Cyth Systems! About Cyth Cyth Systems is an Engineering Integration Company specializing in Automated Test Equipment, Embedded Control Systems, and Machine Vision Systems. We enjoy working with our customers, both small and large, from nearly every market. We combine a world-class engineering team, and our customers appreciate both the personal and professional treatment we bring to their projects. You will grow your business acumen and strategic sales capabilities while consulting with engineers and their leadership to make major design and equipment investments. What is especially unique about this position is working in several industries such as Life Sciences, Semiconductor, Product Manufacturing, Sporting Goods, Aerospace, Energy and more. If you enjoy a collaborative environment where you can be a part of a team who build things and bring them to life, across a diverse range of industries and applications, come join our team at Cyth Systems! 
    Qualifications This position requires skill and experience in the following areas: Experience working with and proposing engineering services and solutions. Experience selling in a long sales cycle with complex custom engineering for both hardware and software solutions. Knowledge of Test Automation and embedded control hardware and software. Experience selling to engineering leadership, including Directors and VPs. 3+ years’ experience in business-to-business high tech sales. Excellent English communication (written and spoken). Responsibilities Planning and executing territory and account development initiatives to generate demand in identified areas of greatest opportunity. Technical consulting with customer engineers and managers to understand and address their technical and business requirements with the best Cyth and Partner products and services that meet their needs. Managing and closing sales opportunities through collaboration with Cyth and Partner resources. Networking and discovery within assigned accounts to engage with new groups, create and sustain valued relationships with customer leadership, and identify new qualified sales opportunities. Achieve annual quota and quarterly targets to earn commission and bonus. Generate demand for products through effective top account plans and overall territory strategy. Effectively lead Cyth and Partner resources to close sales. Adapt and lead client presentations and product demonstrations. Learn the Cyth and Partner product and service offerings and relevant value propositions. Demonstrate account knowledge and ability to impact revenue by utilizing sales tracking systems and providing accurate forecasts. Other requirements Technical degree with a major in Electrical, Computer, Mechanical Engineering, Physics, Computer Science, or similar. Sales: 3 years. Technical sales: 1 year. Valid driver’s license and reliable transportation. Travel locally to visit customers. Ability to work in-office full-time in San Diego, CA. Must be able to lift 5–10 pound objects 5–10 times per day. Prolonged periods sitting at a desk and working on a computer. Ability to commute/relocate: San Diego, CA 92126: Reliably commute or planning to relocate before starting work (Required). Experience: Sales: 1 year (Required). Technical sales: 1 year (Required). Work Location: In person. Job Type: Full-time. Supplemental pay types: Commission pay. Schedule: 8-hour shift. Submit your resume today.

  • Project Engineer Cyth Systems, Inc. San Diego, CA

    Project Engineer November 1st, 2025 Jobs Cyth Systems, Inc. 9939 Via Pasar, San Diego, CA, USA Job Summary Reports to: Director of Engineering Exemption Status: Full-time Exempt Location: Onsite Job Description We are looking for a motivated project engineer to join our dynamic engineering team, responsible for tackling a variety of projects. Your main responsibility will be designing and developing custom systems for various applications alongside the broader engineering team and project architect. System assembly and validation will be your critical job function, along with an equal degree of programming within LabVIEW to assist in project completion. About Cyth Cyth Systems is an expanding engineering company that works alongside the most impressive and exciting companies in California, the United States, and Europe. Cyth Systems services companies in multiple industries including life sciences, biotech, automotive, energy, semiconductor, technology, and manufacturing. The dynamic nature of our customer base provides our team with an exciting and fulfilling career full of opportunity to develop as an engineering professional. We are a purpose-led and performance-driven team, and we look forward to your fresh perspective and contribution as we support innovation. Qualifications Bachelor's degree in a STEM-related field or equivalent experience. 1+ years of engineering-related experience. Experience with a high-level programming language; LabVIEW highly preferred. Analytical and problem-solving skills. Self-motivated to excel in assigned responsibilities and team projects. Responsibilities Lead development of embedded, automated test, and vision systems. Support various projects with LabVIEW design and architecture. Collaborate with the engineering team to design, assemble, test, and validate projects on both hardware and software fronts. Produce professional documentation related to system design and function. Review LabVIEW code for debugging, error diagnosis, and documentation. Other requirements Valid driver's license and reliable transportation. Ability to work in-office full-time in San Diego, CA. Must be able to lift 20–50 pound objects 5–10 times per day. Prolonged periods of sitting at a desk and working on a computer. Job Type: Full-time Salary: $69,000.00 - $95,000.00 per year Benefits: 401(k) matching, Dental insurance, Health insurance, Life insurance, Paid time off, Vision insurance Schedule: 8-hour shift Ability to commute/relocate: San Diego, CA 92126: Reliably commute or planning to relocate before starting work (Required) Work Location: In person Submit your resume today.

  • The Benefits of Automated PCBA Testing in Modern Manufacturing

    Cyth Systems | Whitepapers | Automated Printed Circuit Board Testing | The Benefits of Automated PCBA Testing in Modern Manufacturing The Benefits of Automated PCBA Testing in Modern Manufacturing The Importance of PCBA Testing in Manufacturing Ensuring Product Quality and Reliability As electronic devices become more complex, the importance of testing each PCBA for functionality and performance is paramount. Failures at the PCBA level can result in product recalls, costly rework, and damage to brand reputation. Effective testing catches defects early in the production process, ensuring that only high-quality products reach customers. Types of PCBA Testing Methods There are several methods for testing PCBAs, including: In-Circuit Testing (ICT): Verifies component placement, solder integrity, and electrical functionality. Functional Testing (FCT): Simulates the operation of the board in real-world conditions to ensure that it performs as expected. Boundary Scan Testing: Tests for connectivity issues on boards with limited test points, leveraging JTAG standards. Automated Optical Inspection (AOI): Uses cameras and image processing to inspect boards for visual defects like incorrect components, misalignments, or poor solder joints. Flying Probe Testing: Uses movable probes to test individual nets and components on low-volume or prototype PCBAs. Challenges of Manual Testing in Modern Manufacturing Manual testing of PCBAs is labor-intensive, time-consuming, and prone to human error. As the complexity of circuits and components grows, manual methods struggle to keep up, resulting in bottlenecks, inconsistent test results, and higher costs. Automated testing addresses these challenges, making it indispensable in modern manufacturing environments. The Advantages of Automated PCBA Testing 1. Increased Efficiency and Throughput Automated testing systems are designed for speed, allowing manufacturers to test a higher volume of PCBAs in less time compared to manual methods. High-speed testers, such as ICT systems, can test hundreds or thousands of points in seconds, significantly reducing test times per unit. Key Benefits: Scalability : Automated systems can easily scale with production volume, supporting high-mix, low-volume to high-volume production. Reduced Cycle Times : Faster test cycle times lead to shorter production cycles, accelerating time-to-market for products. Continuous Operation : Automated systems can operate 24/7 with minimal downtime, increasing overall production capacity. 2. Improved Accuracy and Consistency Human error is one of the biggest risks in manual testing, leading to missed defects, incorrect test results, and inconsistencies in quality. Automated testing eliminates these risks by providing consistent, repeatable results with high precision. Key Benefits: High Repeatability : Automated systems ensure the same test conditions for every board, leading to consistent results across production runs. Precision Measurement : Automated testers can perform high-precision measurements of electrical signals, ensuring compliance with tight tolerances. Reliable Defect Detection : Automated systems use advanced algorithms and image processing to detect subtle defects that are difficult to catch manually. 3. Cost Reduction and Better ROI While the initial investment in automated test equipment can be significant, the long-term cost savings are substantial. Automated testing reduces labor costs, minimizes rework and scrap, and improves yield rates. 
Key Benefits: Lower Labor Costs : Automated systems require fewer operators, freeing up human resources for other value-added tasks. Higher First-Pass Yield : By catching defects early, automated testing minimizes rework, scrap, and the cost of warranty returns. Faster Return on Investment (ROI) : The combined benefits of increased throughput, reduced errors, and lower labor costs lead to faster ROI. 4. Enhanced Test Coverage and Flexibility Modern PCBAs contain densely packed components with complex layouts, making it difficult for traditional testing methods to achieve full test coverage. Automated systems, especially those integrating multiple test methods (e.g., ICT combined with AOI), provide comprehensive test coverage for even the most complex boards. Key Benefits: Multi-Technology Integration : Automated systems can integrate multiple test methods into a single platform, covering electrical, functional, and visual inspections. Configurable Test Programs : Automated testers allow for easy reconfiguration and updating of test programs, accommodating design changes and new product introductions (NPIs). Adaptability to Different Board Types : Automated systems can handle a wide range of board sizes, component densities, and layouts, providing flexibility for high-mix production environments. 5. Data Collection and Analytics for Continuous Improvement Automated PCBA testing systems generate large amounts of data that can be used to drive continuous improvement in the manufacturing process. Advanced test software can analyze test results, identify trends, and provide insights into process variations, helping manufacturers address root causes of defects. Key Benefits: Real-Time Monitoring : Automated systems provide real-time data on test results, allowing for immediate action on detected issues. Process Optimization : Detailed analytics enable manufacturers to fine-tune processes, leading to better yields and reduced defect rates. Predictive Maintenance : Data from automated testing can be used to predict when equipment needs maintenance, minimizing unexpected downtime. 6. Compliance with Industry Standards and Traceability For industries with stringent regulatory requirements, such as aerospace, medical devices, and automotive, maintaining detailed records and traceability is essential. Automated testing systems can automatically generate test reports, logs, and compliance documentation. Key Benefits: Standardized Testing Procedures : Automated systems ensure that every board is tested according to predefined standards and procedures. Traceability : Automated systems maintain detailed logs of each test, providing traceability for regulatory compliance and audit purposes. Regulatory Compliance : Automated systems help manufacturers meet industry-specific standards, such as IPC, ISO, and CE, with documented proof of quality assurance. 7. Reducing Time-to-Market In today’s competitive market, getting products to market quickly is a critical advantage. Automated testing shortens development cycles by speeding up the validation process and reducing delays caused by manual testing bottlenecks. Key Benefits: Accelerated Prototyping and Validation : Automated systems enable rapid testing of prototypes, helping engineers quickly identify design issues and iterate faster. Streamlined Production Ramp-Up : Automated testing ensures consistent quality as production scales from pilot runs to full-scale manufacturing. 
Faster Product Launches : By reducing time spent on manual testing and rework, automated systems help manufacturers launch products faster, capturing market opportunities more effectively. Future Trends in Automated PCBA Testing Integration with Industry 4.0 and Smart Manufacturing Automated testing systems are increasingly being integrated into smart manufacturing environments, where data from test equipment is connected to enterprise systems for end-to-end visibility, predictive maintenance, and process optimization. AI-Driven Test Algorithms Artificial intelligence and machine learning are being incorporated into automated test systems to enhance defect detection, optimize test strategies, and adapt testing procedures based on real-time data. Miniaturization and Higher Complexity Testing As PCBs become smaller and more densely packed, automated test systems will need to evolve to handle micro-components and more complex multi-layered boards, necessitating advances in test equipment precision and versatility. Conclusion Automated PCBA testing offers a range of benefits that are essential for modern electronics manufacturing. From improving test accuracy and efficiency to reducing costs and accelerating time-to-market, automated systems have become indispensable in ensuring the quality and reliability of today’s electronics. As technology evolves, the role of automated testing will only grow in importance, helping manufacturers stay competitive in an increasingly complex and demanding industry.
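    To make the data-collection and analytics benefits described above more concrete, the sketch below shows one way test-result logs might be aggregated into a first-pass-yield figure and a failure Pareto. It is a minimal, hypothetical illustration in Python: the record fields (serial_number, step, passed) and the sample data are invented for the example and are not tied to any particular tester, Cyth system, or NI toolchain.

```python
from collections import Counter, defaultdict

# Hypothetical PCBA test records, assumed to be in chronological order.
records = [
    {"serial_number": "SN001", "step": "ICT", "passed": True},
    {"serial_number": "SN001", "step": "FCT", "passed": True},
    {"serial_number": "SN002", "step": "ICT", "passed": False},
    {"serial_number": "SN002", "step": "ICT", "passed": True},   # retest after rework
    {"serial_number": "SN002", "step": "FCT", "passed": True},
    {"serial_number": "SN003", "step": "ICT", "passed": True},
    {"serial_number": "SN003", "step": "FCT", "passed": False},
]

first_results = defaultdict(dict)   # first attempt per board and test step
failure_pareto = Counter()          # failures per test step (all attempts)

for rec in records:
    board, step = rec["serial_number"], rec["step"]
    first_results[board].setdefault(step, rec["passed"])
    if not rec["passed"]:
        failure_pareto[step] += 1

# First-pass yield: boards whose first attempt passed every step.
fpy_pass = sum(all(steps.values()) for steps in first_results.values())
fpy = fpy_pass / len(first_results)

print(f"First-pass yield: {fpy:.1%}")
print("Failure Pareto (step -> failure count):")
for step, count in failure_pareto.most_common():
    print(f"  {step}: {count}")
```

    In a production setting the same aggregation would typically run against the tester's database or report files rather than an in-memory list, but the yield and Pareto calculations stay the same.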

  • Offshore Technology Conference (OTC) 2025

    Offshore Technology Conference (OTC) 2025 May 5, 2025 Houston, TX The Offshore Technology Conference (OTC) 2025 is a large, annual event focused on the offshore energy sector, bringing together professionals from around the world to share knowledge and advancements in offshore technologies. It's a platform for discussing and exploring innovations in areas like oil and gas, renewable energy sources (solar, wind, hydrogen), and other marine resources. The conference features exhibitors showcasing cutting-edge technologies, a technical program with sessions on various industry topics, and networking opportunities for attendees. Here's a more detailed breakdown: Dates and Location: OTC 2025 will be held from May 5-8, 2025, at NRG Park in Houston, Texas. Purpose: OTC serves as a hub for exchanging ideas and opinions to advance scientific and technical knowledge related to offshore resources and environmental matters. Key Activities: Exhibitions: Over 1,200 exhibitors will showcase their latest technologies and solutions for the offshore energy sector. Technical Program: A wide range of sessions and panels will address key issues and challenges facing the offshore energy industry, including topics like carbon capture and storage, subsurface approaches, and innovative drilling and completion techniques. Networking: Attendees can connect with industry leaders, experts, and peers to build relationships and gain valuable insights. Focus Areas: The conference covers a broad spectrum of offshore energy topics, including oil and gas, renewable energy sources (solar, wind, hydrogen), and other marine resources. Organized by: OTC is a collaboration of 15 non-profit organizations dedicated to supporting the global energy sector, according to Tethys.pnnl.gov.

  • The Most Important Considerations of an Embedded System Design

    Cyth Systems | Whitepapers | The NI RIO Platform solves each of these design challenges | The Most Important Considerations of an Embedded System Design The Most Important Considerations of an Embedded System Design 1. Real-Time Performance and Deterministic Control In many embedded applications, precise timing and deterministic behavior are critical. Systems such as industrial controllers, robotics, and automated test equipment require predictable and consistent execution of tasks to ensure that operations run smoothly. Challenges: Maintaining consistent response times in time-sensitive applications. Executing control algorithms and feedback loops with microsecond-level accuracy. Handling complex timing, synchronization, and control tasks in real-time. How the NI RIO Platform Solves This: LabVIEW Real-Time OS : The CompactRIO and Single-Board RIO platforms run on a real-time operating system (NI Linux Real-Time) that guarantees deterministic behavior, ensuring that critical tasks are executed with consistent timing. Built-In FPGA : The integrated FPGA provides ultra-low-latency control and precise timing, allowing for rapid response to external events, high-speed signal processing, and parallel execution of control tasks. Applications: Precision motion control in robotics. Real-time monitoring and control in process automation. High-speed automated testing and data acquisition. 2. Flexibility and Customization Embedded systems must often cater to specific, sometimes unique, application requirements. Customizable hardware and software are essential for achieving the desired functionality, especially in applications that involve complex or evolving demands. Challenges: Designing systems that can be easily customized to meet specific application needs. Modifying control logic or I/O configurations without extensive hardware redesigns. Accommodating both current and future application requirements with scalable solutions. How the NI RIO Platform Solves This: Reconfigurable FPGA : The NI RIO platform’s FPGA can be fully customized using LabVIEW FPGA, allowing engineers to tailor timing, signal processing, and control logic precisely to their application’s needs. Modular I/O Options : The RIO platform supports a wide range of C Series modules for analog, digital, motion, and communication I/O, enabling designers to create bespoke configurations without custom hardware development. Scalability : The NI RIO platform is highly scalable, allowing systems to evolve with additional I/O modules or upgrades as application requirements change. Applications: Customized embedded systems for niche applications. Rapid prototyping of new designs with scalable hardware. Embedded control systems requiring frequent updates and reconfigurations. 3. Reliability and Ruggedness in Harsh Environments Embedded systems are frequently deployed in environments with extreme temperatures, vibrations, and other harsh conditions. Ensuring reliability and durability in such settings is critical for long-term operational success. Challenges: Operating reliably in environments with wide temperature ranges, dust, moisture, and mechanical shock. Maintaining performance and accuracy despite environmental stressors. Minimizing system downtime and maintenance needs. How the NI RIO Platform Solves This: Rugged Hardware Design : CompactRIO is designed for industrial-grade applications with a rugged enclosure, extended operating temperature range, and resistance to vibration and shock. 
Field-Ready Solutions : The platform’s durability and long lifecycle support ensure that systems continue to operate effectively in challenging environments, reducing the need for maintenance and minimizing downtime. Integrated Diagnostics and Monitoring : The real-time operating system and FPGA offer built-in diagnostics, enabling continuous monitoring of system health and predictive maintenance. Applications: Outdoor deployments in energy management and monitoring systems. Transportation and automotive testing in extreme conditions. Remote and unmanned monitoring stations in harsh environments. 4. Seamless Integration with Existing Systems and Protocols Embedded systems rarely operate in isolation—they must integrate with other devices, systems, and networks. Compatibility with standard industrial protocols and easy integration into existing architectures are key considerations. Challenges: Ensuring compatibility with a wide range of industrial communication protocols. Integrating embedded systems with existing SCADA, MES, and enterprise IT systems. Managing data flow and synchronization across distributed control networks. How the NI RIO Platform Solves This: Broad Protocol Support : The RIO platform natively supports a wide array of industrial communication protocols such as EtherCAT, Modbus, CAN, and Ethernet/IP, enabling seamless integration with other control systems and networks. Flexible Networking Options : The platform offers multiple connectivity options including Ethernet, serial, and wireless, allowing easy interfacing with both legacy systems and modern IIoT architectures. LabVIEW Integration : The RIO platform integrates with LabVIEW, simplifying the process of communicating with external systems and synchronizing data across complex networks. Applications: Integrating embedded controllers into larger automation and control systems. Data acquisition and monitoring in multi-site industrial networks. Interfacing with legacy devices and modernizing existing control infrastructure. 5. Development Speed and Time-to-Market Reducing time-to-market is crucial for companies in competitive industries. A development platform that allows for rapid design, prototyping, and deployment accelerates project timelines and enables faster iterations. Challenges: Speeding up the design, testing, and deployment cycles without compromising quality. Quickly prototyping systems and refining designs based on early feedback. Streamlining the transition from prototype to production. How the NI RIO Platform Solves This: Rapid Prototyping with LabVIEW : The graphical programming environment of LabVIEW allows engineers to design, simulate, and test embedded applications quickly, enabling rapid iteration and reducing development time. Seamless Transition from Prototype to Production : The RIO platform’s modularity and scalability allow developers to move from initial proof-of-concept to full production with minimal redesign, accelerating time-to-market. Pre-Built Libraries and Toolkits : NI provides extensive libraries and toolkits for signal processing, control algorithms, and communication protocols, reducing the need to develop solutions from scratch. Applications: Fast-tracked product development in industrial automation. Quick iteration cycles in R&D for new technologies. Streamlining pilot projects and scaling them into full-scale production. 6. 
Future-Proofing and Long-Term Support Embedded systems often have long operational lifecycles, making it important to choose platforms that offer long-term support, flexibility for upgrades, and the ability to evolve with changing technological requirements. Challenges: Ensuring long-term availability and support for embedded platforms. Designing systems that can be upgraded as new technology becomes available. Adapting to future needs without completely redesigning the system. How the NI RIO Platform Solves This: Long Lifecycle Support : NI provides long-term support for its embedded platforms, ensuring that critical systems remain operational for years, even in industries with strict regulatory requirements. Upgradeable Hardware and Software : The modular nature of the RIO platform allows easy upgrades to both hardware and software, accommodating future enhancements without major redesigns. Future-Ready Architecture : The platform’s flexibility, scalability, and support for evolving technologies (e.g., IIoT, AI integration) make it a future-proof choice for embedded system designs. Conclusion The design of embedded systems requires careful consideration of performance, reliability, scalability, and integration, all while balancing time-to-market and long-term operational requirements. The NI RIO platform, with its powerful combination of modular hardware, reconfigurable FPGAs, real-time capabilities, and seamless software integration through LabVIEW, offers a comprehensive solution that addresses these key design considerations. By choosing NI RIO for embedded system design, engineers can ensure that their systems are robust, flexible, and future-ready, delivering reliable performance across a wide range of demanding applications.
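    As a rough companion to the real-time performance discussion above, the sketch below measures the period jitter of a fixed-rate software loop running on a general-purpose OS. It is a host-side prototyping aid only, with an assumed 1 kHz target rate and a placeholder process_sample function; on a CompactRIO-class target the equivalent acquisition and control logic would run under LabVIEW Real-Time or on the FPGA to obtain the deterministic timing this whitepaper describes.

```python
import time
import statistics

LOOP_RATE_HZ = 1_000            # assumed control-loop rate, for illustration only
PERIOD_S = 1.0 / LOOP_RATE_HZ
N_ITERATIONS = 2_000

def process_sample() -> None:
    """Placeholder for the acquisition and control work done each iteration."""
    pass

periods = []
next_deadline = time.perf_counter()
last_wake = next_deadline

for _ in range(N_ITERATIONS):
    next_deadline += PERIOD_S
    process_sample()
    # Sleep in short slices until the next deadline, then record the actual period.
    while True:
        now = time.perf_counter()
        if now >= next_deadline:
            break
        time.sleep(min(PERIOD_S / 10, next_deadline - now))
    periods.append(now - last_wake)
    last_wake = now

jitter_us = [abs(p - PERIOD_S) * 1e6 for p in periods]
print(f"mean period: {statistics.mean(periods) * 1e6:.1f} us")
print(f"max jitter:  {max(jitter_us):.1f} us")
print(f"p99 jitter:  {sorted(jitter_us)[int(0.99 * len(jitter_us))]:.1f} us")
```

    Running a measurement like this under full system load is one simple way to see why desktop operating systems cannot guarantee the microsecond-level determinism that safety-critical loops require.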

  • Measuring Pressure Key Fundamentals Guide | Cyth Systems

    Measuring Pressure Key Fundamentals Guide This guide helps you with basic pressure concepts and with understanding how different pressure sensors work. There are a variety of sensors to choose from, each of which has its own operating principles, benefits, considerations, and drawbacks. After you decide on your sensor, you can consider the required hardware and software to condition, acquire, and visualize pressure measurements. You can also consider any hardware packages you may need. What is Pressure Pressure is defined as the force per unit area that a fluid exerts on its surroundings. The equation below expresses pressure (P) as a function of force (F) and area (A): P = F/A The pascal (N/m²) is the SI unit for pressure, but other common units include pounds per square inch (psi), atmospheres (atm), bars, and torr. A container full of gas contains countless atoms and molecules that are constantly bouncing off its walls. The pressure is the average force of these atoms and molecules on the container's walls per unit of area. Moreover, pressure does not have to be measured along the wall of a container but rather can be measured as the force per unit area along any plane. Air pressure, for example, is a function of the weight of the air pressing down on Earth. Therefore, as altitude increases, pressure decreases. Similarly, as scuba divers go deeper into the ocean, the pressure increases. A pressure measurement can be described as either static or dynamic. The pressure in cases with no motion is static pressure. Examples of static pressure include the pressure of the air inside an oxygen tank or water inside a basin. Often, the motion of a fluid changes the force applied to its surroundings. Say the pressure of water in a kitchen faucet with the nozzle closed is 35 pounds per square inch (force per unit area). If one opens the nozzle, the pressure drops as the water exits the faucet. An accurate pressure measurement notes the circumstances under which it is made. Factors include flow, fluid compressibility, and any external forces. Measuring Pressure A pressure measurement can be described by the type of measurement being performed. The three methods for measuring pressure are absolute, gauge, and differential. Absolute pressure is referenced to the pressure in a vacuum, whereas gauge and differential pressures are referenced to other pressures such as the ambient atmospheric pressure or the pressure in an adjacent vessel. Absolute Pressure The absolute measurement method is relative to 0 Pa, the static pressure in a vacuum. The pressure being measured is acted upon by atmospheric pressure in addition to the pressure of interest. Therefore, absolute pressure measurement includes the effects of atmospheric pressure. This type of measurement is well-suited for atmospheric pressures such as those used in altimeters or vacuum pressures. Often, the abbreviations Paa (Pascal’s absolute) or psia (pounds per square inch absolute) are used to describe absolute pressure. Gauge Pressure Gauge pressure is a measurement relative to ambient atmospheric pressure. This requires that both the reference and the pressure of interest are acted upon by atmospheric pressure. Gauge pressure measurement excludes the effects of atmospheric pressure. These types of measurements include tire pressure and blood pressure measurements.
    The abbreviations Pag (Pascal’s gauge) or psig (pounds per square inch gauge) are used to describe gauge pressure. Differential Pressure Differential pressure is similar to gauge pressure; the difference is that the reference is another pressure point in the system rather than the ambient atmospheric pressure. One can use this method to maintain relative pressure between two vessels such as a compressor tank and a line feeding the tank. The abbreviations Pad (Pascal’s differential) or psid (pounds per square inch differential) are the applicable units. Differences in measurement conditions, ranges, and materials used in the construction of a sensor lead to a variety of pressure sensor designs. One can often convert pressure to an intermediate form, such as displacement, by detecting the amount of deflection on a diaphragm positioned in line with the fluid. The sensor then converts this displacement into an electrical output such as voltage or current. If the area of the diaphragm is known, one can then calculate pressure. Pressure sensors are packaged with a scale that provides a method to convert units. Choosing the Right Pressure Sensor The three most universal types of pressure transducers are the bridge (strain gage based), variable capacitance, and piezoelectric. Bridge-Based Sensors Bridge-based sensors operate by correlating a physical measurement, like pressure, to a change in resistance in one or more legs of a Wheatstone bridge. They are the most universal type of sensor because they meet a variety of accuracy, size, and ruggedness constraints. Bridge-based sensors measure absolute, gauge, or differential pressure in both high- and low-pressure applications. They do this by using a strain gage to detect the deformation of a diaphragm subjected to the applied pressure. When the diaphragm deflects due to a change in pressure, a corresponding change in resistance is induced on the strain gage, which you can measure with a conditioned DAQ system. You can bond foil strain gages to a diaphragm or to an element that is mechanically connected. If one uses silicon gages, they etch resistors on a silicon-based substrate and use transmission fluid to transmit the pressure from the diaphragm to the substrate. Because of their simple construction and durability, these sensors are ideal for higher-channel-count systems. In general, foil strain gages are used in high-pressure (up to 700 MPa) applications. They also have a higher operating temperature than silicon strain gages (200 °C versus 100 °C, respectively), but silicon strain gages offer the benefit of larger overload capability. Because they are more sensitive, silicon strain gages are also often preferred in low-pressure applications (~2 kPa). Capacitive and Piezoelectric Pressure Sensors A variable capacitance pressure transducer measures the change in capacitance between a metal diaphragm and a fixed metal plate. The capacitance between two metal plates will change if the distance between these two plates changes due to applied pressure. Piezoelectric sensors rely on the electrical properties of quartz crystals rather than a resistive bridge. These crystals produce an electrical charge when they are strained. Electrodes transfer the charge from the crystals to an amplifier built into the sensor. These sensors do not require an external excitation source but are susceptible to vibration.
    Capacitive and piezoelectric pressure transducers are generally stable and linear but are sensitive to high temperatures; they also respond quickly to pressure changes. For this reason, they are often used to make rapid pressure measurements from events such as explosions. Because of their superior dynamic performance, piezoelectric sensors are the least cost-effective option and must be cared for to protect their sensitive crystal core.
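    As a simple numeric companion to the bridge-sensor discussion above, the sketch below converts a bridge-based transducer's ratiometric output (mV per volt of excitation) into pressure, assuming a linear sensor. The full-scale rating, excitation voltage, and measured value are made-up example numbers, not specifications for any particular sensor or signal conditioner.

```python
# Hypothetical bridge-based pressure transducer, assumed linear:
#   0 mV/V -> 0 psi, 2 mV/V -> 100 psi (full scale)
FULL_SCALE_PRESSURE_PSI = 100.0
FULL_SCALE_OUTPUT_MV_PER_V = 2.0
EXCITATION_V = 10.0            # bridge excitation supplied by the DAQ/conditioner
PSI_TO_KPA = 6.894757

def bridge_output_to_pressure(measured_bridge_mv: float) -> float:
    """Convert a measured bridge voltage (mV) to pressure (psi) for a linear sensor."""
    ratiometric_mv_per_v = measured_bridge_mv / EXCITATION_V
    fraction_of_full_scale = ratiometric_mv_per_v / FULL_SCALE_OUTPUT_MV_PER_V
    return fraction_of_full_scale * FULL_SCALE_PRESSURE_PSI

measured_mv = 13.5             # example reading from the conditioned DAQ channel
pressure_psi = bridge_output_to_pressure(measured_mv)
print(f"{pressure_psi:.1f} psi  ({pressure_psi * PSI_TO_KPA:.1f} kPa)")
```

    Real sensors usually include a calibration certificate with offset and sensitivity values; those would replace the idealized zero offset and full-scale numbers assumed here.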

  • BioMedDevice 2023

    BioMedDevice 2023 November 15, 2023 Santa Clara, CA BioMed Device 2023, now known as MEDevice Silicon Valley, is an event focused on medical device innovation and manufacturing. It brings together engineers, business leaders, and innovators from the medical device industry to explore new technologies, foster partnerships, and accelerate progress in the field. The event serves as a platform for groundbreaking solutions, supplier partnerships, and discussions about the future of medical technology. Key aspects of BioMed Device 2023 included: Innovation Hub: It served as a central location for showcasing and discovering new medical device technologies. Networking Opportunities: The event facilitated connections between engineers, business leaders, and innovators. Educational Content: It offered insights into the latest trends and advancements in the medical device industry, including discussions on regulations, cybersecurity, and AI in healthcare. Supplier Partnerships: The event fostered collaborations between medical device companies and suppliers. MEDevice Silicon Valley: The event is now part of the MEDevice series, with the Silicon Valley edition focused on the MedTech hub.

  • NI Connect 2024

    NI Connect 2024 May 20, 2024 Austin, TX NI Connect 2024 was National Instruments' annual event for engineers and industry leaders, focused on test and data analytics. It featured technical sessions, keynotes, and networking opportunities to explore the latest innovations in test and measurement. The event, held in Austin, Texas, aimed to help businesses improve performance and gain a competitive edge. NI Connect 2024 included: Keynotes: Discussions on the future of test and measurement and the role of AI in intelligent testing. Technical Sessions: Presentations on optimizing test data, modernizing labs, and digital transformation. Networking: Opportunities to connect with peers and share ideas. Demos: Showcasing new NI products and solutions, including the Battery Production Tester.

  • NI Test Forum - San Diego

    NI Test Forum - San Diego June 25, 2025 San Diego Discover What’s Next in Test Engineering at the NI Test Forum – San Diego | June 25th, 2025 As engineering challenges grow more complex, the need for faster, more accurate, and scalable test solutions has never been greater. Join us for a full day of technical sessions and hands-on demonstrations showcasing how NI’s latest hardware and software innovations can help streamline your test workflows—from development to deployment. Explore real-world solutions that reduce development time, increase throughput, and ensure data integrity across a range of applications. Whether you're focused on Aerospace and Defense, Life Sciences, or Semiconductors & Electronics, this forum offers tailored insights to help you stay ahead. Technical sessions led by industry experts on topics like test automation, real-time data acquisition, and system flexibility. Live demos featuring PXI, CompactDAQ, and mioDAQ platforms. Deep dives into cutting-edge applications: · RF and rack-based ATE solutions for SatCom, radar, and electronic warfare · Medical Devices: Strategies to meet unique testing challenges · Semiconductors: Approaches to evolving test requirements and high-volume data Join us for any part of the day and attend the sessions that matter most to you. Walk away with practical insights and scalable solutions to accelerate your testing strategy. Register here: https://events.ni.com/profile/web/index.cfm?PKwebID=0x149075abcd&source=cyth

  • NI Test Forum: Boston

    NI Test Forum: Boston September 9, 2025 Boston Join us at the NI Test Forum as we explore the future of test and measurement in an increasingly complex engineering landscape. As demands for faster, smarter, and more flexible testing grow, this forum offers a deep dive into NI’s latest hardware and software innovations designed to streamline validation workflows, reduce development time, and boost data reliability across a range of industries. Throughout the day, you'll have the chance to connect with industry experts, get hands-on with NI platforms like PXI, CompactDAQ, and mioDAQ, and see real-world demos of cutting-edge test systems in action. Topics include test automation, real-time data acquisition, and scalable solutions for Aerospace & Defense, Energy, and Semiconductor & Electronics applications—covering everything from RF testing for radar and SatCom to high-throughput semiconductor validation. Cyth Systems will be there! Ask us how our team helps accelerate automated test projects using NI tools, and we’d love to chat about how we can support your engineering goals.

  • Del Mar Electronics & Manufacturing Show 2023

    Del Mar Electronics & Manufacturing Show 2023 April 26, 2023 The Del Mar Electronics & Manufacturing Show 2023 (DMEMS) was a two-day event held on April 26th and 27th at the Del Mar Fairgrounds in Del Mar, California. It is a premier event for those involved in the design, manufacture, or testing of products, with a focus on consumer products and services in the field of electronics and technology. The show featured exhibits of IC products and services, electronic components, fabrication products, engineering software, contract manufacturing services, and more. It also included high-tech presentations, educational seminars, and networking opportunities.

  • Measuring Direct Current (DC) Voltage Guide | Cyth Systems

    Measuring Direct Current (DC) Voltage Guide Measuring Voltage Voltage is the difference of electrical potential between two points of an electrical or electronic circuit, expressed in volts. It measures the potential energy of an electric field to cause an electric current in an electrical conductor. To measure voltage, two considerations need to be addressed: 1) the reference point against which the voltage is measured, and 2) the type of signal source. The two methods to measure voltage are ground referenced and differential. Common signal source types are floating signal sources and grounded signal sources. Both signal sources have optimal connection diagrams based on the individual measurement method. Please note that depending on the type of signal, a particular voltage measurement method may provide better results than others. Learn more about Field Wiring and Noise Considerations for Analog Signals. Measurement Reference Point Methods There are two methods to measure voltages: ground referenced and differential. Ground-Referenced Voltage Measurement (RSE or NRSE) One voltage measurement method is to measure voltage with respect to a common, or “ground,” point. Usually, these “grounds” are stable and around 0 V. Historically, the term ground originated from ensuring the voltage potential is at 0 V by connecting the signal to the earth. Ground-referenced input connections are good for a channel that meets the following conditions: The input signal is high-level (greater than 1 V) The leads connecting the signal to the device are less than 3 m (10 ft) The input signal can share a common reference point with other signals The ground reference is provided by either the device taking the measurement or by the external signal being measured. When the ground is provided by the device, this setup is called ground-referenced single-ended mode (RSE), and when the ground is provided by the signal, the setup is called nonreferenced single-ended mode (NRSE). Differential Voltage Measurement (DIFF) Another way to measure voltage is to determine the “differential” voltage between two separate points in an electrical circuit. For example, to measure the voltage across a single resistor, you measure the voltage at both ends of the resistor. The difference between the voltages is the voltage across the resistor. Usually, you can use differential voltage measurements to determine the voltage that exists across individual elements of a circuit—or you can use this method when the signal sources are noisy. Differential input connections are particularly well suited for a channel that meets any of the following conditions: The input signal is low-level (less than 1 V) The leads connecting the signal to the device are greater than 3 m (10 ft) The input signal requires a separate ground reference point or return signal The signal leads travel through noisy environments In differential mode, the negative signal is wired to the analog input pin that is paired with the analog channel connected to the positive signal. The disadvantage of differential mode is that it effectively reduces the number of analog input measurement channels by half. Types of Signal Sources Before configuring the input channels and making signal connections, you must determine whether the signal sources are floating or ground referenced.
Floating Signal Sources A floating signal source is not connected to the building ground system but has an isolated ground reference point. Some examples of floating signal sources are outputs of transformers, thermocouples, battery-powered devices, optical isolators, and isolation amplifiers. An instrument or device that has an isolated output is a floating signal source. The ground reference of a floating signal must be connected to the ground of the device to establish a local or onboard reference for the signal. Otherwise, the measured input signal varies as the source floats outside the common-mode input range. For floating signals, you have several options when it comes to input configurations: differential (DIFF), single-ended ground referenced (RSE), or single-ended non-referenced (NRSE). Figure 1. Floating signal source with recommended input configurations. Ground-Referenced Signal Sources A ground-referenced signal source is connected to the building system ground, so it is already connected to a common ground point with respect to the device, assuming that the measurement device is plugged into the same power system as the source. Non-isolated outputs of instruments and devices that plug into the building power system fall into this category. The difference in ground potential between two instruments connected to the same building power system is typically between 1 and 100 mV, but the difference can be much higher if power distribution circuits are improperly connected. If a grounded signal source is incorrectly measured, this difference can appear as measurement error. Following the connection instructions for grounded signal sources can eliminate the ground potential difference from the measured signal. For grounded signals, you have two options when it comes to input configurations, differential (DIFF) or single-ended non-referenced (NRSE). NI does not recommend that you use single-ended ground referenced input configurations for grounded signal sources. Figure 2. Grounded-referenced signal source with input configurations Grounded Signal Source Input Configuration For grounded signals, you have two options when it comes to input configurations. Note: NI does not recommend that you use single-ended ground referenced input configurations for grounded signal sources. Voltage Measurement Considerations When measuring voltage, you should consider things such as high-voltage measurement, ground loops, common-mode voltage, and isolation topologies. High-Voltage Measurements and Isolation There are many issues to consider when measuring higher voltages. When specifying a data acquisition system, the first question you should ask is whether the system will be safe. Making high-voltage measurements can be hazardous to your equipment, to the unit under test, and even to you and your colleagues. To ensure that your system is safe, you should provide an insulation barrier between the user and hazardous voltages with isolated measurement devices. Isolation, a means of physically and electrically separating two parts of a measurement device, can be categorized into electrical and safety isolation. Electrical isolation pertains to eliminating ground paths between two electrical systems. By providing electrical isolation, you can break ground loops, increase the common-mode range of the data acquisition system, and level shift the signal ground reference to a single system ground. 
Safety isolation references standards that have specific requirements for isolating humans from contact with hazardous voltages. It also characterizes the ability of an electrical system to prevent high-voltage and transient voltages to be transmitted across its boundary to other electrical systems with which the user may come in contact. Incorporating isolation into a data acquisition system has three primary functions: preventing ground loops, rejecting common-mode voltage, and providing safety. Learn more about  high-voltage measurements and isolation . Ground Loops Ground loops are the most common source of noise in data acquisition applications. They occur when two connected terminals in a circuit are at different ground potentials, causing current to flow between the two points. The local ground of your system can be several volts above or below the ground of the nearest building, and nearby lightning strikes can cause the difference to rise to several hundreds or thousands of volts. This additional voltage itself can cause significant error in the measurement, but the current that causes it can couple voltages in nearby wires as well. These errors can appear as transients or periodic signals. For example, if a ground loop is formed with 60 Hz AC power lines, the unwanted AC signal appears as a periodic voltage error in the measurement. When a ground loop exists, the measured voltage, ΔVm, is the sum of the signal voltage, Vs, and the potential difference,  ΔVg, which exists between the signal source ground and the measurement system ground, as shown in Figure 6. This potential is generally not a DC level; thus, the result is a noisy measurement system often showing the 60 Hz power-line frequency components in the readings. To avoid ground loops, ensure that there is only one ground reference in the measurement system, or use isolated measurement hardware. Using isolated hardware eliminates the path between the ground of the signal source and the measurement device, thus preventing any current from flowing between multiple ground points. Common-Mode Voltage An ideal differential measurement system responds only to the potential difference between its two terminals, the (+) and (-) inputs. The differential voltage across the circuit pair is the desired signal, yet an unwanted signal may exist that is common to both sides of a differential circuit pair. This voltage is known as common-mode voltage . An ideal differential measurement system completely rejects, rather than measures, the common-mode voltage. Practical devices, however, have several limitations, described by parameters such as common-mode voltage range and common-mode rejection ratio (CMRR), which limit this ability to reject the common-mode voltage. The common-mode voltage range is defined as the maximum allowable voltage swing on each input with respect to the measurement system ground. Violating this constraint results not only in measurement error but also in possible damage to components on the device. Common-mode rejection ratio describes the ability of a measurement system to reject common-mode voltages. Amplifiers with higher common-mode rejection ratios are more effective at rejecting common-mode voltages. In a non-isolated differential measurement system, an electrical path still exists in the circuit between input and output. Therefore, the electrical characteristics of the amplifier limit the common-mode signal level that you can apply to the input. 
    With the use of isolation amplifiers, the conductive electrical path is eliminated, and the common-mode rejection ratio is dramatically increased. Isolation Topologies It is important to understand the isolation topology of a device when configuring a measurement system. Different topologies have several associated cost and speed considerations. Two common topologies are channel-to-channel and bank. Channel-to-Channel The most robust isolation topology is channel-to-channel isolation. In this topology, each channel is individually isolated from one another and from other non-isolated system components. In addition, each channel has its own isolated power supply. In terms of speed, there are several architectures from which to choose. Using an isolation amplifier with an analog-to-digital converter (ADC) per channel is typically faster because you can access all the channels in parallel. A more cost-effective yet slower architecture involves multiplexing each isolated input channel into a single ADC. Another method of providing channel-to-channel isolation is to use a common isolated power supply for all the channels. In this case, the common-mode range of the amplifiers is limited to the supply rails of that power supply, unless you use front-end attenuators. Bank Another isolation topology involves banking, or grouping, several channels together to share a single isolation amplifier. In this topology, the common-mode voltage difference between channels is limited, but the common-mode voltage between the bank of channels and the non-isolated part of the measurement system can be large. Individual channels are not isolated, but banks of channels are isolated from other banks and from ground. This topology is a lower-cost isolation solution because this design shares a single isolation amplifier and power supply. Measure Voltage with NI Hardware The acquisition hardware’s quality determines the quality of the voltage data you collect. NI offers a range of hardware products that can accurately measure voltage over a wide range of values and generate voltage signals for control and communication applications. NI voltage products have options that are optimized for industrial or hazardous locations and can have built-in isolation and overcurrent protection for high-voltage applications.
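    To put numbers behind the ground-loop and common-mode discussion above, the sketch below estimates how much error a common-mode voltage contributes for a given CMRR, and how a ground potential difference adds directly to a ground-referenced reading (the measured voltage is the signal plus the ground offset, as described earlier). The CMRR, signal, and ground-offset values are illustrative assumptions, not specifications for any particular device.

```python
import math  # not strictly required; shown to emphasize the dB relationship below

# Error contributed by common-mode voltage in a differential measurement:
# an amplifier with CMRR (in dB) attenuates Vcm by a factor of 10 ** (CMRR / 20).
cmrr_db = 100.0          # assumed amplifier CMRR
v_common_mode = 5.0      # volts of common-mode voltage present on both inputs
v_signal = 0.010         # 10 mV signal of interest

cm_error = v_common_mode / (10 ** (cmrr_db / 20.0))
print(f"Common-mode error: {cm_error * 1e6:.1f} uV "
      f"({cm_error / v_signal:.2%} of a 10 mV signal)")

# Error from a ground loop in a ground-referenced (single-ended) measurement:
# the measured value is the signal plus the ground potential difference.
delta_vg = 0.050         # assumed 50 mV between source ground and measurement ground
v_measured = v_signal + delta_vg
print(f"Ground-referenced reading: {v_measured * 1e3:.1f} mV "
      f"(true signal is {v_signal * 1e3:.1f} mV)")
```

    Even with a respectable 100 dB CMRR, a 5 V common-mode voltage leaves tens of microvolts of error, while an uncorrected 50 mV ground offset dwarfs a 10 mV signal entirely; this is why differential connections and isolation matter for low-level measurements.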

  • Technology Platform Selection Guide for High-Complexity Products

    Technology Platform Selection Guide for High-Complexity Products Even the most experienced hardware engineers have moments of doubt when staring at a project schedule and countless datasheets, wondering, "How do I know I'm making the right choice?" While selecting technology based solely on specifications seems systematic, how can you truly ensure the platform you build your solution on will deliver long-term success for your finished product and prove its value to stakeholders? The early decisions you make on underlying technology platforms and system architecture can determine whether you achieve your objectives. Certain goals may be clear from the beginning, such as functional performance metrics and launch schedule, while others, such as user-requested features and the long-term technical maintenance burden, may be unknown at project kickoff but no less impactful. So why not approach these critical decisions with a proven framework that transforms uncertainty into confidence and mitigates risk, even for some of the unknowns? The challenge isn't just technical; it's strategic. Engineering teams today face an overwhelming array of processing architectures, form factors, and software stack design decisions, all while navigating the core trade-offs between system performance, budget, and development speed. Without the right criteria for choosing a technology stack to build their solution on, many fail to attain their market objectives. For example, many face the common pitfall of prematurely optimizing unit costs, which can significantly delay launch schedule, market uptake, and time-to-profit. Nothing is more expensive than failing to get to market at all. Below are a few examples of applications with high-complexity requirements where platform selection is non-trivial: Infrastructure monitoring systems (extreme environments, long lifecycle, remote deployment, total cost of ownership) High-speed automation processes (microsecond-level determinism, real-time performance, industrial networks) Healthcare edge devices (compliance requirements, security architecture) Equipment protection systems (fail-safe operation, environmental hardening) Industrial IoT AI inference systems (edge processing, model lifecycle management) This comprehensive white paper series aims to provide engineering teams with a structured methodology for evaluating product development platforms across a wide range of application spaces. We'll guide you beyond surface-level specifications to the considerations and factors that determine success from initial research through long-term product sustainment. Our selection framework for high-complexity, medium-volume product deployments addresses the critical evaluation dimensions that separate successful deployments from costly mistakes: Signal Integration & I/O Mix Processing & Compute Software Toolchain Deployment Environment Cost Models Security Architecture AI Integration Signal Integration & I/O Mix Why it’s important: Limited I/O options can become expensive problems quickly. Products today are differentiated through their ability to integrate diverse signals, sensors, actuators, and protocols seamlessly. A platform lacking native I/O diversity can force costly workarounds onto your team: additional hardware, increased complexity, and unforeseen sustainment costs can compound over a product’s lifecycle.
Comprehensive I/O integration is more than convenient; it can prevent development bottlenecks and keep overall system costs down. Lessons Learned: Comprehensive, native I/O integration is key to designing your system for sustainability. The flexibility to adapt to ongoing product feedback and unknown future product requirements can help shorten project timelines and mitigate long-term sustainment costs. We recommend evaluating I/O capabilities using the following criteria: High-quality, calibrated measurement and stimulus: The accuracy and precision of high quality, calibrated I/O ensure that the analog and digital interfaces in your system are meeting your application requirements reliably. Measurement uncertainty can compound through an entire system; uncalibrated I/O can wreak havoc upon the most sophisticated algorithms and control strategies. Expansion and scalability: A system’s ability to accommodate additional I/O channels and signal types, without requiring architectural changes or separate hardware platforms, helps mitigate sustainment risks and facilitates releases throughout CI/CD processes. System requirements nearly always expand over time through customer requests and continuous improvement efforts, making flexible, modular I/O necessary for maintaining development momentum and avoiding costly redesigns. Future-proofed protocol support: A system’s native ability to interface with industrial networks, and emerging IoT standards without requiring external gateways or protocol converters can facilitate seamless integration with existing infrastructure and help futureproof a system against evolving communication standards and requirements. Figure 1. Comprehensive system I/O coverage builds in flexibility to adapt to future product requirements. Implementation Considerations & Guidance Audit your complete signal ecosystem upfront: catalog every sensor, actuator, and communication protocol you need today and anticipate for future releases to avoid architectural surprises Prioritize factory calibrated, modular I/O platforms: measurement errors compound through your system, and requirements always expand; choose platforms that maintain accuracy while adding channels without redesigns Select platforms with native protocol support : map your industrial networks and IoT requirements early; native communication protocol support eliminates costly gateways and future integration headaches Processing & Compute Why it's important: High-performance embedded systems demand precise timing you can rely on. Advanced control algorithms, safety-critical control loops and industry-specific compliance can require real time performance. Desktop operating systems and general purposes microcontroller units (MCUs) cannot guarantee critical response times for protection systems and high-speed automation. Lessons Learned: Critical, time sensitive processes necessitate a solution capable of reliable, sub-millisecond response times to avoid safety issues, damage to assets, liability and incomplete or inaccurate data sets. We recommend evaluating real-time computation capabilities using the following criteria: Compute architecture: Safety-critical and equipment monitoring applications require an underlying system design that enables deterministic, predictable execution of time-critical tasks without interference from non-critical processes. 
Without a hardware and software solution that can ensure minimal timing jitter in the system, it's possible that equipment damage or other hazardous conditions could arise.

Performance & loop rates: A system's ability to execute control algorithms and data processing tasks at required frequencies is critical to system integrity. If loop rates fall below the tolerances of the controlled system, control system stability and performance can rapidly degrade.

Memory management: The response time of a real-time system is dependent on the rate at which critical tasks can access the data they need. Memory access latencies and cache misses can introduce timing jitter that would violate the system's real-time constraints and compromise overall system safety.

Figure 2. Systems built on a platform with native CPU and FPGA integration can reliably enable deterministic data acquisition and processing alongside the execution of non-critical tasks.

Implementation Considerations & Guidance

Define your critical timing requirements precisely: identify which control loops, safety functions, and monitoring tasks require microsecond determinism versus those that can tolerate standard OS scheduling

Choose a compute architecture based on timing criticality: refer to Table 1 for a comparison of GPUs, microcontrollers, CPUs, and FPGAs

Choose a dedicated real-time compute architecture: safety-critical applications need hardware-software solutions that guarantee deterministic execution without interference from non-critical processes

Validate performance under worst-case conditions: test your required loop rates and response times with full system loading, not just isolated benchmark conditions (a minimal jitter-measurement sketch appears below)

Design memory architectures for predictable access: minimize cache misses and memory latencies for time-critical tasks; deterministic memory access is essential for maintaining real-time constraints

Table 1. Processing & Compute technology comparison.
Technology | Description | Common Applications | Deterministic Timing
GPUs | Parallel processing optimized for thousands of simultaneous calculations | Image processing, scientific simulations, high-performance computing | Low
Microcontrollers | Integrated single-chip computers with processors, memory, and I/O, designed for dedicated control tasks | Consumer electronics, sensor interfaces, battery-powered devices | Medium
CPUs | General-purpose processors for sequential task execution | Data acquisition systems, human-machine interfaces | Medium (with a real-time operating system)
FPGAs | Reconfigurable hardware devices with programmable logic gates enabling custom circuitry through software implementation | High-speed signal processing, high-fidelity hardware-in-the-loop test, custom I/O protocols | High

Software Toolchain

Why it's important: Software can be your team's greatest differentiator, or it can completely derail your development timelines. High-complexity systems require customization across the entire software stack, but fragmented toolchains and siloed development environments can result in debugging, integration, and iteration becoming costly ordeals. The software toolchain chosen can determine the balance between the flexibility to customize and the overhead to integrate. These tradeoffs must be considered during toolchain selection, as architectural decisions quickly propagate throughout development and could make it impossible to pivot mid-project.

Lessons Learned: Software can be your team's greatest differentiator, or it can completely derail your development timelines.
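Before moving into the toolchain criteria, here is the timing-jitter sketch referenced in the Processing & Compute guidance above. It is a minimal, assumption-laden illustration: a 1 kHz "control loop" scheduled from ordinary desktop Python, with the lateness of each iteration recorded. Run it on a loaded general-purpose OS and the worst-case numbers make the case for dedicated real-time or FPGA-based execution better than any datasheet.

    import statistics
    import time

    PERIOD_S = 0.001            # target 1 kHz loop period
    lateness = []

    next_tick = time.perf_counter()
    for _ in range(2000):
        next_tick += PERIOD_S
        while time.perf_counter() < next_tick:            # busy-wait until the scheduled tick
            pass
        lateness.append(time.perf_counter() - next_tick)  # how late this iteration actually ran

    print(f"mean lateness:  {statistics.mean(lateness) * 1e6:8.1f} us")
    print(f"worst case:     {max(lateness) * 1e6:8.1f} us")
    print(f"jitter (stdev): {statistics.pstdev(lateness) * 1e6:8.1f} us")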
We recommend evaluating software design and toolchain choice using the following criteria:

Development Speed and Flexibility: Toolchain selection and software stack architecture determine how flexible your product will be to adapt to evolving and future requirements. Familiarity with the software supports rapid implementation of IP, and the learning curve of an unfamiliar software stack can be mitigated through extensive documentation and an intuitive out-of-the-box user experience.

Open source vs. custom IP: Any software developer must balance the use of proven, open-source code, which accelerates overall development time, against the generation of new, proprietary intellectual property (IP) that will most greatly impact their product's differentiation.

Abstraction layers: Abstraction layers provide standardized interfaces that isolate application logic from underlying hardware and system dependencies, which is vital for maintaining code portability, enabling future hardware upgrades, and reducing the risk of vendor lock-in across long product lifecycles.

Build and deployment tools: These tools are the automated systems for compiling, testing, packaging, and distributing software across development, testing, and production environments. They are essential for maintaining code quality, mitigating deployment errors, and enabling rapid iteration cycles that can keep pace with customer feedback.

Figure 3. Select a cohesive software stack that abstracts away the low-level functionality so developers can focus on the value-adding features that differentiate a product.

Implementation Considerations & Guidance

Assess toolchain selection and software stack architecture early on: select development tools and a software stack that minimize integration challenges across engineering teams

Balance proven libraries with competitive differentiation: leverage native libraries and hardware abstraction layers to implement background processes and common functionality. Doing so frees your development resources to focus on implementing your unique IP. This approach accelerates your product's time-to-market while creating a more maintainable codebase that sustains your competitive advantage throughout the product lifecycle

Implement automated build and deployment pipelines early: establish CI/CD workflows from project start to maintain code quality, reduce deployment errors, and enable rapid iteration with customer feedback

Deployment Environment

Why it's important: The deployment environment of your application is a design constraint that should inform hardware selection from day one. Harsh conditions require component derating, conformal coating, specialized enclosures, thermal management, rigorous field testing, and hazardous area classifications. It's critical to consider the environmental realities of your application to avoid the need for costly redesigns that could compromise performance and delay your product's time to market.
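The thermal and monitoring concerns described above lend themselves to a simple illustration. Below is a minimal sketch of threshold-plus-hysteresis supervision of an onboard temperature readback; the limits, the simulated sensor, and the fan function are all illustrative stand-ins rather than any specific platform's API.

    import random
    import time

    TEMP_LIMIT_C = 70.0    # assumed derating threshold; set from component ratings
    HYSTERESIS_C = 5.0     # hysteresis band to avoid rapid fan cycling

    def read_board_temperature_c() -> float:
        # Stand-in for a real onboard temperature readback channel.
        return 62.0 + random.uniform(-2.0, 12.0)

    def set_fan(enabled: bool) -> None:
        # Stand-in for the digital output that drives a fan or other cooling stage.
        print("fan", "ON" if enabled else "OFF")

    fan_on = False
    for _ in range(20):    # bounded loop for the sketch; supervise continuously in practice
        temp = read_board_temperature_c()
        if not fan_on and temp >= TEMP_LIMIT_C:
            fan_on = True
            set_fan(True)
        elif fan_on and temp <= TEMP_LIMIT_C - HYSTERESIS_C:
            fan_on = False
            set_fan(False)
        time.sleep(0.1)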
Lessons Learned: The deployment environment of your application is a design constraint that should inform hardware selection from day one.

We recommend evaluating deployment environment capabilities using the following criteria:

Ambient conditions: A system must operate reliably across the full range of environmental stressors present in the environment it is deployed into. These environmental factors, such as temperature, humidity, vibration, and electromagnetic interference, are typically the leading causes of field failures. Systems that cannot withstand their deployment conditions will require frequent maintenance interventions, which will ultimately negate any operational benefits gained through the implementation of the system.

Thermal management: The ability of a system to dissipate heat generated by its processing and I/O components will determine its ability to maintain safe operating temperatures across varying ambient conditions and computational loads. Thermal stress accelerates component aging and can cause intermittent failures that are costly to diagnose and repair when deployed in remote locations. Operating ranges and environmental derating are important to mitigate system stress and safety issues, prevent premature failure, and extend asset lifespan.

Physical and networked connectivity: All connection points, including I/O terminals, communication ports, and network interfaces, must operate reliably despite environmental factors. The robustness of these connections is vital to system operation, and they are a very common failure point for systems deployed in the field.

Hazardous area classifications: Systems deployed into environments with explosive atmospheres or flammable materials require hazardous area certifications (e.g., ATEX, IECEx, NEC Class/Division ratings). This regulatory requirement fundamentally impacts hardware selection, enclosure design, and system architecture. Obtaining certifications for custom hardware can add substantial time and cost to product development projects, making platforms with existing approvals valuable for accelerating market entry.

Figure 4. The deployment environment of a system greatly influences rating and certification requirements; it is crucial to consider ambient conditions early in a technology selection process.

Implementation Considerations & Guidance

Characterize your full environmental envelope: measure actual temperature ranges, vibration levels, EMI sources, and contamination in your deployment location, as laboratory specs rarely match field conditions. One common way to manage this is to build sensors into products that can self-monitor in the field; for example, a temperature readback sensor signals the device to activate a fan for cooling once the operating temperature threshold has been surpassed (as in the sketch above).
Design thermal management for worst-case scenarios: ensure your system can dissipate heat at maximum computational load combined with the highest ambient temperatures; thermal stress is a primary cause of field failures

Harden all connection points from day one: specify industrial-grade I/O terminals, sealed communication ports, and robust network interfaces; connection failures are among the most common and costly field issues

Plan for maintenance accessibility: consider how environmental factors affect your ability to service, diagnose, and replace components during the system's operational lifetime

Cost Models

Why it's important: In low-to-medium volume production, optimizing unit costs alone and ignoring development speed, flexibility, and time-to-market can result in delayed project timelines or, worse, missed market windows. Custom hardware design typically results in extensive bring-up phases and a costly ongoing support burden, erasing any marginal unit cost savings and limiting the engineering bandwidth available for high-ROI tasks.

Lessons Learned: Stop optimizing the wrong number. Custom designs trade marginal savings for launch delays and a perpetual support burden that can erode profitability and competitive advantage. To ensure long-term profitability, total cost of ownership is a much more important consideration than the unit cost of a BOM. Refer to Figure 5 for a visual depiction of the costs and timelines associated with a typical COTS ("Buy") vs. custom ("Build") development cycle.

We recommend evaluating cost models using the following criteria:

Off-the-shelf vs. custom: Deciding whether to develop custom hardware and software solutions internally, down to circuit board and low-level software design, or to leverage existing commercial platforms with proven capabilities can impact development costs dramatically. Custom development often appears cost-effective at small scales but introduces the risks of extended development timelines and untenable maintenance and sustainment burdens.

Hardware unit cost: When a product team focuses solely on the per-device expense for processing, I/O, and connectivity components at projected production volumes, they inherently introduce risk into overall project costs and profitability. Unit costs directly impact product margins and competitiveness in the market, but they must be evaluated alongside development and integration costs to understand the true cost of getting a product to market and sustaining it long-term.

Non-recurring engineering (NRE) and development costs: All the upfront investments in hardware design, software development, testing, and certification required to bring a product to market must be amortized across the total production volume. Underestimating development complexity can turn the most seemingly profitable projects into massive financial headaches.

Figure 5. Long-term profitability is maximized when the total cost of ownership of a technology platform is thoroughly assessed, from evaluation through sustainment.

Implementation Considerations & Guidance

Calculate true development ROI across your volume projections: custom solutions may seem cheaper per unit, but factor in extended development timelines, testing costs, and ongoing maintenance burdens against proven commercial platforms

Model total cost of ownership (TCO): explore costs beyond hardware unit cost.
TCO encompasses all expenses throughout a product's lifecycle, including non-recurring engineering (NRE) costs, field service expenses, software updates, certification requirements, and end-of-life management. Project the lifetime expenses of a technology stack from initial development through sustainment over the product's operational life to determine the total cost of ownership.

Establish realistic volume assumptions early: accurately project your deployment scale to properly amortize development investments; overestimating volumes can make custom development appear falsely attractive

Plan for hidden integration and sustainment costs: budget for ongoing technical support, security updates, hardware obsolescence management, and field service requirements that often exceed initial hardware expenses

Security Architecture

Why it's important: Every connected edge device is a potential doorway into critical systems. As edge computing spreads across industrial and infrastructural environments, these devices are increasingly becoming prime targets for cybercriminals hoping to exploit vulnerable entry points to operational networks, sensitive data, and control systems. Without robust security architecture built into your platform from the start, functionality could be deployed alongside vulnerabilities and attack vectors at scale.

Lessons Learned: Security architecture can be difficult, if not impossible, to retrofit once an incompatible technology platform has been selected. Preventing the deployment of vulnerable edge devices requires proven security features at every level of the technology stack. It's the foundation that determines whether your edge deployment becomes a liability or a strategic asset.

We recommend evaluating security and compliance capabilities using the following criteria:

System security: The hardware and software security features that protect the device integrity, data confidentiality, and operational availability of your product against external threats. Edge devices are often the most vulnerable to remote attacks and physical tampering due to their deployment in isolated locations that can be difficult to surveil. A compromised edge device can expose sensitive data or compromise the network infrastructure it is a part of.

Security package integration: The ability of a platform to natively incorporate industry-standard security frameworks, encryption libraries, and authentication protocols directly impacts development efforts and timelines. Security implementation is complex and prone to errors. Organizations without deep cybersecurity implementation expertise need platforms with proven security capabilities out of the box to protect their assets and ensure network security.

Compliance: The platform's ability to meet regulatory requirements and industry standards for cybersecurity, data protection, and operational security across markets and applications is vital to widespread product adoption and customer confidence. Non-compliance can result in regulatory fines, customer rejection, and liability exposure.
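As one concrete example of leaning on proven security packages rather than custom cryptography, the sketch below sets up a mutually authenticated TLS connection from an edge device using Python's standard ssl module. The hostname, port, and certificate file names are placeholders for whatever your provisioning process supplies; this is a minimal illustration, not a complete device security architecture.

    import socket
    import ssl

    # Placeholders: your provisioning process supplies the CA bundle and the device identity.
    GATEWAY = "gateway.example.com"
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca_bundle.pem")
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.load_cert_chain(certfile="device_cert.pem", keyfile="device_key.pem")  # client (device) certificate

    with socket.create_connection((GATEWAY, 8883), timeout=10) as raw_sock:
        # The server certificate and hostname are verified; the device presents its own certificate in turn.
        with context.wrap_socket(raw_sock, server_hostname=GATEWAY) as tls_sock:
            print("negotiated:", tls_sock.version(), tls_sock.cipher())

Certificate rotation, secure key storage, and revocation handling are where most of the real lifecycle effort sits, which is why the guidance below recommends planning security lifecycle management up front.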
Implementation Considerations & Guidance

Assess your full attack surface from device to cloud: map all connection points, data flows, and access vectors; edge devices in remote locations are particularly vulnerable to both cyber-attacks and physical tampering

Choose platforms with proven security frameworks built in: leverage native encryption, authentication, and security protocols rather than developing custom solutions; security implementation is complex and error-prone

Identify compliance requirements early in design: determine which regulatory standards (NIST, IEC 62443, etc.) apply to your markets and ensure your platform can meet these requirements without extensive customization

Plan for security lifecycle management: establish processes for security updates, certificate management, and vulnerability response across your deployed device fleet's operational lifetime

Aerospace & Defense
System Security: Hardware security modules, anti-tamper mechanisms, secure processors; software encryption and secure key management for classified data
Security Package Integration: Encryption algorithms, certified security packages, authentication protocols; STIG security configuration integration
Compliance: NIST 800-171 and DFARS cybersecurity requirements; ITAR compliance for export control; DO-178C and DO-326A for airborne systems; DB client access requirements

Medical Device & Biotechnology
System Security: Hardware-based encryption and secure boot options; tamper detection for device integrity; user access requirements
Security Package Integration: FIPS 140-2 encryption libraries; MFA, PKI, and certificate management support; decentralized and edge authentication
Compliance: FDA 21 CFR Part 820 and ISO 13485; ISO 14971 and HIPAA compliance; IEC 62304 for medical device software; EU MDR certification

Oil & Gas Field Deployments
System Security: Secure edge device connection to satellite/cellular communications; physical tampering protection for unmanned facilities
Security Package Integration: Industrial protocol integration (OPC UA); encrypted SCADA communications; remote monitoring framework support; industrial VPN and secure tunneling protocols
Compliance: NERC CIP for critical infrastructure; API standards for the petroleum industry; regional environmental and safety regulations; export control compliance for international deployments

Manufacturing
System Security: Network segmentation capabilities; secure OT/IT communication on internal networks; production system security isolation
Security Package Integration: Industrial Ethernet security integration; IEC 62443 security framework support; manufacturing execution system (MES) authentication; native OPC UA security implementation
Compliance: IEC 62443 industrial cybersecurity standards; ISO 27001 information security management; sector-specific requirements (automotive ISO 26262; pharmaceutical 21 CFR Part 11 when applicable)

Table 2. Security considerations are industry-specific and highly dependent on the type of application being deployed. An exhaustive assessment of attack vectors and security requirements is essential to mitigate system vulnerabilities. Note: security is always a growing and evolving consideration.

AI Integration

Why it's important: Reliance on the cloud for critical systems introduces a failure mode into any application; mission-critical functionality cannot depend on the cloud for processing and decision-making. AI inference at the edge enables real-time applications in settings where connectivity is intermittent or impossible. Local AI processing with millisecond-level responsiveness is essential for data breakthroughs at the edge.
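To illustrate the latency argument, here is a minimal, self-contained sketch of local inference timing. The "model" is a stand-in (a tiny fixed-weight scorer); in a real product it would be a trained model loaded into an on-device inference runtime, but the point, that the decision never leaves the device, is the same.

    import time
    import numpy as np

    # Stand-in "model": a tiny fixed-weight scorer. In a real product this would be a trained
    # model loaded into an on-device inference runtime; the key property is that scoring is local.
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((256, 1)).astype(np.float32)

    def infer(window: np.ndarray) -> float:
        # Local inference only: no network round trip, so latency is bounded by on-device compute.
        return (1.0 / (1.0 + np.exp(-(window @ weights)))).item()

    window = rng.standard_normal((1, 256)).astype(np.float32)  # stand-in for a preprocessed sensor window
    t0 = time.perf_counter()
    score = infer(window)
    latency_ms = (time.perf_counter() - t0) * 1e3
    print(f"anomaly score {score:.3f} computed locally in {latency_ms:.3f} ms")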
Lessons Learned: Cloud dependency is not an option for mission-critical AI processing at the edge. AI inference with millisecond-level response times requires a local processing solution that can enable data breakthroughs as a standalone system.

We recommend evaluating AI and machine learning capabilities using the following criteria:

Inference at the edge: A platform's ability to execute trained AI models locally on an edge device enables your most dynamic IP to perform mission-critical tasks reliably. Cloud-dependent AI systems introduce latency and reliability risks into real-time control and safety applications, whereas local inference at the edge enables your system to respond immediately to changing conditions without sacrificing real-time performance.

Model training: The ability to update and refine AI models using local data is a foundational aspect of an edge device's performance. AI models must adapt to changing operational conditions, equipment variations, and evolving requirements that your team cannot anticipate during development. A suitable edge device must be capable of supporting on-device training or seamless integration with model training workflows.

Data flow & validation: Any edge device running AI inference must reliably and efficiently manage the movement, preprocessing, and quality assurance of the data sets used. AI model performance depends on data quality and consistency, so the platform must handle data validation, anomaly detection, and selective data transmission without overwhelming network resources or compromising sensitive information.

Figure 6. Millisecond-level response times enable two critical capabilities: real-time AI inference at the edge and efficient model refinement through local processing of large datasets.

Implementation Considerations & Guidance

Validate inference performance under real operational conditions: test your AI models on actual edge hardware with realistic data loads, environmental conditions, and concurrent system tasks to ensure reliable real-time performance

Design for model lifecycle management: establish workflows for updating, retraining, and validating AI models using field data while maintaining system safety and performance during updates

Implement robust data preprocessing and validation pipelines: ensure your edge platform can handle data quality assurance, anomaly detection, and selective transmission without overwhelming network resources or exposing sensitive information

Plan compute resources for AI workload scaling: size your processing capabilities for peak inference demands while considering future model complexity growth and additional AI applications over the product lifecycle

Discuss your High-Complexity Product Needs with an Expert

  • NI Test Forum: Seattle

Events | NI Test Forum: Seattle | July 9, 2025 | Seattle
Join us at the NI Test Forum as we explore the future of test and measurement in an increasingly complex engineering landscape. As demands for faster, smarter, and more flexible testing grow, this forum offers a deep dive into NI's latest hardware and software innovations designed to streamline validation workflows, reduce development time, and boost data reliability across a range of industries. Throughout the day, you'll have the chance to connect with industry experts, get hands-on with NI platforms like PXI, CompactDAQ, and mioDAQ, and see real-world demos of cutting-edge test systems in action. Topics include test automation, real-time data acquisition, and scalable solutions for Aerospace & Defense, Energy, and Semiconductor & Electronics applications, covering everything from RF testing for radar and SatCom to high-throughput semiconductor validation. Cyth Systems will be there! Ask us how our team helps accelerate automated test projects using NI tools, and we'd love to chat about how we can support your engineering goals.
Register here: https://events.ni.com/profile/web/index.cfm?PKwebID=0x149094abcd&source=cyth

  • NI Connect 2025

Events | NI Connect 2025 | April 28, 2025 | Fort Worth, TX
NI Connect 2025 was a technical conference held in Fort Worth, Texas. It was hosted by National Instruments (NI), which is now part of Emerson. Here's what NI Connect 2025 was all about:
Focus: The conference focused on advancements in test and measurement systems, showcasing integrated hardware and software solutions used in various industries.
Key Themes: Attendees explored the latest developments in areas like data acquisition (DAQ), AI-enhanced LabVIEW, and software-defined RF platforms.
Target Audience: It gathered engineers, researchers, and business leaders interested in test and measurement technology.
Event Features: The event featured:
Technical sessions: Over 90 sessions covered topics such as Python, LabVIEW, AI, and machine learning integrations.
Keynotes: Industry leaders shared insights and real-world success stories.
Networking opportunities: Attendees connected with peers, industry experts, and thought leaders.
Tech demos: The latest NI and test technologies were showcased.
Product announcements: New product updates for NI DAQ, LabVIEW, and PXI were announced.
Location and Date: The event took place at the Fort Worth Convention Center in Fort Worth, TX, from April 28th to 30th, 2025.

  • Senior LabVIEW Engineer  Cyth Systems, Inc. San Diego CA

Senior LabVIEW Engineer | November 1st, 2025 | Jobs | Cyth Systems, Inc.
Senior LabVIEW Engineer
9939 Via Pasar, San Diego, CA, USA
Job Summary
Reports to: Director of Engineering
Exemption Status: Full-time Exempt
Location: Onsite
Job Description
We are looking for an experienced and motivated Senior LabVIEW Engineer to join our dynamic engineering team, responsible for tackling a variety of projects. As we take on more customers and projects than ever before, we are hiring at every level and need an experienced leader to captain the team. Your main responsibility will be designing and developing custom systems for various applications within multiple industries. You will play a critical role in developing LabVIEW code and configuring hardware to bring Automated Test and Embedded Control systems to completion. You will also be responsible for LabVIEW code review, debugging, and overall system support.
About Cyth
Cyth Systems is an expanding engineering company that works alongside the most impressive and exciting companies in California, the United States, and Europe. Cyth Systems services companies in multiple industries including life sciences, biotech, automotive, energy, semiconductor, technology, and manufacturing. The dynamic nature of our customer base provides our team with an exciting and fulfilling career full of opportunity to develop as an engineering professional. We are a purpose-led and performance-driven team, and we look forward to your fresh perspective and contribution as we support innovation.
Qualifications
Bachelor's degree in a STEM-related field or equivalent experience
3+ years of Test System or Embedded System development
LabVIEW certification (CLD, CLED, CLA) or equivalent experience with LabVIEW
Responsibilities
Oversee project development as the Project Manager.
Take the lead on challenging code development and debugging.
Architect and design projects from scratch in LabVIEW.
Interface with all forms of NI hardware and third-party hardware using multiple technologies.
Independently solve challenges such as interfacing with new equipment, computations or algorithms, and demanding customer applications.
Develop code tools designed for simplicity and reuse.
Serve as a mentor to other engineers by providing training, debug assistance, and demonstrating best practices of engineering, design, and industry.
Visit customer sites and host on-site customer visits to build relationships and gain a deeper understanding of project requirements.
Other responsibilities
Valid driver's license and reliable transportation
Ability to work in-office full-time in San Diego, CA
Must be able to lift 20-50 pound objects 5-10 times per day
Prolonged periods of sitting at a desk and working on a computer
Job Type: Full-time
Salary: $85,000.00 - $140,000.00 per year
Schedule: 8 hour shift
Work Location: In person
Submit your resume today
Name Phone Email Upload Resume Upload supported file (Max 15MB) Submit

  • ON Semiconductor Speeds Up Test Times with NI Hardware Integration | Cyth Systems

Project Case Study
ON Semiconductor Speeds Up Test Times with NI Hardware Integration
Aug 16, 2023
Home > Case Studies >
*As Featured on NI.com
Original Authors: Gerd Van den Branden, ON Semiconductor Belgium
Edited by Cyth Systems
ATE production tester
The Challenge
Developing a high-end, scalable, and cost-effective mixed-signal automated test equipment (ATE) production tester that enables wafer sort and final test for ON Semiconductor's new generation image sensors.
The Solution
Customizing the STS T4 system test head concept to meet our requirements for image sensor test of our full product portfolio. This solution is internally called the Pretty Image Sensor Tester from ON Semiconductor (PISTON tester).
ON Semiconductor is a worldwide supplier of silicon solutions for energy-efficient electronics. At the Mechelen site, we develop high-end image sensors for a wide range of industries including cinematics, space, industrial, and medical. We develop our CMOS image sensors to provide high frame rates (in the kilohertz range) at megapixel resolutions. These sensors also feature large resolution and high-speed parallel and serial digital outputs. To meet our quality standards, we needed a high-performance platform capable of providing stimuli to the sensors and handling the intense data rates in combination with static and dynamic parametric test functionality and massive image processing computing power. Furthermore, we needed a test system that could grow with future product requirements.
Mechelen site
We concluded that existing testers on the market could not meet these requirements without significant modification. We wanted to get away from doing everything ourselves, so we looked at the test and measurement market and asked NI for a proposal. We were doing characterization tests with a custom test setup that used multiple custom-designed FPGA boards. This solution had major drawbacks in terms of data streaming and full frame rate capturing, increasing our test times significantly. The PXI Express platform seemed like a promising fit to solve some of these problems, but the system could not handle the intense data throughput that our chips could produce. On the other hand, the PXI Express platform was open and modular, so other vendors could tie into it or develop custom modules for it. This meant we could combine the best of multiple worlds and build a hybrid solution.
Off-the-Shelf Collaboration
NI engineers told us they were developing the Semiconductor Test System (STS) and showed us some of the designs for this production tester. We looked into the specific technical requirements and found out that even though the STS T4 system was not designed with our application in mind, NI worked with Sisu Devices to help us customize the system. The idea was to create a scalable tester that could test all our new products. This was a very exciting time where each partner stepped up to meet our criteria. We worked with NI system engineers and Sisu Devices to build a hybrid test solution using off-the-shelf components when possible and custom design when needed. For example, Sisu Devices assisted in the mechanical design and thermal housekeeping design for our PISTON tester. Multiple customizations were implemented to the STS T4 core to optimize the machine for image sensor test. NI system engineers assisted us with all our questions regarding custom PXI Express modules, system timing synchronization, and more.
Based on a combination of NI PXI and PXI Express modular instrumentation, an in-house developed PXI Express timing engine, frame grabber, and synchronization modules, we succeeded in outperforming other commercially available test systems in terms of cost, bandwidth, throughput, and configurability. The PISTON tester offers per-pin test capability at speeds up to 1.2 Gbps on all 680 device under test (DUT) I/O pins, configurable I/O standards over a -2 V to 6.5 V range, multiple 24-bit accurate analog data channels, control over test temperature ranges from -40 to 125 degrees C, 100 percent correlation to characterization results, and more.
Production Results
We released our PISTON tester in production, and so far we have seen great results. Not only can we test all of our product portfolio again without having to make compromises, but we have also seen test time reductions and enhanced performance benefits with our I/O count and image transfer bandwidth. See Figure 1 for details.
Conclusion
The PISTON tester has been on the production floor for over a year, and we have had excellent results with the system. Within approximately one year of the PISTON tester being deployed, it fully paid back our hardware and development costs. Based on the open NI PXI and PXI Express platforms with NI modular instrumentation, in combination with the in-house developed PISTON timing engine, PXI Express frame grabber, and synchronization modules, we were able to develop a production tester that outperforms commercially available test equipment in terms of cost, bandwidth, throughput, and product flexibility. Based on these positive results, we are confident we will have success with any future PISTON tester deployments.
Original Authors: Gerd Van den Branden, ON Semiconductor Belgium
Edited by Cyth Systems
Talk to an Expert Cyth Engineer to learn more

  • Arbitrary AWG for Next-Generation Semiconductor Manufacturing | Cyth Systems

Project Case Study
Arbitrary AWG for Next-Generation Semiconductor Manufacturing
Sep 10, 2025
Home > Case Studies >
Semiconductor equipment manufacturer achieved next-generation etching capabilities through advanced waveform generation and control built with NI PXI and LabVIEW FPGA.
Custom Advanced Arbitrary Waveform Generator (AWG)
Project Summary
Semiconductor equipment manufacturer achieved next-generation etching capabilities through advanced waveform generation and control built with NI PXI and LabVIEW FPGA.
System Features & Components
High-speed arbitrary waveform generation with complex waveform capabilities enabled the generation of waveforms with 1,000+ samples per millisecond (>1 MS/s)
Highly synchronized waveform control and oscilloscope measurements for coordination of digitized signal measurement
Streamlined vacuum chamber integration for semiconductor wafer processing applications
Outcomes
Next-generation semiconductor chip manufacturing enabled through high-precision waveform control
Widespread customer adoption in every major silicon wafer fabrication site in the world
Sustainable innovation enabled by robust and flexible system architecture built on NI PXI and LabVIEW FPGA
Technology at-a-glance
PXIe-1071 Chassis
NI PXI-5441 Arbitrary Waveform Generator
PXIe-5105 Oscilloscope
PXIe-8822 Embedded Controller
PXI-7852R FPGA Module
LabVIEW FPGA
Silicon Wafer Etching
Almost every single modern electronic device contains at least one semiconductor chip. Smartphones, TVs, washing machines, and cars depend on the precise and complex control of electrical signals that semiconductors provide. Silicon wafers are a foundational material from which many semiconductor technologies are made. Part of the semiconductor manufacturing process includes etching microscopic, 3D patterns onto these silicon wafers to form electronic devices like transistors, capacitors, and interconnects that are critical for the function of the manufactured microchip. The level of precision with which these electronic components are etched into the silicon directly impacts the performance of the microchip, making the uniformity and accuracy of these etched features critical at the nanometer scale.
Complex Waveform Requirements
A major semiconductor equipment manufacturer was facing significant limitations with their equipment's ability to support the manufacture of next-generation chips. Their semiconductor manufacturing tools were deployed into many silicon wafer fabrication sites, and their manufacturing customers were continuously coming up against the limitations of their systems, putting their market position and market share at risk. They needed to upgrade the simple signal generators in their current solution to waveform generators capable of delivering the complex waveforms necessary to precisely control the microscopic piezo coils central to their unique etching process. To maintain and expand their customer base, they required a solution capable of:
Precision Control: Their equipment needed to drive microscopic piezo coils at rates of 2,000-100,000 times per second, requiring advanced control capabilities with arbitrary waveform programming and thousands of sample points per period.
Complex Waveform Requirements: Simple waveforms like sine, square and triangular signals were not sufficient for controlling the complex etching operations their customers required; they needed the capability to customize every single point within the waveforms controlling their piezo coils. High-Speed Synchronization: The waveform generator required tight coupling with oscilloscope measurements to synchronize digitized signal acquisition Global Deployment: The solution needed to integrate seamlessly with semiconductor manufacturing equipment deployed worldwide Customized Advanced AWG The global semiconductor equipment manufacturer approached Cyth Systems for help improving their etching capabilities. Cyth’s expertise developing complex, highly-synchronized control systems and custom waveform generator solutions enabled them to rapidly iterate on the customer’s existing solution to provide high-performance waveform generation, measurement, and control. The development process of the Arbitrary Waveform Generator (AWG) included three iterations: 1st generation: Replicate simple, existing waveform generation capabilities on NI PXI platform 2nd generation: Synchronize AWG pulses with oscilloscope measurements to ensure accurate digital signal data acquisition 3rd generation: Further refine synchronization between waveform pulses and oscilloscope measurements to enable the driving of piezo coils in the upper frequency ranges (up to 100,000 times per second) Left: 2nd Generation Arbitrary Waveform Generator, Right: 1st Waveform Generator. The AWG system was built to perform sophisticated waveform analysis, generation, and control without operator intervention: Collected waveforms, measured their length and characteristic shape Determined the optimal number of points required to accurately describe each waveform Set appropriate sampling rates based on waveform complexity Calculated the precise number of points needed to achieve target sample rates The equipment manufacturer required 1,000+ samples per millisecond (>1MS/s) to accurately characterize the waveforms to be generated; the samples were then upscaled to 200MS/s to ensure smooth signal quality as the waveform is output. Learn about LabVIEW FPGA Programming The waveform generator was tightly coupled with an NI PXI oscilloscope in a closed-loop approach to enable real-time system optimization. The synchronization in the measurements of digitized signals ensured precise timing coordination between waveform output and measurement feedback. Graphs comparing the outputs of a simple waveform generator vs. an arbitrary waveform generator Leveraging the NI PXI platform with LabVIEW FPGA software, Cyth created a sophisticated Arbitrary Waveform Generator (AWG) capable of supporting next-generation semiconductor equipment with dramatically improved high-speed and high-sample rate waveform control. 
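The characterization-then-upscaling flow described above can be sketched with ordinary NumPy. The numbers here are illustrative assumptions chosen to sit inside the ranges quoted in this case study (a 2 kHz piezo drive characterized at 1 MS/s, then upscaled to the 200 MS/s output rate); the actual waveform shapes and rates were application-specific.

    import numpy as np

    period_s = 1.0 / 2_000             # one period of a 2 kHz piezo drive (assumed example)
    characterization_rate = 1_000_000  # >1 MS/s, per the requirement above
    output_rate = 200_000_000          # 200 MS/s output generation rate

    # Points needed to describe one period at each rate.
    n_char = int(round(period_s * characterization_rate))   # 500 characterization points
    n_out = int(round(period_s * output_rate))               # 100,000 output points

    t_char = np.linspace(0.0, period_s, n_char, endpoint=False)
    waveform = 0.6 * np.sin(2 * np.pi * t_char / period_s) + 0.2 * np.sin(6 * np.pi * t_char / period_s)

    # Upscale the characterized waveform onto the output sample grid (periodic interpolation).
    t_out = np.linspace(0.0, period_s, n_out, endpoint=False)
    upscaled = np.interp(t_out, t_char, waveform, period=period_s)

    print(f"{n_char} characterization points -> {n_out} output points per period")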
For the semiconductor equipment manufacturer, the greatest differentiators of the NI platform were:
Measurement integration: Synchronized waveform generation and oscilloscope measurement in a single platform
Firmware flexibility: LabVIEW-based algorithms enable rapid parameter adjustments and optimization
Hardware reliability: NI PXI platform provides industrial-grade reliability for 24/7 manufacturing operations
Compact footprint: 4-slot PXI chassis delivers advanced capabilities in space-efficient design
System
PXI Card | Specifications | Use
PXIe-1071 Chassis | 4-Slot Chassis | PXI Chassis
NI PXI-5441 | 43 MHz, 100 MS/s AWG, 16-Bit, Onboard Signal Processing | Arbitrary Waveform Generator
PXIe-5105 | 60 MHz, 8-Channel, 12-Bit PXI Oscilloscope | High Speed & High Sample Rate Waveform Measurement
PXIe-8822 | Embedded Controller, 2.4 GHz Quad-Core Processor | PXI Controller, Data Logging & Control
PXI-7852R | Virtex-5 LX50 FPGA, FPGA-Based I/O, 750 kS/s | Data Logging & Control
Sustainable Innovation
The Arbitrary Waveform Generator delivered transformative capabilities that positioned the equipment manufacturer for next-generation semiconductor manufacturing leadership. The most impactful system performance improvements were:
Advanced waveform control: Transition from simple signal generation to arbitrary waveform programming with thousands of sample points per period
Synchronized measurement: Tight integration between waveform generation and oscilloscope measurement for closed-loop optimization
Scalable sampling rates: Flexible sampling from >1 MS/s for analysis up to 200 MS/s for output generation
The overall system improvements enabled the customer to deliver:
Global deployment capability: System integrated successfully across every major silicon wafer fabrication site worldwide
Next-generation semiconductor manufacturing capabilities: Advanced waveform control capabilities support production of cutting-edge semiconductor devices
Future-ready platform: Modular PXI architecture enables flexibility for continued system evolution and capability expansion
The new, advanced capabilities fundamentally strengthened the semiconductor equipment manufacturer's position as a leader in their space. The transition from simple signal generation to sophisticated arbitrary waveform generation and control enabled their equipment to meet the high precision requirements of next-generation semiconductor chip manufacturing. These modernized etching systems became a critical enabler for the semiconductor industry's continued advancement toward smaller, faster, and more efficient devices. These systems were built for sustainable innovation. The proven software architecture and modular NI PXI I/O enable continuous capability enhancement as semiconductor manufacturing requirements continue to evolve.
Let's Talk

  • The Most Important Considerations of an Embedded System Design

In embedded applications like industrial controllers and robotics, precise timing and predictable execution are crucial for smooth operations.
The Most Important Considerations of an Embedded System Design
The NI RIO Platform solves each of these design challenges

  • Implementing Automated Testing in Your PCBA Manufacturing Process

Cyth Systems | Whitepapers | Automated Printed Circuit Board Testing | Implementing Automated Testing in Your PCBA Manufacturing Process

Implementing Automated Testing in Your PCBA Manufacturing Process

Assess Your Testing Needs
Start by assessing your specific testing needs and requirements. Consider the types of tests you need to perform, the performance criteria, and the desired level of quality control. This will help you determine the scope and complexity of your automated testing system.

Research & Select the Right Equipment
Research and select the right equipment and software for your testing process. Consider factors such as test accuracy, test speed, ease of use, scalability, and compatibility with your existing manufacturing process. Consult with experts in the field, attend trade shows and conferences, and request demonstrations or trials to evaluate different options.

Design Test Fixtures
Design test fixtures that securely hold the PCBs, provide proper alignment and contact, and allow for easy insertion and removal of the boards. Consider the specific requirements of your PCB designs and ensure that the test fixtures can accommodate different board sizes and configurations. Test the fixtures with sample PCBs to ensure proper functionality.

Develop Test Sequences & Parameters
Develop test sequences and parameters that accurately reflect your testing requirements. This includes defining the order of tests, the specific measurements and analysis to be performed, and the pass/fail criteria. Consider the complexity of your test procedures and ensure that the automated testing system can handle them accurately and consistently. Test the sequences and parameters with sample PCBs to ensure accurate and reliable results.

Deploy and Validate
Deploy the automated testing system into your production line and validate it before relying on its results. Run the system on known-good and known-bad sample boards, confirm that the results match your pass/fail criteria, and refine fixtures, sequences, and parameters as needed before full-scale rollout.
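If it helps to see the "sequences and parameters" idea in code, here is a minimal, hypothetical sketch of a test step with pass/fail limits and a simple sequence runner. The step names, limits, and lambda "measurements" are placeholders; in practice each step would call your instrument, switching, or DAQ drivers.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class TestStep:
        name: str
        measure: Callable[[], float]   # returns one measurement for the board under test
        low: float                     # pass/fail limits, in the measurement's units
        high: float

    def run_sequence(steps: list[TestStep]) -> bool:
        all_passed = True
        for step in steps:
            value = step.measure()
            passed = step.low <= value <= step.high
            all_passed = all_passed and passed
            print(f"{step.name:28s} {value:9.3f}  limits [{step.low}, {step.high}]  {'PASS' if passed else 'FAIL'}")
        return all_passed

    # Hypothetical steps with stand-in measurements.
    sequence = [
        TestStep("3V3 rail voltage (V)", lambda: 3.31, low=3.20, high=3.40),
        TestStep("Idle supply current (mA)", lambda: 42.0, low=0.0, high=60.0),
    ]
    print("Board PASS" if run_sequence(sequence) else "Board FAIL")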

  • Measuring Strain Key Fundamentals Guide | Cyth Systems

Cyth Systems | Whitepapers | Sensor Fundamentals | Measuring Strain Key Fundamentals Guide

Measuring Strain Key Fundamentals Guide | Cyth Systems

This guide covers basic strain concepts, how strain gages work, and how to select the right configuration type. After you decide on your sensor, consider the required hardware and software to condition, acquire, and visualize strain measurements properly. You can also consider any hardware packages required for data acquisition.

What is Strain
In mechanical testing, it is necessary to understand how an object reacts to various forces. The amount of deformation a material experiences due to an applied force is called strain. Strain by definition is the ratio of the change in length of a material to the original, unaffected length, as shown in Figure 1. Strain can be positive (tensile), due to elongation, or negative (compressive), due to contraction. When a material is compressed in one direction, it tends to expand in the two directions perpendicular to that force. Poisson's effect, calculated using Poisson's ratio (ν), is the measure of this effect and is defined as the negative ratio of strain in the transverse direction to the strain in the axial direction. Although strain is dimensionless, it is often expressed in units such as in./in. or mm/mm. In practice, the magnitude of measured strain is very small, so it is often expressed as microstrain (µε), which is ε × 10⁻⁶.

Figure 1. Strain is the ratio of the change in length of a material to the original, unaffected length.

The four different types of strain are axial, bending, shear, and torsional. Axial and bending strain are the most common (see Figure 2). Axial strain measures the stretch or compression of a material resulting from a linear force in the horizontal direction. Bending strain measures the stretch on one side of a material and the contraction on the opposite side due to a linear force applied in the vertical direction. Shear strain measures the amount of deformation that occurs from a linear force with components in both the horizontal and vertical directions. Torsional strain measures a circular force with components in both the vertical and horizontal directions.

Figure 2. Axial strain measures how a material stretches or pulls apart. Bending strain measures a stretch on one side and a contraction on the other side.

Measuring Strain
Strain can be measured using several methods; the most common is the strain gage. A strain gage's electrical resistance varies in proportion to the amount of strain in the device. The most widely used strain gage is the bonded metallic strain gage. The metallic strain gage consists of a very fine wire or metallic foil arranged in a grid pattern. The grid pattern maximizes the amount of metallic wire or foil subject to strain in the parallel direction. The grid is bonded to a thin backing called the carrier, which is attached directly to the test specimen. Therefore, the strain experienced by the test specimen is transferred directly to the strain gage, which responds with a linear change in electrical resistance.

Figure 3. The electrical resistance of the metallic grid changes in proportion to the amount of strain experienced by the test specimen.

A fundamental parameter of the strain gage is its sensitivity to strain, expressed quantitatively as the gage factor (GF).
GF is the ratio of the fractional change in electrical resistance to the fractional change in length, or strain:

GF = (ΔR/R) / (ΔL/L) = (ΔR/R) / ε

The GF for metallic strain gages is usually around 2. You can obtain the actual GF of a particular strain gage from the sensor vendor or sensor documentation. In practice, strain measurements rarely involve quantities larger than a few millistrain (ε × 10⁻³). Therefore, to measure strain, it's important to accurately measure very small changes in resistance. Suppose a test specimen undergoes a strain of 500 µε. A strain gage with a GF of 2 exhibits a change in electrical resistance of only 2 × (500 × 10⁻⁶) = 0.1%. For a 120 Ω gage, this is a change of only 0.12 Ω. To measure such small changes in resistance, strain gage configurations are based on the concept of a Wheatstone bridge. The general Wheatstone bridge is a network of four resistive arms with an excitation voltage, VEX, that is applied across the bridge.

Figure 4. Strain gages are configured in Wheatstone bridge circuits to detect small changes in resistance.

The Wheatstone bridge is the electrical equivalent of two parallel voltage divider circuits. R1 and R2 compose one voltage divider circuit, and R4 and R3 compose the second voltage divider circuit. The output of a Wheatstone bridge, VO, is measured between the middle nodes of the two voltage dividers:

VO = [R3/(R3 + R4) − R2/(R1 + R2)] × VEX

From this equation, you can see that when R1/R2 = R4/R3, the voltage output VO is zero. Under these conditions, the bridge is said to be balanced. Any change in resistance in any arm of the bridge results in a nonzero output voltage. Therefore, if you replace R4 in Figure 4 with an active strain gage, any changes in the strain gage resistance unbalance the bridge and produce a nonzero output voltage that is a function of strain.

Choosing the Right Strain Gage
Types of Strain Gauges
The three types of strain gage configurations, quarter-, half-, and full-bridge, are determined by the number of active elements in the Wheatstone bridge, the orientation of the strain gages, and the type of strain being measured.

Quarter-Bridge Strain Gages
Configuration Type I
Measures axial or bending strain
Requires a passive quarter-bridge completion resistor known as a dummy resistor
Requires half-bridge completion resistors to complete the Wheatstone bridge
R4 is an active strain gage measuring the tensile strain (+ε)

Figure 5. Quarter-Bridge Strain Gage Configurations.

Configuration Type II
In an ideal world, the resistance of the strain gage would change only in response to applied strain. However, both the strain gage material and the specimen material to which the gage is applied respond to changes in temperature. The quarter-bridge strain gage configuration type II helps minimize the effect of temperature by using two strain gages in the bridge. As shown in Figure 6, typically one strain gage (R4) is active and a second strain gage (R3) is mounted in close thermal contact, but not bonded to the specimen, and placed transverse to the principal axis of strain. Therefore, the strain has little effect on this dummy gage, but any temperature changes affect both gages in the same way. Because the temperature changes are identical in the two strain gages, the ratio of their resistance does not change, the output voltage (VO) does not change, and the effects of temperature are minimized.

Figure 6. Dummy strain gages eliminate the effects of temperature on the strain measurement.
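Continuing the worked example above in code, the sketch below computes the resistance change for 500 µε at GF = 2 on a 120 Ω gage, then estimates the bridge output using the standard small-strain quarter-bridge approximation |VO/VEX| ≈ GF·ε/4. That approximation is not stated in the guide itself, but it follows from the bridge equation above when only R4 is active and the strain is small.

    GAGE_FACTOR = 2.0        # typical for metallic foil gages
    R_NOMINAL = 120.0        # nominal gage resistance, ohms
    strain = 500e-6          # 500 microstrain, as in the example above

    delta_r_over_r = GAGE_FACTOR * strain            # fractional resistance change (0.1%)
    delta_r = delta_r_over_r * R_NOMINAL             # absolute change: 0.12 ohm

    # Small-strain quarter-bridge approximation (single active gage): |VO/VEX| ~= GF * strain / 4
    vo_per_vex = GAGE_FACTOR * strain / 4.0
    print(f"dR/R = {delta_r_over_r:.3%}, dR = {delta_r:.3f} ohm, |VO/VEX| ~= {vo_per_vex * 1e6:.0f} uV/V")

The resulting output is on the order of 250 µV per volt of excitation, which is why dedicated bridge signal conditioning with a stable excitation source matters for strain measurements.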
Half-Bridge Strain Gage
You can double the bridge's sensitivity to strain by making both strain gages active in a half-bridge configuration.

Configuration Type I
Measures axial or bending strain
Requires half-bridge completion resistors to complete the Wheatstone bridge
R4 is an active strain gage measuring the tensile strain (+ε)
R3 is an active strain gage compensating for Poisson's effect (−νε)
This configuration is commonly confused with the quarter-bridge type II configuration, but type I has an active R3 element that is bonded to the strain specimen.

Configuration Type II
Measures bending strain only
Requires half-bridge completion resistors to complete the Wheatstone bridge
R4 is an active strain gage measuring the tensile strain (+ε)
R3 is an active strain gage measuring the compressive strain (−ε)

Full-Bridge Strain Gage
A full-bridge strain gage configuration has four active strain gages and is available in three different types. Types I and II measure bending strain, and type III measures axial strain. Only types II and III compensate for the Poisson effect, but all three types minimize the effects of temperature.

Configuration Type I: Only Bending Strain
Highly sensitive to bending strain only
R1 and R3 are active strain gages measuring compressive strain (−ε)
R2 and R4 are active strain gages measuring tensile strain (+ε)

Configuration Type II: Only Bending Strain
Highly sensitive to bending strain only
R1 and R3 are active strain gages measuring compressive strain (−ε)
R2 and R4 are active strain gages measuring tensile strain (+ε)

Configuration Type III: Only Axial Strain
Measures axial strain
R1 and R3 are active strain gages measuring the compressive Poisson effect (−νε)
R2 and R4 are active strain gages measuring the tensile strain (+ε)

Specifications of Strain Gauges to Consider
Once you have decided on the type of strain you intend to measure (axial or bending), other considerations include sensitivity and operating conditions. For the same strain gage, changing the bridge configuration can improve its sensitivity to strain. For example, the full-bridge type I configuration is four times more sensitive than the quarter-bridge type I configuration. However, full-bridge type I requires three more strain gages than quarter-bridge type I. It also requires access to both sides of the gaged structure. Additionally, full-bridge strain gages are significantly more expensive than half-bridge and quarter-bridge gages. For a summary of the various types of strain gages, refer to the following table.

Grid Width
Using a wider grid, if not limited by the installation site, improves heat dissipation and enhances strain gage stability. However, if the test specimen has severe strain gradients perpendicular to the primary axis of strain, consider using a narrow grid to minimize error from the effect of shear strain and Poisson strain.

Nominal Gage Resistance
Nominal gage resistance is the resistance of a strain gage in an unstrained position. You can obtain the nominal gage resistance of a particular gage from the sensor vendor or sensor documentation. The most common nominal resistance values of commercial strain gages are 120 Ω, 350 Ω, and 1000 Ω. Consider a higher nominal resistance to reduce the amount of heat generated by the excitation voltage. Higher nominal resistance also helps reduce signal variations caused by lead-wire changes in resistance due to temperature fluctuations.

Temperature Compensation
Ideally, strain gage resistance should change in response to strain only.
However, a strain gage's resistivity and sensitivity also change with temperature, which leads to measurement errors. Strain gage manufacturers attempt to minimize sensitivity to temperature by processing the gage material to compensate for the thermal expansion of the specimen material for which the gage is intended. These temperature-compensated gages reduce, but do not eliminate, sensitivity to temperature. Also consider using a configuration type that helps compensate for the effects of temperature fluctuations.

Installation
Installing strain gages can take a significant amount of time and resources, and the amount varies greatly depending on the bridge configuration. The number of bonded gages, number of wires, and mounting location all can affect the level of effort required for installation. Certain bridge configurations even require gage installation on opposite sides of a structure, which can be difficult or even impossible. Quarter-bridge type I is the simplest because it requires only one gage installation and two or three wires.
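As a compact way to capture the selection logic above, the sketch below encodes, for each configuration covered in this guide, which strain types it measures and whether it compensates for the Poisson effect, then filters candidates for a given measurement. The table is only as detailed as the guide itself; sensitivity, cost, and installation effort still need to be weighed per the notes above.

    # Properties per bridge configuration, as described in this guide.
    CONFIGURATIONS = {
        "quarter-bridge type I":  {"measures": {"axial", "bending"}, "poisson_compensation": False},
        "quarter-bridge type II": {"measures": {"axial", "bending"}, "poisson_compensation": False},
        "half-bridge type I":     {"measures": {"axial", "bending"}, "poisson_compensation": True},
        "half-bridge type II":    {"measures": {"bending"},          "poisson_compensation": False},
        "full-bridge type I":     {"measures": {"bending"},          "poisson_compensation": False},
        "full-bridge type II":    {"measures": {"bending"},          "poisson_compensation": True},
        "full-bridge type III":   {"measures": {"axial"},            "poisson_compensation": True},
    }

    def candidates(strain_type: str, require_poisson_compensation: bool = False) -> list[str]:
        """Return configurations from the table above that suit the requested measurement."""
        return [name for name, props in CONFIGURATIONS.items()
                if strain_type in props["measures"]
                and (props["poisson_compensation"] or not require_poisson_compensation)]

    print(candidates("bending"))                                   # all bending-capable options
    print(candidates("axial", require_poisson_compensation=True))  # half-bridge type I, full-bridge type III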

  • Circaflex | Off-the-shelf Control Systems | Cyth Systems

    A family of off-the-shelf control systems that help engineers develop sophisticated devices & instruments without the risk & cost of custom-designed circuit boards. SOLUTIONS Home > Solutions > Circaflex What is CircaFlex? Circaflex is a family of off-the-shelf control systems that help engineers develop sophisticated devices and instruments without the risk and cost of custom-designed circuit boards. Using Circaflex, engineers and scientists can develop feature-packed embedded systems that go from prototype in just a few weeks to deployment-ready in just a few months, saving 50-75% of the effort, time, and risk of a custom board! Control PUMPS, MOTORS, PNEUMATICS, and MORE. BUILT-IN CONNECTIVITY to a variety of sensors. PREBUILT CONTROL software and HMI software. EASY to use & POWERFUL control. Why CircaFlex? Rapid Prototyping & Deployment Control Systems The Circaflex family includes the motherboard, mezzanine boards, and signal conditioning modules to make prototyping easy. Circaflex supports National Instruments Single-Board RIO (sbRIO) and System-on-Module (SOM) systems. Each Circaflex product is designed to support a variety of sensors and devices commonly used in industrial, medical, and biotech device development. Customize your testing needs. “Almost all quality improvement comes via simplification of design, manufacturing, layout, processes, and procedures.” -Tom Peters Ready for prototyping with an array of STANDARD INDUSTRIAL INPUTS & OUTPUTS The Circaflex series includes the Circaflex 300 Series (for the NI RIO SOM), the Circaflex 500 Series (for NI Single-Board RIO (sbRIO)), and Circaflex I/O Modules

  • FPGA Serial Interfaces for Standard and Custom Protocols

    Cyth Systems | Whitepapers | LabVIEW FPGA Design Patterns | FPGA Serial Interfaces for Standard and Custom Protocols FPGA Serial Interfaces for Standard and Custom Protocols Common Serial Communication Standards Protocol Description Common Use Cases RS-232 Recommended Standard 232 Serial communication between devices, often over a distance traversed by a wire or cable (not a trace). RS-485 Recommended Standard 485 Similar to RS-232 but implemented using balanced transmitters and differential receivers to reject common-mode interference, thereby enabling even longer transmission distances. I2C Inter-Integrated Circuit Short distances, often traces, between integrated circuits (ASICs, FPGAs, HMIs, advanced sensors) designed onto a PCB assembly. Cabled configurations are typical as well. SPI Serial Peripheral Interface Synchronized data transfer between multiple circuits (microcontrollers, memory devices, sensors) on a board. Configurations: 1 writer (master), 1-N readers (slaves). CAN Controller Area Network Common in industrial, automotive, and medical environments with a robust physical layer and differential signaling for better noise immunity. Built-in error checking. I2S Inter-IC Sound Used for transmitting digital audio signals between integrated circuits. Requires three or more wires, so hardware setup is more complex than I2C. Custom - Incorporate specialty triggering, timing, buffering, etc. There are many resources that further explore the tradeoffs between serial and parallel data communication as well as the nuances of specific serial protocols. The purpose of this post is to outline how serial protocols can be implemented in NI FPGA hardware and the LabVIEW FPGA design language. FPGA Benefits for Serial Communication While fundamental concepts of FPGA application development in LabVIEW are covered in this article, it’s worth noting some of the benefits as they relate to digital communication. Device integration: Many FPGA-based test, control, and monitoring systems require integration of tasks with peripheral devices, such as sensors, electrotechnical systems, ICs, etc. These devices provide 1-N interfaces used for device control and data communication. Oftentimes the FPGA acts as part of the central control system through which various devices are integrated into the end application, which is feasible even if different devices have different interfaces. Timing control through clock domains: FPGAs give the developer significant flexibility in controlling timing for communication, data processing, and other tasks. If the overall system timing diagram necessitates communication with different devices at different rates, this can easily be implemented in the FPGA code. Customization: Regardless of tasks, FPGAs enable significant customization at the hardware level. This could apply to triggering, inline data processing, and data storage, such that performance can be optimized for a given set of FPGA resources. Thus, it becomes critical to have an API for the digital communication protocols present in your application. While NI provides SPI and I2C drivers which can be used in LabVIEW Real-Time and FPGA applications, your application may have different requirements. Need application support for your FPGA project? We're here to help. Book a free consultation Application Example: SPI Communication Let’s look at how an SPI interface can be configured in LabVIEW FPGA.
The following code snippet implements a write/transmit action to a configured SPI port on a digital I/O line available on an NI CompactRIO, Single-Board RIO, or R Series card. SPI port configuration and data communication in LabVIEW FPGA The VI code snippet above executes the following steps: Initialize the port by creating a shared memory item on the FPGA. Configure the port, accounting for the number of bits to be transmitted, chip select (CS), and mode. Map the digital I/O channels to the SPI signals. In this example, digital lines 5-8 on the hardware are used for the following SPI signals: Read SPI data from the port. This is an iterative action based on the Single-Cycle Timed Loop configured to execute on a 40 MHz clock. Write SPI data to the port. As this subVI is also called in the Single-Cycle Timed Loop, it also executes at 40 MHz. The case structure enables data write control based on the previous read action. It implements some basic decision-making logic which can occur in the FPGA clock domain. Signal Acronym Purpose SCLK Serial Clock The clock signal defined by the leader (master) MISO Master In Slave Out (Leader In Follower Out) Serial data output from the follower (slave) MOSI Master Out Slave In (Leader Out Follower In) Serial data output from the leader (master) CS Chip Select Important when you have multiple followers interfacing with a single master. When the chip select pin on the follower is active, it will be “listening” for communication. When it is inactive, it will be “deaf.” This provides flexible control over the communication topology. This VI is implemented in such a way that the FPGA will execute without sharing data up to a host running on a Linux Real-Time controller or a Windows machine. Other communication paradigms can be used to pipe data up to a higher level for additional processing and visibility, though closed-loop control will be fastest if wholly implemented on the FPGA. Get LabVIEW FPGA SPI API and support Application Example: I2C State Machine The following code snippets show different states of an I2C interface as implemented in a state machine. State machines are common design patterns which provide the developer with flexibility for implementing functionality and using current conditions and logic to transition between states. In LabVIEW, state machines are typically implemented with a case structure embedded in a while loop, where state logic is passed between successive iterations of the loop using shift registers. For this particular example, the following states are defined in the state machine, each representing a different action (or idleness) of the I2C bus. I2C bus states implemented in a LabVIEW FPGA state machine In the VI block diagram below, the I/O port is defined on the FPGA, again referencing the 40 MHz onboard clock as the time base. FPGA port definition and I2C "Configure" state After the port is configured, the master can then arm and write data to the line. Given the state machine architecture in place, you could easily add some functionality for data processing on the FPGA or sharing to a host VI. I2C "Arm" state Get LabVIEW FPGA I2C API and support Application Example: Maximizing RS-232 and RS-485 Baud Rates Using LabVIEW FPGA gives access to high-speed transmission rates over 102.4 kbaud as well as the ability to rapidly analyze and deterministically act on communication. The baud rate is the number of symbols transmitted per unit time, often expressed as bits per second (bps).
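To make the bit-level framing concrete before looking at the LabVIEW implementation below, here is a minimal, hypothetical Python sketch (not part of the LabVIEW code discussed in this article) that frames one byte into the start, data, parity, and stop bits that occupy successive baud intervals on the line.

# Hypothetical illustration only: 8N1/8E1-style framing of one byte into the
# bit sequence a UART-style serial port shifts out, one bit per baud interval.

def frame_byte(value: int, data_bits: int = 8, parity: str = "even",
               stop_bits: int = 1) -> list[int]:
    """Return the line states (idle-high line assumed) for one framed character."""
    assert 0 <= value < (1 << data_bits)
    bits = [0]                               # start bit drives the line low
    data = [(value >> i) & 1 for i in range(data_bits)]   # LSB first
    bits += data
    if parity == "even":
        bits.append(sum(data) % 2)           # make the total number of ones even
    elif parity == "odd":
        bits.append(1 - (sum(data) % 2))
    bits += [1] * stop_bits                  # stop bit(s) return the line high
    return bits

if __name__ == "__main__":
    frame = frame_byte(0x55, parity="even")
    baud = 115200
    print(frame)   # [0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1]
    print(f"{len(frame)} bit times = {len(frame) / baud * 1e6:.1f} us per byte at {baud} baud")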
The image below shows the configuration tool for an RS-232 interface, accounting for baud rate and assignments for parity, data, and stop bits on a per-port basis. This provides the user with wide flexibility for the various serial-based peripherals they want to communicate with. RS-232 port configuration (baud rate, % error, start bit, data bits, stop bit) The VI snippets below show a simplified example of how RS-232 communication can be implemented on an FPGA target in LabVIEW. The top-level VI also provides a UI for defining data to be written and showing data which has been read. Commonly, this data would be further synchronized up to a host. RS-232 interface "Write" loop in top-level VI RS-232 interface "Read" loop in top-level VI The top-level FPGA VI calls the FPGA Read Write VI (lower level), which runs in the background. Top-level LabVIEW FPGA VI In the lower-level VI, which directly interfaces with the FPGA DIO lines configured as serial ports, there are two loops running in parallel: one handling reading from a FIFO, the other handling writing to a FIFO. The "Write" loop follows these steps: Configure the FPGA I/O items (e.g., DIO0) and set the baud rate Write the start bit (pre-defined) Send data bits via FPGA I/O item (known number of bits) Send parity bit Send stop bit Continue looping... The "Read" loop follows these steps: Wait for the start bit Read data from the FIFO data element (known number of data bits) Read the parity bit Read the stop bit Continue looping... In order for this lower-level VI to run effectively, the FIFO on the FPGA buffering the bit stream must be configured. LabVIEW FPGA provides a configuration tool for this FIFO: LabVIEW FPGA FIFO configuration tool Get APIs and support for developing serial interfaces Serializing and Deserializing Data The sections above show how different types of serial interfaces can be configured on an FPGA using LabVIEW. This section outlines the usefulness of an API that can perform serializing and deserializing actions regardless of which serial interface is being used. Serial communication protocols transfer data through bitstreams (0’s and 1’s). However, it is often not the case that the data to be transferred is already in a bitstream format, meaning the application must convert the numeric data to a bitstream. To make the code more modular and reusable across different protocols and devices, using a serializer/deserializer API can save significant time and troubleshooting effort. The serializer is utilized on the transmit side, where an integer is converted to bits, and the deserializer does the reverse – it takes a bitstream and converts it into a numeric datatype for more intuitive display and datalogging. Serializer VI prototype with an integer as input and bitstream as output Deserializer VI prototype with a bitstream as an input and an integer as output The following code snippet shows how this FPGA code can be called: Serializer and Deserializer for serial interface data transformation Take an input word (in this case, it’s an unsigned 16-bit integer) Serialize the data (integer → bitstream) Deserialize the data (bitstream → integer) Process the generated data array Plot on a waveform graph for testing and troubleshooting Access support for LabVIEW FPGA projects Conclusion FPGAs provide developers significant flexibility and resource access for high-performance control and monitoring applications, which commonly involve interfacing with peripheral systems via serial standards.
The intent of this article was to provide context on different serial interfaces and how their protocols can be designed into larger LabVIEW FPGA applications. While there are various toolchains and APIs available, we at Cyth have decades of experience designing and developing LabVIEW FPGA applications and are interested in helping you develop or upgrade your future systems. Book a demo or consultation

  • Aircraft Engine Part Inspection Using NI Smart Cameras & LabVIEW | Cyth Systems

    Project Case Study Aircraft Engine Part Inspection Using NI Smart Cameras & LabVIEW Mar 27, 2024 Home > Case Studies > *As Featured on NI.com Original Authors: Daniel Kaminský, ELCOM, a.s. Edited by Cyth Systems Turbine airfoils for aircraft engines The Challenge Automating the deburring and final inspection of turbine airfoils for aircraft engines. The Solution Building a robotics cell based on NI LabVIEW to precisely deburr and shape the turbine airfoils for quality inspection with an NI Smart Camera. To automate the deburring and inspection process for turbine airfoils for aircraft engines, AV&R Vision & Robotics designed a system that uses a six-axis robot to manipulate the airfoil to combine two critical operations. First, we deburr the airfoil using tooling chosen specifically to deburr the dovetail of the part and create a radius on each edge. Then a vision system designed for surface inspection examines the part and records the data based on the part serial number, which is also read using the vision system. We originally developed the system for a large OEM aircraft-engine manufacturer based on a lean manufacturing workflow. The operator loads the airfoil into the work cell after the grinding operation. In addition, we designed the system to be programmable so we can easily adapt it for many other deburring and inspection applications, including consumer goods such as wrenches, medical device implants, surgical tools, automotive components, and a variety of other aerospace engine components. One difficulty involved transferring the existing system architecture over to the new robot cell, which was created using the NI Smart Camera and, when required, laser line scanners. The NI Smart Camera is a CPU and camera bundled into a compact design that transfers image data directly over a local network or Ethernet. This allows the deployment of LabVIEW software directly to the camera to run a product inspection and give you data results in real time. Left: Aircraft engine part undergoing line scan by laser for product profiling. Right: NI Smart Camera assisting with robot movement and part inspection. Automating the Deburring and Inspection Process In the past, operators inspected and deburred different complex and high-precision turbine airfoils using deburring tools to finish the parts and then manually inspected the airfoils to ensure the parts were within a specified tolerance. We developed a cell that can automatically perform these two processes, ensuring every part leaves the cell with the desired quality. After loading the part into the cell, the system initializes and a robot picks the part from the fixture and presents it to a deburring station that removes all the burrs from the root of each airfoil, breaks each edge, and creates a radius on specific edges as per the drawing specifications. Left: Aircraft engine airfoil being deburred and polished using an automated process. Right: Robot cell created by AV&R for automated processes. After the deburring process, the robot presents the airfoil to an NI Smart Camera for inspection to look for random surface defects such as nicks, dents, scratches, and tooling marks on the critical surfaces. The defects are classified according to their shape using the particle analysis tools in the NI Vision Development Module. In addition, the vision system reads the serial number using NI optical character recognition (OCR) algorithms.
After inspection, properly deburred parts are placed on the output of the cell and moved to the next production stage. We used two NI products in the finishing and inspection cell. For the vision system and the surface inspection, we chose the NI Smart Camera because of its industrial design and flexibility. We also used LabVIEW to implement the inspection sequences and for the user interface. Developing the human-machine interface (HMI) in LabVIEW allows the operator to see the status of the system, the part under inspection, and the statistics of each part as it is processed. The operator can view each of the parts presented to the vision system, a pass/fail counter that highlights the number and status of the parts processed, and the results of each inspection process on the HMI. We have used LabVIEW in similar inspection systems in which we built the code for a PC-based system. All the code previously used for the PC was easily transferred to the NI Smart Camera, which allowed us to take advantage of the common platform. By using NI hardware and software, we seamlessly combined the material removal and inspection solution using the framework for previously developed solutions. Original Authors: Daniel Kaminský, ELCOM, a.s. Edited by Cyth Systems
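The case study describes classifying surface defects by shape with particle analysis tools. As a simplified, hypothetical illustration of that kind of shape-based classification (not the actual NI Vision code used in the cell), the sketch below sorts detected blobs by area and elongation; the thresholds and class names are made up.

# Hypothetical sketch only -- not the NI Vision particle analysis used in the cell.
# It shows the general idea of classifying detected blobs ("particles") by simple
# shape descriptors such as area and elongation.

from dataclasses import dataclass

@dataclass
class Particle:
    area_px: float       # number of pixels in the blob
    bbox_w: float        # bounding-box width in pixels
    bbox_h: float        # bounding-box height in pixels

def classify(p: Particle) -> str:
    elongation = max(p.bbox_w, p.bbox_h) / max(1.0, min(p.bbox_w, p.bbox_h))
    if p.area_px < 20:
        return "ignore (noise)"
    if elongation > 5.0:
        return "scratch"              # long, thin particles
    if p.area_px > 400:
        return "dent"                 # large, compact particles
    return "nick / tooling mark"      # everything else flagged for review

for particle in [Particle(900, 35, 30), Particle(150, 60, 4), Particle(60, 9, 8)]:
    print(particle, "->", classify(particle))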

  • Why Choose NI Embedded Systems?

    Cyth Systems | Whitepapers | Unmatched Performance, Flexibility, and Integration | Why Choose NI Embedded Systems? Why Choose NI Embedded Systems? Overview of NI Embedded Systems NI embedded systems include powerful platforms such as CompactRIO (cRIO), Single-Board RIO (sbRIO), and the latest System on Module (SOM) solutions. These platforms are engineered for applications requiring real-time performance, reconfigurable hardware, and modular I/O with deep software integration, primarily through LabVIEW. Key Platforms: CompactRIO (cRIO) : A rugged, modular control system featuring a real-time processor, FPGA, and C Series I/O modules for reliable operation in harsh environments. Single-Board RIO (sbRIO) : A compact, embedded solution that offers similar capabilities as CompactRIO in a smaller form factor for custom designs and lower-cost applications. NI System on Module (SOM) : A flexible embedded system that integrates an FPGA and processor onto a small board for applications requiring custom hardware design. 1. Comprehensive Software Integration with LabVIEW One of the most significant advantages of NI embedded systems is their seamless integration with LabVIEW, a graphical programming environment designed for measurement and control applications. Key Benefits: Graphical Development Environment : LabVIEW allows engineers to design, prototype, and deploy embedded applications without the need for extensive text-based programming. Its dataflow-based approach simplifies development, especially for complex systems. Real-Time and FPGA Programming : LabVIEW provides powerful tools like LabVIEW Real-Time and LabVIEW FPGA, enabling deterministic control, precise timing, and parallel processing within the same environment. Extensive Libraries and Toolkits : NI’s embedded software ecosystem includes toolkits for signal processing, control design, machine learning, and more, accelerating development and improving functionality. Applications: Rapid prototyping of control systems Real-time monitoring and control in automation High-speed data acquisition and signal processing 2. Modular and Scalable Hardware for Versatile Applications NI embedded systems are designed to be modular and scalable, allowing users to easily expand or customize their systems as needs evolve. Key Benefits: Flexible I/O Configuration : With NI’s extensive C Series modules and I/O options, users can select the exact configuration needed for their application. Whether it’s analog input, digital I/O, motion control, or communication interfaces, NI systems can be tailored to specific requirements. Scalability : As applications grow or change, additional modules or systems can be easily integrated into existing setups, making NI platforms highly adaptable to future needs. Multi-Vendor Support : NI embedded systems support third-party sensors, actuators, and communication devices, offering flexibility when integrating with existing equipment. Applications: Industrial automation and process control Distributed data acquisition systems Complex measurement and testing scenarios 3. High Performance and Real-Time Control Capabilities NI embedded systems are designed for applications requiring high performance, real-time operation, and deterministic control. This is achieved through the integration of powerful processors, reconfigurable FPGAs, and real-time operating systems (RTOS). 
Key Benefits: Real-Time Deterministic Control : NI’s real-time processors ensure predictable and consistent execution of control algorithms, critical for time-sensitive applications such as motion control, robotics, and high-speed data acquisition. Reconfigurable FPGA : The FPGA provides unparalleled flexibility, allowing users to customize signal processing, I/O timing, and control logic at the hardware level. This is particularly useful for applications that demand low-latency responses or complex parallel operations. Robust Performance in Harsh Environments : CompactRIO and other NI embedded systems are built to withstand extreme temperatures, vibration, and shock, ensuring reliability in demanding industrial environments. Applications: Precision control in robotics and automation High-speed signal analysis and real-time feedback loops Critical infrastructure monitoring in energy and transportation 4. Rugged and Reliable Design for Industrial Environments NI embedded systems are engineered for durability, making them ideal for deployment in challenging industrial environments. Key Benefits: Rugged Build Quality : CompactRIO systems are housed in robust enclosures that protect against dust, moisture, vibration, and extreme temperatures. They are designed to operate reliably in environments where other systems may fail. Extended Operating Temperature Range : NI embedded systems are designed for environments ranging from sub-zero conditions to high-heat scenarios, making them suitable for use in outdoor installations, automotive testing, and factory floors. Long Lifecycle Support : NI offers long-term support for its embedded platforms, ensuring that industrial applications remain operational for years, even in industries with strict maintenance and certification requirements. Applications: Oil and gas field monitoring and control Automotive testing and validation in extreme conditions Industrial IoT solutions in manufacturing and process control 5. Seamless Connectivity and Integration with Industrial Protocols NI embedded systems support a wide range of industrial communication protocols, enabling easy integration with existing control systems, networks, and sensors. Key Benefits: Support for Industry-Standard Protocols : NI platforms offer native support for protocols such as EtherCAT, CAN, Modbus, Ethernet/IP, and more. This ensures compatibility with a broad range of industrial equipment and simplifies integration into existing control networks. Remote Monitoring and Cloud Integration : NI embedded systems can be connected to cloud platforms for remote monitoring, data analysis, and control, enabling IIoT applications and Industry 4.0 initiatives. Scalable Network Architecture : With built-in Ethernet, serial, and wireless connectivity options, NI systems can be easily integrated into distributed control systems for multi-site monitoring and control. Applications: Industrial automation with real-time communication Smart grid monitoring and energy management Remote data acquisition and analysis in IIoT applications 6. End-to-End Solutions for Faster Time-to-Market NI’s embedded systems offer a complete platform from development to deployment, reducing development cycles and accelerating time-to-market. Key Benefits: Rapid Prototyping and Deployment : LabVIEW’s integrated development environment allows users to quickly prototype, test, and deploy applications on NI hardware without switching between multiple tools. 
Turnkey Solutions : NI provides a range of pre-built solutions and reference architectures for common applications, reducing the time needed to develop custom systems from scratch. Support and Services : NI offers extensive support, including training, consulting, and application engineering services, helping users optimize system performance and achieve faster deployment. Applications: Fast deployment of custom embedded control systems Reducing development time in R&D projects Scaling pilot projects to full production Conclusion NI embedded systems stand out in the market due to their combination of robust hardware, powerful software integration, and flexibility across a wide range of applications. Whether it’s for industrial control, high-speed data acquisition, or real-time signal processing, NI provides a proven platform that scales with your needs. By choosing NI embedded systems, companies gain a competitive advantage in developing reliable, high-performance solutions that meet today’s and tomorrow’s challenges.

  • Increasing Washing Machine Reliability with Hardware in the Loop | Cyth Systems

    Project Case Study Increasing Washing Machine Reliability with Hardware in the Loop Aug 23, 2023 811eb230-8c15-4008-9d59-e8c725bfcb8c 811eb230-8c15-4008-9d59-e8c725bfcb8c Home > Case Studies > *As Featured on NI.com Original Authors: G. Paviglianiti - Whirlpool Fabric Care, Advanced Development Edited by Cyth Systems Washing Machine The Challenge Increasing reliability standards and testing capabilities of our washing machine electronic control boards and implementing an automatic system for embedded firmware validation to save time and resources. The Solution Using NI VeriStand real-time testing software and NI PXI hardware to create a stand-alone system that tests and validates electronic control boards and automatically tests, calibrates, and validates smart algorithms that support Whirlpool 6th Sense Advanced Technology in domestic washing machines. Hardware-In-The-Loop simulation is a technique used for testing complex control systems. It is a system that can simulate scenarios for the testing of an electronic control board. HIL must be able to perform a test with high-speed analog and digital I/O data acquisition to ensure a boards proper function before deployment. We developed the 6th Sense smart control algorithms using rapid control prototyping systems based on the NI LabVIEW Real-Time Module and the LabVIEW Simulation Interface Toolkit. With our company-wide use of this system, we can test new concepts developed with simulation tools directly on a real washing machine. The development process also requires validating engineered control algorithm firmware on the production control board. Algorithm complexity is growing exponentially due to higher quality requirements and challenging cost targets associated with energy label, washing performance, low noise, and advanced features. To handle these new requirements, we designed and implemented a hardware-in-the-loop (HIL) system. Our system serves two main purposes: fast, automatic testing of the low-level signals processed and provided by the production control board for algorithm functionality and minimizing calibration and validation effort with considerable time and resource savings. Performance First We developed a washing machine mathematical model to simulate the main control loops inside the home appliance, such as mechanical, hydraulic, and thermodynamic subsystems. We adapted the I/O interfaces of the model to comply with production control boards. In conjunction with the mathematical model, we designed an external I/O board to act as a bridge and signal processor between the control mainboard and the PXI I/O interfaces. Simulating a whole washing machine requires real-time performance. To achieve this determinism, we chose a 2.2 GHz NI PXI-8110 Intel Core 2 Quad controller combined with an NI R Series Virtex-II multifunction reconfigurable I/O (RIO) module for high I/O capabilities and flexibility. We used NI VeriStand software to integrate our washing machine model. We developed the main part with MathWorks, Inc. MATLAB® and Simulink® software so it is possible to compile, deploy, and run the model on the PXI real-time controller. NI VeriStand performs all I/O mapping. Due to special requirements such as time constraints and numeric integration issues, we designed a specific part of the model directly on the field-programmable gate array (FPGA) RIO module using the LabVIEW FPGA Module. We used NI VeriStand to see and log all low-level signals coming from the control board. 
With the model inline parameters, we can simulate the behavior of different washing machine models with various external conditions. Beyond Rapid Prototyping As our next step, we integrated the HIL test system in our previously developed algorithm calibration system. This system, developed with LabVIEW, automatically drives the washing machine to perform specific tests and prompts the user with instructions to configure the washer load condition. Moreover, it performs defined test plans and automatically computes algorithm calibration parameters. To integrate the two systems, we implemented a LabVIEW application that remotely configures the NI VeriStand environment according to the test plan executed by the calibration system. This way, we can use plug and play architectures to integrate our HIL system with actual algorithm calibrations set up to rapidly scope how different sources of noise affect algorithm calibration. Conclusion We designed our system using LabVIEW and NI VeriStand for several reasons. First, Whirlpool and NI have a long history of more than 10 years that has successfully delivered high-performance features to customers. NI instrumentation and technology are flexible, so we can use the development environment in other product categories. Whirlpool used NI VeriStand for the first time with this project and found it modular and intuitive. Even nonsoftware-expert resources agreed. Furthermore, the backward compatibility and easy customization of NI VeriStand helped us reuse VIs already developed for algorithm calibration applications. With LabVIEW software and NI hardware, we can store technical data in an easy, suitable way, such as the Technical Data Management Streaming (TDMS) format. In addition, laboratory resources can preliminarily manage test data files thanks to user-friendly NI DIAdem software. This has a big impact on the day-by-day task scheduling between engineers and lab tech resources, resulting in overall team efficiency. The MATLAB and Simulink environments can also process the TDMS format, which facilitates synergy between company departments and easy, quick, specific data analysis. The powerful combination of the NI VeriStand platform, LabVIEW FPGA, the real-time PXI module, and years of fast prototype development and experience with NI products helped us quickly and easily design and develop the whole HIL system. Original Authors: G. Paviglianiti - Whirlpool Fabric Care, Advanced Development Edited by Cyth Systems Talk to an Expert Cyth Engineer to learn more
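As a deliberately tiny, hypothetical illustration of the hardware-in-the-loop idea described in this case study (not Whirlpool's VeriStand model), the sketch below closes a loop between a simple plant model and a stand-in controller: the model supplies simulated sensor values, the controller under test returns actuator commands, and the model is stepped forward. All names and constants are illustrative.

# Hypothetical, highly simplified HIL sketch. In the real system the "controller"
# is the production control board and the plant model runs on the PXI real-time
# target; here both are plain Python functions.

def controller_under_test(temp_c: float, setpoint_c: float = 40.0) -> bool:
    """Stand-in for the firmware's heater logic: simple on/off control."""
    return temp_c < setpoint_c

def simulate(duration_s: float = 600.0, dt: float = 0.1) -> float:
    temp = 20.0                                  # initial water temperature (deg C)
    ambient, tau, heat_rate = 20.0, 300.0, 0.05  # first-order thermal model parameters
    t = 0.0
    while t < duration_s:
        heater_on = controller_under_test(temp)  # read "sensor", get actuator command
        dtemp = (-(temp - ambient) / tau + (heat_rate if heater_on else 0.0)) * dt
        temp += dtemp                            # step the plant model
        t += dt
    return temp

print(f"Water temperature after 10 min: {simulate():.1f} degC")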

  • How do you measure torque when designing a test system?

    Cyth Systems | Whitepapers | Sensor Fundamentals | How do you measure torque when designing a test system? How do you measure torque when designing a test system? Measuring Torque This guide helps you understand the fundamentals of torque measurements and how different sensor specifications impact torque sensor performance in your application. After you decide on your sensors, you can consider the required hardware and software to properly condition, acquire, and visualize torque measurements. You can also consider any extra signal conditioning you may need. What is Torque? Force is the measure of interaction between two or more bodies: for every action there is an equal and opposite reaction. Force is also described as a push or pull on an object. It is a vector quantity with both magnitude and direction. Torque is the tendency of a force to rotate an object about an axis. Similar to force being described as a push or pull, torque can be described as a twist to an object. The SI unit of torque is the newton-meter (N·m). In simple terms, torque is equivalent to force times distance, where a clockwise torque or twist is usually positive and a counterclockwise torque is usually negative. Torque sensors are composed of strain gages fixed to a torsion bar. As the bar turns, the gages respond to the bar’s shear stress, which is proportional to the torque. The two common ways to measure torque are with reaction torque sensors and rotary torque sensors. Measuring Torque Reaction Torque Sensors Reaction torque is the turning force that is imposed on the stationary portion of a device by the rotating portion as power is either delivered or absorbed. The torque is created as the load source resists while the drive source tries to rotate. Reaction torque sensors are restrained so they cannot rotate 360 degrees without the cable wrapping up, because the housing or cover is fixed to the sensor element. These sensors are commonly used to measure the torque of a back-and-forth motion. These types of sensors do not use bearings, slip rings, or any other rotating elements in their installations. Rotary Torque Sensors Rotary torque sensors are complementary in design and application to reaction torque sensors, except that the torque sensor is installed in line with the device under test. Because the shaft of a rotary torque sensor rotates through 360 degrees, it must have a way to transfer the signals from the rotating element to a stationary surface. This is accomplished by using one of three mounting methods: slip rings, rotary transformers, or telemetry. Slip Ring Method: In the slip ring method, the strain gage bridge is connected to four silver slip rings mounted on the rotating shaft. Precision brushes make contact with these slip rings and provide an electrical path for the incoming excitation and the outgoing signal. You can use either AC or DC to excite the strain gage bridge. Rotary Transformer Method: In the transformer method, the rotating transformer differs from a conventional transformer in that its primary or secondary winding rotates. One transformer is used to transmit the AC excitation voltage to the strain gage bridge and a second transformer is used to transfer the signal output to the nonrotating portion of the sensor. This means two transformers replace multiple rings, and no direct contact is made between the rotating and stationary elements of the transducer.
Digital Telemetry Method: The digital telemetry method requires no contact points since it consists of a receiver-transmitter module, coupling module, and signal processing module. The transmitter module is integrated into the torque sensor. It amplifies and digitizes the sensor signal onto a radio frequency carrier wave that is picked up by the caliper coupling module (receiver). The digital measurement data is then recovered by the signal processing module. Torque sensor selection primarily depends on your capacity and physical requirements. Choosing the Right Torque Capacity —When taking note of application capacity, determine the minimum and maximum torque you expect. Extra torque and moments can increase the combined stress, which increases fatigue and affects overall sensor accuracy. Any load other than an axial, radial, or bending torque is considered extraneous and should be noted beforehand. If you cannot design or build your setup to minimize the effects of these loads, consult the sensor guide to verify the extraneous loads are within the sensor’s ratings. Physical and environmental requirements —Evaluate any physical constraints (length, diameter, and so on) and the way the torque sensor can be mounted. Consider the environmental factors the sensor will be exposed to, such as wide temperature ranges and possible contaminants (oil, dirt, dust), to ensure proper performance. Revolutions per minute (rpm) —For rotary torque sensors in particular, it is important to understand how long the torque sensor will be rotating and at what speed to calculate the rpm.
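As a quick worked example of the force-times-distance definition above, and of the mechanical power a rotating shaft transmits at a given speed (all numbers are illustrative, not from the guide):

# Quick numeric illustration: torque as force times lever arm, and the
# mechanical power transmitted by a rotating shaft at a given speed.
import math

force_n = 50.0        # 50 N applied at the end of the lever
lever_m = 0.20        # 0.20 m lever arm, perpendicular to the force
torque_nm = force_n * lever_m
print(f"Torque = {torque_nm:.1f} N·m")              # 10.0 N·m

rpm = 1500.0
omega = 2.0 * math.pi * rpm / 60.0                  # angular speed in rad/s
power_w = torque_nm * omega
print(f"Power at {rpm:.0f} rpm = {power_w:.0f} W")  # about 1571 W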

  • Ensuring the Performance of Wind-Turbines with NI CompactRIO | Cyth Systems

    Project Case Study Ensuring the Performance of Wind-Turbines with NI CompactRIO Mar 26, 2024 cea29db3-ade3-4e23-b51f-47c88fac89ef cea29db3-ade3-4e23-b51f-47c88fac89ef Home > Case Studies > *As Featured on NI.com Original Authors: Acoidan Betancort Montesdeoca, Aresse Engineering S.L. Edited by Cyth Systems Small-Scale Wind-Turbines The Challenge Creating a stand-alone, unified platform to acquire and analyze data that certifies small-scale wind turbine efficiency, operation, and structural integrity. The Solution Using NI CompactRIO hardware to build a system that combines multiple distributed sensors to gather data in four main groups: reference condition, operational, loading, and electrical parameters. Small-scale wind turbine installations are growing with the demand for affordable clean energy for isolated consumption as well as the environmental concern of users who try to sustainably use energy resources. These small-scale wind turbines need efficiency, operation, and structural integrity evaluations to verify that they are safe and appropriate for the users and their communities. In cooperation with Kliux Energies, we developed a stand-alone, unified platform to acquire and analyze the data required by international standards (IEC 61400/2, IEC 61400/11 and IEC 61400/12) to certify the operation of small-scale wind turbines and give the manufacturer a required database to optimize their design. Left: Measurement Points Diagram , Right: Torque Campbell Diagram Left: Power Versus Rotor Speed Results , Right: Data Acquisition System Diagram Hardware Setup Based on the standard requirements and the data required to validate the analytical design, we chose the CompactRIO platform for our system. We combined multiple sensors distributed on the turbine and its adjacent devices to capture the data provided by the Kliux GEO4K vertical axis small-scale wind turbine. The data is classified into four main groups: reference condition data, operational data, loading data, and electrical parameters. All the data, up to 34 channels, is acquired, analyzed, stored, and classified based on the reference condition by the CompactRIO installed in a cabinet at the small-scale wind turbine (SSWT) ground. A router connected to the Internet via 3G permits access to instantaneously check the installation status and download the stored data. The system is defined according to the following four subsystems: Reference Condition Data: We compare the production data to the environmental conditions to calculate the energy the turbine can use. The NI cRIO-9014 embedded controller receives wind speed data from a GILL WindMaster sonic anemometer and temperature, pressure, and humidity data from a Vaisala sensor through RS485. Operational Data: We analyze the gearbox and gearbox high-frequency accelerations, acoustic noise, and internal gearbox temperature to perform predictive machine condition monitoring. We do this with a single NI 9234 C Series DAQ module that conditions and inputs data from a PCB tri-axial accelerometer and a G.R.A.S. microphone. An NI 9219 C Series module acquires temperature from a PT-100. Loading Data: To verify the aerodynamic loads of the wind turbine meet the analytical design, we measure the rotor torque and revolutions, the loads at the tower base, the accelerations at the rotor, and the lateral accelerations at different heights of the tower. We can do all this with a single NI 9205 C Series module that reads data from different sensors distributed along the rotor and the tower. 
An NI 9219 module conditions three strain gages that detect axial load and bending moments on the tower. Electrical Parameters: To determine the performance of the turbine, we calculate the electrical energy production using an NI 9205 module to digitize three-phase voltage and current data from Phoenix Contact and CR Magnetics sensors at both the generator and the inverter. CompactRIO Original Authors: Acoidan Betancort Montesdeoca, Aresse Engineering S.L. Edited by Cyth Systems Talk to an Expert Cyth Engineer to learn more
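The electrical-parameters subsystem above computes energy production from three-phase voltage and current measurements. As a rough, hypothetical illustration of the kind of efficiency calculation such data enables (not the deployed Aresse/Kliux analysis code), the sketch below compares electrical output against the wind power available in the rotor's swept area; every number and constant is illustrative.

# Hypothetical sketch only: estimating turbine efficiency (power coefficient Cp)
# from measured electrical output and the wind power available in the swept area.
AIR_DENSITY = 1.225          # kg/m^3 at sea level
ROTOR_SWEPT_AREA = 10.0      # m^2, depends on the turbine geometry

def electrical_power(phases):
    """Sum of per-phase real power, given (V_rms, I_rms, power_factor) tuples."""
    return sum(v * i * pf for v, i, pf in phases)

def wind_power(wind_speed_ms, area=ROTOR_SWEPT_AREA, rho=AIR_DENSITY):
    return 0.5 * rho * area * wind_speed_ms ** 3

p_elec = electrical_power([(230.0, 2.1, 0.95)] * 3)    # three identical phases
p_wind = wind_power(8.0)                               # 8 m/s wind
print(f"Electrical power: {p_elec:.0f} W")
print(f"Available wind power: {p_wind:.0f} W")
print(f"Power coefficient Cp = {p_elec / p_wind:.2f}")  # Betz limit is about 0.593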

  • Measuring Pressure Key Fundamentals Guide | Cyth Systems

    This guide explains the basics of pressure measurements and how different sensor specifications influence pressure sensor performance in your application. Measuring Pressure Key Fundamentals Guide | Cyth Systems Sensor Fundamentals

  • Remote Meteorological Stations Monitored Using CompactRIO | Cyth Systems

    Project Case Study Remote Meteorological Stations Monitored Using CompactRIO Jan 13, 2024 Home > Case Studies > Singapore Meteorological Stations Monitored Using CompactRIO *As Featured on NI.com Original Authors: Mark Kubis - Solar Energy Research Institute of Singapore (SERIS) Andre Nobre - Solar Energy Research Institute of Singapore (SERIS) Edited by Cyth Systems National Solar Repository of Singapore Figure 1. Live upload of an irradiance map on the National Solar Repository of Singapore (NSR, www.solar-repository.sg), a government initiative to promote the use of solar PV in Singapore. The Challenge Mapping Singapore’s ground solar irradiance in real time for solar resource assessment and forecasting in PV applications. The Solution Creating a network of remote meteorological stations based on CompactRIO hardware that are placed throughout the country and monitored on a PC running LabVIEW system design software. With the continuous growth of solar photovoltaic (PV) installations worldwide, this renewable form of energy is contributing more and more towards energy matrices around the world. Due to its intermittent nature caused by constantly changing cloud coverage and weather conditions, a tropical location such as Singapore poses extra challenges for utilities wanting to integrate this renewable energy source with existing fossil fuel power generation capacities. With its research project on solar irradiance assessment and forecasting, SERIS has deployed a comprehensive network of meteorological stations throughout the city-state island on a 5 km by 5 km grid. All remote stations run systems using CompactRIO hardware coded using NI LabVIEW software. The acquired key meteorological parameters feed algorithm development in the search for short-term and intraday irradiance forecasts, which can later be used to predict PV electricity production patterns. Additionally, year-round solar resource measurements help create seasonal and annual irradiation maps for the country. Hardware On the front end, 25 remote stations deployed in the field run on cRIO-9075 real-time controllers. The stations contain an NI 9203 C Series analog current input module and an NI 9217 C Series RTD module connected to relevant meteorological sensors, such as global and diffuse solar irradiance; ambient temperature; relative humidity; wind speed and direction; and air pressure. At the back end, hosted at the SERIS PV Systems Monitoring Laboratory, the central monitoring station (CMS) PC runs software based on LabVIEW and interfaces with the remote stations in the field via a 3G network in real time.
NI CompactRIO System Description The entire system is configured to perform the following series of tasks: Remote logging—All data is sampled at 1 Hz and logged at one-minute intervals at each of the remote stations Data collection—All remote station communication with the CMS takes place via a 3G Internet network, transmitting both in real time (every second and every minute) and in “research packets” (prepared data for scientists at the end of every day) Data storage—Data is stored in binary format for fast transfer and processing in databases and statistical agents Data publishing services—Researchers can easily tap into historical data sets that can be automatically set or manually prepared Alarm checks—The CMS monitors station health through “heartbeats” received every minute, reporting system-critical characteristics such as transmission status, memory levels, and CPU usage Time synchronization—The system applies a time-correction routine every time a CompactRIO clock drifts beyond 300 ms from the SERIS high-precision time server Key Monitoring Infrastructure Characteristics and Highlights Constantly hot and humid Singapore weather poses extra challenges for remote monitoring components. With rugged NI hardware, SERIS built an extremely tough system that features greater than 99.8% availability for its research data throughout a vast amount of stations on the island. The system also offers seamless time synchronization. All of the CompactRIO systems deployed in the field are synchronized to a dedicated time server in the institute. This guarantees a maximum drift between clocks of 300 ms, which is paramount for research in the correlation of spatially distributed stations. Additional key features of the system include: Remote diagnostics—Stations deployed in the field can be adjusted remotely for tasks such as software updates Alarm functions (system)—Alarms monitor station health Alarm functions (sensors) —Extra statistical routines written using LabVIEW perform checks for parameter ranges, minimizing downtime if a sensor fails in the field Autodownload routines—Research data downloads every day after sunset, so scientists can readily work on new algorithms. Data on demand—Scientists and engineers can easily retrieve data when needed Real-time system visualization capabilities—One-second “live” data displays are available as user interfaces (or “players”) to any PC connected to the Internet Live Irradiance Map Creation After data arrives at the CMS from the remote stations, scientists can run calculations, plots, and algorithms by tapping into tailored databases. Additionally, by selecting and grouping live data into subsets in real-time, using custom-designed LabVIEW software, SERIS created live solar irradiance mapping capabilities, which are then made available through the user interface at the SERIS PV Systems Monitoring Laboratory. Irradiance readings from 25 stations are extracted within one second and made available for a color interpolation algorithm. The map updates every two to three seconds, depending on network transmission delays. Using the map interactive feature, scientists can read irradiance values at a given cursor position as well as at a location chosen by entering a postal code. SERIS System Benefits SERIS is at the forefront of state-of-the-art research in solar cells, modules, systems, and energy-efficient buildings tailored for tropical regions. 
Its PV grid integration research efforts are facilitated through the comprehensive network of remote meteorological stations described here. The remote monitoring network runs on NI hardware and software, guaranteeing excellent data availability and reliability for SERIS researchers and engineers, and in the near future, other stakeholders, such as public utilities. Through this rugged, spatially resolved network of meteorological stations, SERIS researchers are able to develop forecast algorithms to predict solar energy resources ahead of time for PV applications. With the massive adoption of solar PV throughout the world, especially in new, untapped markets in the tropics and other developing countries, it is only a matter of time before renewable energy systems such as PV contribute considerably to electricity grids everywhere, and, in the process, make use of powerful and comprehensive remote data monitoring capabilities to gather additional parameters needed to smoothly integrate variable energy sources. Talk to an Expert Cyth Engineer to learn more
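The live irradiance map described above is produced by a color interpolation algorithm written in LabVIEW. As a rough, hypothetical illustration of one common spatial-interpolation approach (not necessarily the one SERIS used), the sketch below applies inverse-distance weighting to point readings; the coordinates and irradiance values are made up.

# Hypothetical sketch only: inverse-distance-weighted (IDW) interpolation of point
# irradiance readings onto a grid location, the general idea behind building a
# live irradiance map from spatially distributed stations.

def idw(x, y, stations, power=2.0):
    """Interpolate irradiance (W/m^2) at (x, y) from (sx, sy, value) stations."""
    num = den = 0.0
    for sx, sy, value in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value                      # exactly on a station
        w = 1.0 / d2 ** (power / 2.0)         # weight falls off with distance
        num += w * value
        den += w
    return num / den

stations = [(0.0, 0.0, 820.0), (5.0, 0.0, 640.0), (0.0, 5.0, 910.0), (5.0, 5.0, 450.0)]
print(f"Estimated irradiance at grid point (2, 3): {idw(2.0, 3.0, stations):.0f} W/m^2")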

  • Improve your PRODUCT MANUFACTURING projects | Cyth Systems

    Manufacturing Test involves everything we buy – cars & car parts, TVs & remote controls, sprinklers & sprinkler controllers, and consumer electronics.... INDUSTRIES Product Manufacturing Home > Industries > Product Manufacturing Improving your MANUFACTURING project PERFORMANCE Manufacturing Test affects everything we buy – cars, televisions, sprinklers, electronics, and more. Our proven platform from National Instruments has been used in every industry, every nation, thousands of applications, and billions of parts tested. Product Manufacturing Industry Segments We recognize the differences in your specific industry segments, and we have delivered products and services to all segments of the Product Manufacturing Industry. Consumer Electronics Machinery & Equipment Manufacturing Scientific Instruments Food & Beverage Consumer Products Sporting Goods PRODUCT MANUFACTURING INDUSTRY Project Examples Our platform for Automation, Test, and Control is a good fit for many types of projects in the Product Manufacturing Industry. The following projects were completed and deployed using our products and services. Custom EMF Measurement Solution Doubles End-of-Line Test Throughput Machine Vision Inspection of Implantable Electrode Wire to Combat Parkinson's Disease Multi-PCBA Test Solution Delivers Broad Functional Test Coverage for FDA Compliance Machine Vision System Inspects Medical Guide Wire Electrode for Surgical Safety System

  • PXI Platform Enables the Automated Test of BOSCH AdBlue Pump | Cyth Systems

    Project Case Study PXI Platform Enables the Automated Test of BOSCH AdBlue Pump Mar 27, 2024 Home > Case Studies > *As Featured on NI.com Original Authors: Jiří Kubíček, Robert Bosch Edited by Cyth Systems Diesel motor pump The Challenge Creating an automated test station to verify AdBlue diesel motor pump modules using a hydraulic test method. The Solution Using the NI PXI platform and NI LabVIEW software platform to develop a testing station that can acquire, process, and package data while communicating with other third-party components in the process of quality control testing. Robert Bosch České Budějovice is focused on developing and manufacturing components for passenger cars for many established automotive manufacturers. One of these manufactured components is the AdBlue pump (Figure 1). AdBlue is a diesel exhaust fluid standardized as ISO 22241. Car manufacturers use diesel exhaust fluid to limit the NOx concentration in diesel exhaust emissions. This helps them meet the limits of the European Euro IV norm and higher. The pump must inject the AdBlue liquid at a pressure of 4.5-8.5 bar into the exhaust pipe close to the catalytic converter. Every manufactured pump goes through multiple pressure tests before being shipped to the customer. When we started looking for a solution for automated testing, we already had a PLC that controlled the I/O valves and communicated with our manufacturing execution system (MES). Based on a positive experience on another project, we decided to use NI PXI hardware and NI software to build our test system. The main advantages of the NI PXI platform include a robust industrial form factor, the ability to add new modules to modify the measurement, and simple programming. One reason we chose the NI platform is high-speed acquisition (10 kS/s or higher), which we could not do with the current PLC system. Left: Test station fixture featuring NI PXI hardware, and NI LabVIEW and TestStand software. Right: LabVIEW user interface. System Architecture As mentioned, we used the PLC to control the valves and pumps. It acts as a master that sends a request to the PXI system. This request contains information about what type of test should be carried out and some configuration parameters. The PXI then carries out the test and sends back the measured data and test result that is saved into a database. Figure 2 shows the system architecture. The PXI system contains a built-in controller and two additional PXI-6281 multifunction modules. One module handles PWM generation and the other one indirectly measures the current using a shunt resistor. During the test, we need to measure the pressure in the outlet of the AdBlue pump and control the I/O valves using PWM. Once the tester gets a TCP message from the PLC with test configuration information, it can start the test sequence. As the pump does not have a dedicated pressure sensor, we must use an indirect method of pressure measurement. The indirect method uses measurement of the current that flows through the winding of the main magnet. On this current curve, we can identify two inflection points (Figure 3). The first one is caused by the valve starting to open. The second one is caused by the valve reaching the final position. From these inflection points, we can calculate the level of pressure in the pump and compare it with required values. We originally wrote the calculation of pressure from the current curve using The MathWorks, Inc.
MATLAB® software. We wanted to use the code we had, so we used a structure in LabVIEW called MathScript Node. We used this to import the current .m files and call them from LabVIEW on a station with no MATLAB installed. When we started code development, we were coding everything in LabVIEW. With support from local NI representatives, we discovered an easier means of test management with TestStand software. We attended some NI trainings to learn how to use the tools. We managed to create an architecture that contains a user interface, test execution framework, and test modules dedicated for each step of the test. We chose TestStand as the test execution framework because it saves time during the development phase. We do not need to develop the parts of the test framework that are the same for every test, such as test steps execution, logging, and reporting. We can easily configure these features in TestStand. It is also beneficial for standardization because the test framework is the same for every test. Test Stations in the Manufacturing Area Currently, we have 12 testers running on the production line. However, we are planning a tester for a new generation of the pump that will be more complicated. This test requires communication with a pressure sensor through the SENT protocol, which is a robust serial communication protocol commonly used for lower-cost sensors in the automotive industry. With the need to test both the messages sent and the physical layer of SENT communication, we are evaluating PXI as the main test controller that could also read the data from the SENT sensors and make the test of the physical layer of the SENT communication. Using the NI PXI platform saved us time on development of the test framework and empowered us to build a reconfigurable test station. We gained valuable experience during the first PXI tester that we can use in future projects. Original Authors: Jiří Kubíček, Robert Bosch Edited by Cyth Systems
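The indirect pressure measurement in this case study hinges on locating two inflection points on the solenoid current curve. As a simplified, hypothetical illustration of that idea (the production calculation runs as MATLAB .m files through a MathScript Node, not as the Python below), the sketch flags sign changes of a smoothed second difference on a synthetic waveform; the waveform and thresholds are invented for illustration.

# Hypothetical sketch, not the Bosch implementation: finding candidate inflection
# points of a sampled current curve by looking for sign changes of the smoothed
# second difference.
import math

def moving_average(samples, window=5):
    half = window // 2
    return [sum(samples[max(0, i - half): i + half + 1]) /
            len(samples[max(0, i - half): i + half + 1]) for i in range(len(samples))]

def inflection_indices(samples):
    smooth = moving_average(samples)
    second = [smooth[i - 1] - 2 * smooth[i] + smooth[i + 1] for i in range(1, len(smooth) - 1)]
    return [i + 1 for i in range(1, len(second))
            if (second[i - 1] > 0) != (second[i] > 0)]

# Synthetic solenoid-current-like curve with two bends (valve starts to open,
# valve reaches its end stop), sampled every millisecond.
t = [i * 1e-3 for i in range(200)]
current = [1.0 / (1 + math.exp(-(x - 0.05) * 200)) +
           0.5 / (1 + math.exp(-(x - 0.12) * 200)) for x in t]
print("Candidate inflection samples:", inflection_indices(current)[:5])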

  • Certified LabVIEW Architect | Cyth Systems

    Certified LabVIEW Architect (CLA) The Certified LabVIEW Architect (CLA) is the final step in the three-part LabVIEW certification process. The exam verifies the user’s ability to build a sensible VI hierarchy and project plan for delivering an application that fulfills a set of high-level requirements. Certified Architects provide technical leadership, helping ensure the developers on their team follow best practices and become more competent and efficient developers. 1 Review the Requirements 2 Prepare for the Exam 3 Schedule an Exam 4 Share your Success 5 Recertify Review the Requirements Step 1. The Certified LabVIEW Architect (CLA) certification is the highest level of LabVIEW certification that is valid for 4 years. Recertification is required to maintain credentials. Benefits include the use of the professional certification badge logo and related digital credentials. NI recommends that you have 36 months or more experience of developing medium to large applications and that you have mastered the content in the Software Engineering for Test Applications Training Courses. Exam Details Prerequisite: An active Certified LabVIEW Developer (CLD) certification Format: VI and application architecture development Duration: 4 hours Location: Online Prepare for the Exam Step 2. Preparing for Your Exam CTA Exam Topics TestStand Advanced Architecture Series Step 3. Schedule the Exam Once you have completed your exam preparation and have met all prerequisite requirements, you are ready to schedule your exam. For in-person exam registration, please email us at solutions@cyth.com Share your success Step 4. 1. When you complete the CLA exam, your exam will be graded by engineers at NI. 2. You will be advised if you passed or failed. -If you passed you will receive a notification email with your digital credential. -If you have not received your notification email within 3 days of receiving the notification that you passed the assessment, email services@ni.com 3. To share your badge, please follow these instructions: a. Log into your account at Credly b. Click on the profile icon at the top right-hand corner of the page and go to “Badge Management” c. Click on the badge you are looking to share d. Scroll down and click “Share” e. You will be brought to the “Share Badge” screen where you can find different tabs directing you to connect your social media accounts and share your badge Recertify Step 5. Certified professionals can recertify using one of two methods: -Recertification exam -Recertification by points Recertification Interval 4 Years Recertification Exam Details Format: Multiple Choice Duration: 1 hour Location: Online Prepare: CLA-R Exam Preparation Resources Recertification by Points -By participating and completing approved activities, certified professionals can earn and accumulate points redeemable toward recertification. For information on recertifying with points. Enroll

  • What is Single-Board RIO (sbRIO)?

    Cyth Systems | Whitepapers | What is Single-Board RIO (sbRIO)? Core Components of Single-Board RIO Real-Time Processor The sbRIO includes a real-time processor running a reliable operating system (NI Linux Real-Time), which provides deterministic performance for embedded applications. It allows developers to deploy and execute high-level code, manage I/O, and interface with external systems. Field-Programmable Gate Array (FPGA) A key differentiator of the sbRIO is its integrated FPGA, which offers high-speed processing for time-critical tasks. The FPGA can be custom-programmed using LabVIEW FPGA, enabling precise control over I/O timing, high-speed signal processing, and rapid prototyping for custom applications. Analog and Digital I/O The board integrates multiple analog and digital I/O channels, enabling direct connection to sensors, actuators, and other peripherals. The flexible I/O configuration simplifies the development process and minimizes the need for additional hardware. Benefits of Using Single-Board RIO Integrated Platform By combining the real-time processor, FPGA, and I/O on a single board, sbRIO reduces system complexity, enabling faster deployment and lower development costs. Customizable and Scalable The reconfigurability of the FPGA and the flexible software environment (LabVIEW) allow sbRIO to be tailored to unique application requirements. The platform scales easily from prototyping to production. Reliable and Rugged Design sbRIO boards are built for harsh environments with a rugged design that supports extended temperature ranges, vibration, and shock. They are ideal for industrial and embedded applications in critical settings. Common Applications Industrial Control and Automation sbRIO is often deployed in machinery control, process automation, and custom automation systems where reliable and deterministic control is essential. Monitoring and Data Acquisition With its real-time processing and customizable I/O, sbRIO excels in monitoring systems, providing high-speed data acquisition and real-time analytics. Embedded Control Systems The platform is widely used in robotics, medical devices, and smart infrastructure where embedded control, precise timing, and high reliability are required. Software Development with LabVIEW One of the key strengths of sbRIO is its tight integration with NI’s LabVIEW software. Developers can design, simulate, and deploy both real-time and FPGA-based applications in a single environment. LabVIEW’s graphical programming interface simplifies development and reduces time-to-market. Comparison to Other RIO Solutions Single-Board RIO is part of the broader RIO family from NI, including CompactRIO and FlexRIO. This section compares sbRIO with these platforms in terms of performance, form factor, and use cases. Conclusion Single-Board RIO is a versatile, powerful embedded platform designed for engineers and developers who need a flexible and reliable control system. With its integrated real-time processor, FPGA, and flexible I/O, sbRIO delivers a scalable solution for a wide range of applications, from rapid prototyping to full-scale production. Additional Resources National Instruments Website: sbRIO Product Page Case Studies: sbRIO in Industrial Automation LabVIEW FPGA and Real-Time Training Resources
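Because an sbRIO application splits work between LabVIEW FPGA code and a host-side program, the host typically talks to the compiled bitfile through front-panel registers and DMA FIFOs. As a minimal illustration of that interaction (not part of the whitepaper), the Python sketch below uses NI's nifpga package; the bitfile name, RIO resource, and register/FIFO names are hypothetical placeholders for whatever the FPGA VI actually exposes.

    # Host-side sketch using NI's nifpga Python API. The bitfile, resource,
    # and register/FIFO names are hypothetical examples.
    from nifpga import Session

    with Session(bitfile="sbrio_app.lvbitx", resource="RIO0") as session:
        session.reset()
        session.run()

        # Write a control register and read back an indicator
        session.registers["Setpoint"].write(1000)
        print("Measured:", session.registers["Measured Value"].read())

        # Stream acquired samples from a target-to-host DMA FIFO
        fifo = session.fifos["AcquisitionFIFO"]
        read_result = fifo.read(number_of_elements=512, timeout_ms=1000)
        print("First samples:", list(read_result.data[:8]))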

  • Cluster Pumping Station Automation of Oil and Gas Reservoir Pressure Maintenance | Cyth Systems

    Project Case Study Cluster Pumping Station Automation of Oil and Gas Reservoir Pressure Maintenance Mar 27, 2024 Home > Case Studies > *As Featured on NI.com Original Authors: Ovak Technologies Edited by Cyth Systems Cluster Pumping Station Automation of Oil and Gas The Challenge Developing a reservoir pressure maintenance automation system to remotely control, monitor, and protect devices in the pumping station. The Solution Combining the benefits of the NI CompactRIO FPGA and processor with various I/O modules to create a rugged monitoring system that records multiple data formats at varying rates, synchronizes data, communicates with at least six types of third-party systems with 96 or more devices per system, and performs real-time analysis to remotely monitor 10 types of sensors (at least 42 sensors per system) for extended durations. The user-friendly application was developed using NI LabVIEW system design software, and it helps configure each automation system for the appropriate devices and sensors for the specific station before installation. The modern approach to oil production automation dictates strict requirements for control and monitoring of cluster pumping station (CPS) hardware and software. This is because of the depletion of oil reservoirs, the high cost of electricity, and the tendency of oil companies to reduce well maintenance costs, increase operational functionality, and decrease human involvement. Centrifugal Sectional Pumps The CPS is designed to pump water into the oil reservoir. The system contains power and pump blocks. The pump blocks increase water pressure to a level that provides water injection into wells of the reservoir pressure maintenance system. The power blocks are used for automatic control of pump units, parameter control, alarming, automatic pump unit shut-off, reserve unit switching, and equipment protection when process parameters change beyond admissible limits. The automated control system is based on the reconfigurable CompactRIO platform. This technology is customized to be used as an embedded automated technological process control system working 24/7 in real time over a wide temperature range (-40 °C to 70 °C) under elevated vibration conditions, and it can withstand fast impacts of up to 50 g. The controller allows users to connect to different types of third-party devices and analog/digital sensors. The developed software is an automated monitoring and control system based on data collected from liquid meters, climatic condition sensors in production areas, pump units, electric meters, tanker controllers, alarming devices, and devices for modem queries. The system can work with both NI cRIO-9073 and NI cRIO-9074 controllers, depending on the customer’s needs. CPS Automation System Left: The CPS Automation System Setup with the NI cRIO-9074, Right: System Architecture The software is based on the LabVIEW Real-Time Module. With LabVIEW, it is easier and faster to produce code with a nice graphical user interface. LabVIEW provides us with a platform to easily write automation code for systems such as cluster pump stations. The whole system was developed with LabVIEW system design software.
Because we chose the CompactRIO platform, an advanced embedded control and monitoring system that includes a real-time processor, field-programmable gate array (FPGA), and interchangeable C Series modules, we also used the NI LabVIEW Real-Time and NI LabVIEW FPGA modules. This allowed us to create a reliable, stand-alone embedded system with a graphical programming approach that runs for extended periods of time. The NI Modbus Library for LabVIEW was also used for Modbus communication between the CompactRIO and third-party devices such as the Set of Technical Tools (STT) Napor and Canal units and the IVK, MR, and Rapira flow meters. We selected NI cRIO-9073 and NI cRIO-9074 controllers, an NI 9871 serial interface module, an NI 9208 analog input module, an NI 9426 digital input module, and an NI 9476 digital output module. Serial interfaces communicate with third-party devices, such as STT Napor, STT Canal, the IVK, MR, and Rapira flow meters, and the Mercury electric meter. Analog inputs read pump intake pressure, pump discharge pressure, temperature, and gas concentration. Digital inputs read the state of the doors, valves, and pumps, and the digital outputs control the state of the pumps (on/off). The Benefits of Our New Solution Our new solution is safer, easy to maintain and monitor, flexible, and more energy efficient. It has an easy-to-use graphical user interface, and information can be communicated in real time either via a wireless radio channel or Ethernet. The system works with a variety of equipment manufactured in the United States and Russia and can operate in harsh climates. The NI software and hardware configuration allowed us to develop the CPS automation system in four months. This system replaced the old equipment, 70 percent of which was no longer in operational condition. Overall, we chose the CompactRIO embedded system because it provides a complete solution that contains a real-time processor to perform time-critical algorithms, an extendable interface to handle various sensor signals simultaneously, and is fully supported by LabVIEW. In addition, the modularity of the equipment gives us the option of future system expansion. We achieved excellent results by developing and installing such a complex system in such a short amount of time. We also had frequent interaction with the NI Russia office throughout the project. Original Authors: Ovak Technologies Edited by Cyth Systems Talk to an Expert Cyth Engineer to learn more
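The Modbus link above was implemented with the NI Modbus Library in LabVIEW. Purely as a text-form illustration of the same idea, the Python sketch below polls two holding registers from one serial device using the pymodbus package; the port, slave address, and register map are hypothetical, and keyword names differ slightly between pymodbus versions.

    # Illustrative Modbus RTU poll (not the NI Modbus Library used in the
    # project). Port, slave ID, and register addresses are hypothetical.
    from pymodbus.client import ModbusSerialClient

    client = ModbusSerialClient(port="/dev/ttyS1", baudrate=9600,
                                parity="N", stopbits=1, bytesize=8)
    if client.connect():
        # e.g. pump intake and discharge pressure as two raw register values
        result = client.read_holding_registers(address=0, count=2, slave=1)
        if not result.isError():
            intake_raw, discharge_raw = result.registers
            print("Intake:", intake_raw, "Discharge:", discharge_raw)
        client.close()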

  • CompactRIO Enables Undergraduate Power Electronics Education | Cyth Systems

    Project Case Study CompactRIO Enables Undergraduate Power Electronics Education Nov 7, 2025 Home > Case Studies > *As Featured on NI.com Original Author: Mats Alaküla, Lund University Edited by: Cyth Systems Project Summary Lund University integrated the NI CompactRIO into its power electronics lab, teaching students real-time power electronics with research-grade systems. System Features & Components Real-time operating system (RTOS) enabled speed control and PID optimization FPGA-level logic enabled implementation of hysteresis bounds and the simplification of overall system architecture Live data visualization and parameter adjustment enabled through HMI Outcomes Achieved “fast computer” model levels of determinism, enabling real-world levels of system responsiveness Reliable control loop execution delivers continuous live monitoring Equipped undergraduate students with hands-on experience using research-grade control systems Technology at-a-glance Hardware: NI cRIO-9063 chassis NI cRIO-9038 chassis Software: LabVIEW LabVIEW FPGA LabVIEW Real-Time Control Theory in Practice In university electrical engineering labs, students learn how motor drives and power electronics operate. These types of systems require microsecond-level precision to ensure continuous and smooth operation of motors. For educators, it can be a challenge to bridge the gap between theoretical “fast computer” models and real-world control systems that introduce computational delays. Lund University in Sweden needed to address this education gap to ensure its students could experience firsthand how control theory performs in a real-world context. Determinism Requirements Professor Mats Alaküla needed to teach students how to control electrical motor drives and power electronics systems with sub-millisecond time constraints. Maintaining currents within safe operating limits requires voltage control within hundreds of microseconds. Their existing MATLAB/Simulink and dSPACE technology platform could not keep pace with modern electrical drives requiring increasingly higher frequencies. The Windows-based monitoring system interfered with control, disrupting the simulation of a realistic control system. The majority of students’ time was spent creating workarounds for hardware limitations, not mastering control algorithms themselves. Lund University needed a solution that would prepare their students for the real-world scenarios they would encounter in their future careers. Hysteresis Control Enabled The university chose to adopt the NI CompactRIO platform, paired with the LabVIEW Real-Time and FPGA Modules, to implement a control architecture that would eliminate computation delays. NI cRIO-9063 & NI cRIO-9038 CompactRIO controllers.
System Architecture & Capabilities FPGA-based current control: time-critical electrical current control implemented directly on the FPGA Real-time processing: slower control loops, for ensuring optimal system performance, run on the real-time operating system (RTOS), including engine speed trajectory following and continuous PID parameter recalculation Windows OS: live data visualization and datalogging enabled through a user interface housed on the Windows OS Integrated resolver signal processing: cRIO I/O availability and measurement speed capabilities eliminated the need for dedicated resolver circuits Hysteresis control capability: FPGA measurement speed enabled direct current control with real-time three-phase current visualization in real-imaginary planes Sub-100 microsecond voltage control: implemented on FPGA and RTOS to maintain current within acceptable intervals required by electrical drives Learn FPGA Programming Fundamentals The responsiveness of the cRIO enabled the implementation of control methods that their previous solution couldn’t support. Direct current control via hysteresis required high determinism to keep current within precise tolerances. Applied Motion Stepper Motor Drives, controlled and communicated with using NI LabVIEW software. Traditional rotor position measurement requires high-frequency input signals and additional processing circuits. The measurement speed and I/O flexibility of the NI cRIO platform made it possible to handle resolver signal processing directly and simplify the system architecture students interact with. The self-contained nature of the cRIO, paired with its ability to push live updates to host computers, eliminated the Windows OS interference problems that previously disrupted control loops. Real-World "Fast Computer" The architecture enabled by the technology platform eliminated the gap between theory and practice for these students, as the solution responds as theoretical “fast computer” models would, making control theory directly applicable to real-world systems. Lund University’s introduction of the NI CompactRIO platform to undergraduate students enabled continuity and best practice sharing with graduate students already using the platform for advanced electrical machine development. The university is now fully capable of preparing their students for their future careers by enabling them to gain hands-on experience with the optimal control strategies driving the pace of development in modern power electronics engineering. Let's Talk Original Author: Mats Alaküla, Lund University Edited by: Cyth Systems
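The hysteresis current control described in this case study reduces, per phase and per sample, to a comparison against a tolerance band around the reference current. The Python sketch below is only a conceptual model of that decision logic, with hypothetical names and band width; the actual implementation runs in LabVIEW FPGA at microsecond rates.

    # Conceptual hysteresis (bang-bang) current controller for one phase.
    def hysteresis_step(i_measured, i_reference, band, switch_high):
        """Return the new switch state for one control step."""
        if i_measured > i_reference + band:
            return False        # current too high: switch low so it falls
        if i_measured < i_reference - band:
            return True         # current too low: switch high so it rises
        return switch_high      # inside the band: keep the previous state

    # Example: track a 2 A reference with a +/-0.1 A band
    state = True
    for i_meas in (1.85, 1.95, 2.05, 2.15, 2.08):
        state = hysteresis_step(i_meas, 2.0, 0.1, state)
        print(i_meas, "->", "high" if state else "low")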

  • MD&M West 2024

    Events | MD&M West 2024 February 6, 2024 Anaheim, CA, USA MD&M West 2024 is a medical design and manufacturing trade show held in Anaheim, California, from February 6-8, 2024. It's a leading event in the US medical device and manufacturing industry, bringing together professionals to showcase new technologies, network, and learn about the latest advancements. The event featured over 1,600 exhibitors and 14,000 trade visitors, including decision-makers from top medical technology companies. MD&M West is part of a larger advanced manufacturing event, offering specialized sectors for medical technology, automation, design & manufacturing, and plastics. Attendees could explore innovations in areas like medical devices, digital health, automation, and more. The event also included educational seminars and conferences, such as the "Discover. Engineer. Build." conference, focusing on research and development, design, and process engineering in the medical technology field. Key aspects of MD&M West 2024: Exhibitors: Over 1,600 companies showcased their products and services. Attendees: 14,000 trade visitors, including key decision-makers. Focus Areas: Medical devices, digital health, automation, design & manufacturing, and plastics. Educational Opportunities: Seminars, conferences, and workshops. Networking: Opportunities to connect with industry experts and peers.

  • High-Voltage Dielectric Test System for Magnetic Couplers with CompactRIO | Cyth Systems

    Project Case Study High-Voltage Dielectric Test System for Magnetic Couplers with CompactRIO Sep 17, 2024 Home > Case Studies > *As Featured on NI.com Original Authors: Flavio Floriani, INTEK S.p.A. Laboratory, Sector Manager Edited by Cyth Systems High-voltage simultaneous test of 50 magnetic couplers using CompactRIO. The Challenge INTEK needed to invent and build a fully automatic measurement system that could test up to 50 magnetic couplers simultaneously while they are subjected to a high voltage (up to 8 kV rms) and placed in a 150 °C oven. The system also needed to record the mean time to fail and monitor the overvoltages generated when a device breaks. The Solution INTEK used the CompactRIO platform with FPGA technology and LabVIEW to develop a system with four functional levels. The system combines the power of NI hardware with the flexibility of LabVIEW. The use of a magnetic coupler, or the device under test (DUT), is similar to that of an optocoupler. It must guarantee the insulation between two points at different potentials to satisfy the safety standard (SOT-24 package). A test required in the reference standard is to estimate the mean time to fail (MTTF) by applying a 50 Hz sinusoidal high voltage and recording the time to breakdown. Once enough data has been collected, we can use a statistical technique to parametrize a Weibull distribution and predict lifetime. When a failure occurs, the DUT looks like a short circuit (an internal discharge path shorts the two points), and when the DUT is in normal condition it looks like a small capacitor. Magnetic Couplers (Credit: https://www.magnetictech.com/magnetic-couplings/ ) Right: Magnetic Coupler Test LabVIEW User Interface After three years of study on how to manage the AC high voltage (up to 8 kV rms) on these small DUTs (simpler 15-position equipment has run continuously for two years and collected a lot of data), we can focus on the main issues emerging from these kinds of tests: - How to cut the current that flows through a DUT when it fails, without generating an extra voltage (which could damage all the other DUTs) - How to use low-voltage components to keep equipment dimensions small and save on cost - How to manage wiring efficiently and safely merge high-voltage circuits with low-voltage circuits Combining the results of these studies with NI technologies, we have developed a system architecture that includes a cRIO-9035 controller and two NI-9205 analog input modules to read the current in each of the 50 circuits and read the high voltage. We acquired the current by reading the voltage drop across shunt resistors and acquired the voltage by reading the output of a customized high-voltage divider developed specifically for this application. The system also includes two NI-9476 digital output modules to command the 50 relays that cut off the current when a breakdown occurs. We also used an NI-9217 RTD module to acquire the temperature inside the oven through a PT100. We developed a four-level software architecture using LabVIEW. The FPGA runs the part of the code that detects the fault current and commands the cut-off relay. Because we need speed and determinism, the FPGA is perfect for this application. With some optimization, we finally read the RMS fault current (calculated over a 60 ms period) and ensured fault clearing in less than 100 ms.
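The fault-detection path just described (a 60 ms RMS window feeding a trip decision in under 100 ms) can be modeled in a few lines. The Python sketch below is only a conceptual host-side model with a hypothetical sample rate and threshold; the real system computes the windowed RMS and drives the cut-off relays in LabVIEW FPGA.

    # Conceptual model of per-channel fault detection: RMS of the shunt
    # current over a 60 ms window, trip the cut-off relay above a threshold.
    from collections import deque
    from math import sqrt

    SAMPLE_RATE_HZ = 10_000                          # hypothetical
    WINDOW_SAMPLES = int(0.060 * SAMPLE_RATE_HZ)     # 60 ms window
    TRIP_THRESHOLD_A = 0.005                         # hypothetical RMS limit

    window = deque(maxlen=WINDOW_SAMPLES)

    def process_sample(current_a, open_relay):
        """Add one sample; open the relay if the windowed RMS trips."""
        window.append(current_a)
        if len(window) == WINDOW_SAMPLES:
            rms = sqrt(sum(x * x for x in window) / WINDOW_SAMPLES)
            if rms > TRIP_THRESHOLD_A:
                open_relay()        # cut off the supply current to the DUT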
It is important to clear the fault quickly because the current damages the DUT, and the customer needs to analyze the DUTs after the test to study and improve the technology. We developed a kind of scope with a pre-trigger function and ran it in a parallel loop. Every 100 ms, the scope checks the instantaneous maximum voltage value and records and saves the waveform if the alarm threshold is exceeded. We can easily edit all the scope parameters such as pre-trigger time, sample rate, and more on the front panel. We used DMA and FIFO structures to achieve high speed and share data within the project. We also monitored the oven temperature at a lower sample rate in another parallel loop and compared it with an alarm threshold. Every time a DUT fails, the FPGA VI changes the state of a shared variable in the project, and the VI running in the real-time environment detects the change. By using the real-time features, we could easily create a VI that automatically acquires the time to fail and saves it directly to a file on the CompactRIO system. This makes it easy to download that file during or at the end of the test and have all the required information. The real-time VI is an auto-running executable on the system and has no user interface, which ensures stability. We created another VI inside the project and converted it to an EXE file. This VI can be run on any PC connected to the same LAN network to control the state of the whole system (alarms, state of DUT, overvoltages, temperature, and more). The technicians can easily monitor the test status every day or when needed (a test session can go on for longer than two months). It can also be used during debug operations. We also created a VI with the channel status only and converted it to a web page. The customer can connect directly to the system and check the status of the DUTs, but cannot interfere with the apparatus settings. Original Authors: Flavio Floriani, INTEK S.p.A. Laboratory, Sector Manager Edited by Cyth Systems Talk to an Expert Cyth Engineer to learn more

  • Why Choose NI Embedded Systems?

    NI embedded systems include powerful platforms such as CompactRIO (cRIO), Single-Board RIO (sbRIO), and the latest System on Module (SOM) solutions. Why Choose NI Embedded Systems? Unmatched Performance, Flexibility, and Integration
