CoroWare’s work in parallel programming includes developing stream-processing tools and hardware-based acceleration for several software vendors and R&D groups.
Stream processing is a programming model that enables faster and more efficient execution of data-processing algorithms. It speeds up applications that exhibit three characteristics: compute intensity, data parallelism, and data locality.
Compute-intensive applications are recognized by their compute-to-I/O ratio: when an application performs many computations on a piece of data relative to how often it loads or stores it, the application is considered compute-intensive. Implemented with the right expertise, compute-intensive algorithms can deliver results dramatically faster, reducing wait times and costs.
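As a rough illustration of the compute-to-I/O ratio, the following sketch (hypothetical code, not from any CoroWare product) contrasts a memory-bound kernel that performs one operation per data item with a compute-bound kernel that performs roughly a thousand:

```python
def low_intensity(samples):
    # Memory-bound: one multiply per load/store, so the
    # compute-to-I/O ratio is about 1.
    return [x * 2.0 for x in samples]

def high_intensity(samples, iterations=1000):
    # Compute-bound: ~1000 arithmetic operations per load/store,
    # so the compute-to-I/O ratio is about 1000. Kernels like this
    # are the ones stream processing accelerates most.
    out = []
    for x in samples:
        acc = x
        for _ in range(iterations):
            acc = acc * 0.999 + 0.001  # repeated work on the same value
        out.append(acc)
    return out
```

The second kernel spends almost all of its time computing rather than moving data, which is exactly the profile that benefits from acceleration.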
An application is data parallel when the operation it performs on a dataset can be applied to each data item independently of the others. Data parallelism allows the same operation to run on many items at once, yielding substantial speedups over a sequential approach.
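The independence property can be sketched as follows (a hypothetical example using Python's standard thread pool; real stream processors would map the work onto many hardware lanes):

```python
from concurrent.futures import ThreadPoolExecutor

def scale(x):
    # Independent per-item operation: the result for one item
    # never depends on any other item.
    return x * 0.5

samples = [0.2, -0.4, 0.8, 1.0]

# Sequential version.
seq = [scale(x) for x in samples]

# Data-parallel version: because each item is independent, the
# items can be processed concurrently in any order.
with ThreadPoolExecutor(max_workers=4) as pool:
    par = list(pool.map(scale, samples))

assert seq == par  # same results, order preserved
```

Because no item depends on another, the runtime is free to split the dataset across as many workers as the hardware provides.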
Finally, an application exhibits data locality when the data it produces is read or written at most once or twice and then never touched again. This is very common in signal-processing applications such as video and audio filters.
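Data locality is easiest to see in a streaming pipeline. In this hypothetical sketch (generator-based, for illustration only), each sample is read once, transformed, passed downstream, and never revisited:

```python
def stream_gain(samples, gain):
    # Each input item is read exactly once and yields exactly
    # one output item.
    for x in samples:
        yield x * gain

def stream_clip(samples, limit):
    # Consumes each intermediate value once; nothing is buffered
    # or re-read.
    for x in samples:
        yield max(-limit, min(limit, x))

# Chained stages form a pipeline: intermediate values flow through
# once and are discarded, never written back and re-read.
pipeline = stream_clip(stream_gain([0.5, -2.0, 1.0], gain=2.0), limit=1.0)
result = list(pipeline)  # [1.0, -1.0, 1.0]
```

Because intermediate results are consumed immediately, they can live in fast on-chip storage rather than main memory, which is what makes this access pattern so amenable to acceleration.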
If your application fits these criteria, consider a stream-processing implementation: it can deliver dramatic speedups while reducing infrastructure costs.