Chuck Wolfe has 30 years of experience in chemical laboratory and pilot plant operations. He has been involved in the design and construction of high- and low-pressure reactor systems, the development of analytical methods, and the synthesis of novel, highly reactive compounds. He is familiar with inert atmosphere techniques, including glassware and glove box systems. He also has extensive experience in various techniques for the preparation of heterogeneous catalysts, including incipient wetness, co-precipitation, evaporation, ion exchange, and passivation.
MATRIC is using "big data" computing to help the Department of Energy's (DOE) National Energy Technology Laboratory (NETL) perform and enhance data mining, analysis, simulation, and modeling of high-volume, high-velocity, and high-variety spatial and non-spatial data. Big data computing is a combination of hardware (computing clusters) and software technologies (Hadoop, Spark, MapReduce) that makes it possible to realize value from datasets too large for a single computer to process. Big data computing shares similarities with high-performance computing in that both approaches use clusters of computers (cloud or locally hosted) to distribute and accomplish complex tasks. However, big data computing is specifically designed to process and analyze larger-scale datasets.
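The MapReduce pattern named above can be illustrated with a minimal sketch in plain Python: a toy word count over a small in-memory list of documents, standing in for a large dataset whose partitions would, on a real cluster, be mapped and reduced across many nodes in parallel. The document list and function names here are illustrative, not part of any MATRIC or NETL system.

```python
from collections import defaultdict
from itertools import chain

# Hypothetical toy corpus standing in for a large, partitioned dataset.
documents = [
    "big data computing",
    "data mining and data analysis",
    "high performance computing",
]

def map_phase(doc):
    # Map step: emit (word, 1) pairs for each word in one partition.
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # Reduce step: sum the counts for each key (word) across partitions.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# On a cluster, map tasks run on many machines at once; here we simulate
# that by mapping over the list locally and flattening the results.
mapped = chain.from_iterable(map_phase(d) for d in documents)
word_counts = reduce_phase(mapped)
print(word_counts["data"])  # "data" occurs 3 times across the corpus
```

Frameworks such as Hadoop and Spark supply the same map/reduce structure, but add the distribution, scheduling, and fault tolerance needed to run it over datasets far larger than one machine's memory.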