Before electronic components became widely available after the Second World War, laboratory automation was built by end users for specific tasks, mostly filtration, percolation, and washing operations. The earliest mention of automation in the chemical literature of the United States appeared in 1875, announcing a device that washed filtrates unattended. In the years that followed, a small number of commercial automated devices were sold, including large grinders for the preparation of coal samples. Around 1900, power stations began adopting automated carbon dioxide analysis. The development of electrical equipment for conductivity measurements enabled the first commercial automated gas detection instruments for laboratory and field use around the time of the First World War.

The growth of industrial production in the 1920s created demand for automated testing equipment, and the expanding rubber industry was among the more successful early adopters. Photoelectric cells were first used in the early 1930s to build automatic titrators, and by the 1950s automatic titration encompassed coulometric, potentiometric, and photometric devices. Combinations of chart recorders, photocells, and timers yielded other kinds of automated equipment, such as stills and fraction collectors. The first true stand-alone laboratory automation comprised the clinical chemistry analyzers introduced during the 1950s.