The primary motivation for multi-sensor image fusion is to combine the complementary information derived from sensors of different modalities. Building on the work reported in two of our earlier papers from IRIS Passive Sensors 1996, we show how opponent-color processing and center-surround shunting neural networks can be used to develop a variety of image fusion architectures. By emulating single-opponent color cells in the retina and double-opponent color cells in primary visual cortex, we demonstrate an effective strategy for color image fusion as applied to three problems: fusion of low-light visible and thermal IR imagery for color night vision, 6-band multispectral fusion for camouflage detection, and EO/IR/SAR multi-modal fusion from separate sensor platforms. We have also developed a real-time visible/IR fusion processor built from multiple C80 DSP chips on commercially available boards, and we use it in conjunction with the Lincoln Lab low-light CCD and an uncooled IR camera. Limited human factors testing of visible/IR fusion has shown improved human performance with our color fused imagery as compared to alternative fusion strategies or either single image modality alone. We conclude that fusion architectures that match opponent-sensor contrasts to human opponent-color pathways will yield fused image products of high image quality and utility.
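The center-surround shunting operation underlying the opponent-sensor contrasts above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a Grossberg-style shunting network at steady state, x = (B·C − D·S) / (A + C + S), where the center signal C comes from one sensor and the surround signal S is a Gaussian-blurred version of the other sensor's image. The function name `shunting_opponent`, the parameter values, and the NumPy-only Gaussian blur are all illustrative choices.

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur using only NumPy (illustrative helper)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    # Convolve rows, then columns, with the normalized 1-D kernel.
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def shunting_opponent(center_img, surround_img, A=1.0, B=1.0, D=1.0, sigma=2.0):
    """Steady-state response of a center-surround shunting network.

    Solves 0 = -A*x + (B - x)*C - (D + x)*S for x, where C is the
    on-center input (one sensor) and S is the off-surround input
    (a Gaussian-blurred image from the other sensor). The divisive
    A + C + S term keeps the response bounded in (-D, B), giving the
    contrast normalization characteristic of shunting dynamics.
    """
    C = np.asarray(center_img, dtype=float)
    S = _gaussian_blur(np.asarray(surround_img, dtype=float), sigma)
    return (B * C - D * S) / (A + C + S)

# Example: a single-opponent "visible-center / IR-surround" channel.
rng = np.random.default_rng(0)
vis = rng.random((32, 32))   # stand-in for a low-light visible frame
ir = rng.random((32, 32))    # stand-in for a registered thermal IR frame
vis_plus_ir_minus = shunting_opponent(vis, ir)
```

In a full fusion architecture, several such opponent channels (e.g. visible-vs-IR and IR-vs-visible) would be remapped to the R, G, B display channels; the specific channel assignment here is a hypothetical example, not the mapping used in the paper.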