Archive

Intel switches from C31 DSP to 486 Host-Processing Strategy


The question was too large for Ralph to have simply thrown it out casually in conversation, but I do know he was the first to ask me “So, instead of the C31, can you write it for the 486?”

My answer birthed the entire software audio revolution.

Automatic Music Synthesizer Voice Optimizer


Music-Driven Allocation


Variable Resource Allocation


Intel Proposal PC Software Synthesis v2


Intel Invite

[Gallery: 1992 bio; 920220 INTEL DEMO SJ; two scans]

Avram sends Stanley to the PC Enhancement Division of Intel's Architecture Development Lab in Hillsboro, Oregon, to teach MIDI.

Mikado DSP Board Planning


Intel, a large OEM computer provider, needed to expand its 386-based systems to at least meet the MPC 1991 standard. Intended to provide the needed services, the Mikado was essentially a DSP card based on a Texas Instruments TMS320C31. Its minimum requirement was to send and receive faxes, and that code was being written. In November of 1991, Vice President Avram Miller noticed that among the other planned features there was no specification of how audio or synthesis would actually perform on the system.

Rather than Santa Clara, the impetus for my consulting actually came from Hillsboro, Oregon, home of the Intel Architecture Laboratory (IAL) and its strategists. My job became specifying what the Mikado audio and synthesizer system should do, analyzing whether it could in fact do it, and, assuming the answer was yes, finding a company to write the code.

To aid in this research I enlisted three of my most experienced friends: Fred Malouf, a brilliant programmer and fine musician; Dave Smith of Sequential Circuits; and Chris Chafe from Stanford's CCRMA. We met several times to discuss architecture and count the actual DSP instructions it would take to process different types of voice patches. At some point I went to TI itself to talk about the 'C31. Pessimism began to set in; we all agreed it was only marginally powerful enough to emulate a Sound Blaster.
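That instruction counting amounts to a back-of-envelope budget: divide the DSP's usable throughput by the per-voice, per-sample cost of each patch type. The sketch below illustrates the shape of the calculation; the specific figures (the C31's throughput, the overhead fraction, the sample rate, and the per-voice instruction counts) are my illustrative assumptions, not the numbers from our 1991 worksheets.

```python
# Back-of-envelope polyphony estimate for a single-DSP synthesizer.
# All constants are illustrative assumptions, not the 1991 figures.

SAMPLE_RATE = 22050    # Hz; in the ballpark of MPC-era output rates
DSP_IPS = 16.5e6       # ~instructions/sec for a 33 MHz TMS320C31 (assumed)
OVERHEAD = 0.25        # fraction reserved for mixing, I/O, and the OS (SPOX)

def max_voices(instr_per_voice_sample):
    """How many simultaneous voices fit, given per-sample cost per voice."""
    budget = DSP_IPS * (1.0 - OVERHEAD)            # instructions/sec for voices
    per_voice = instr_per_voice_sample * SAMPLE_RATE  # instructions/sec, 1 voice
    return int(budget // per_voice)

# Hypothetical per-sample instruction costs for different patch types:
for patch, cost in [("2-op FM", 40), ("wavetable", 60), ("sampled+filter", 120)]:
    print(f"{patch}: {max_voices(cost)} voices")
```

The point of the exercise is how fast the voice count collapses as patches get richer: doubling or tripling the per-voice cost drops polyphony from the teens into single digits, which is why the 'C31 looked only marginally adequate.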

Just as significantly, DSP built on the general-purpose industrial SPOX operating system was certainly not the way pro audio or synthesis was being done; "everybody" used custom chips in some combination of analog and digital fashion. Scott Peer, also from Sequential and one of the genuinely nicest guys you are likely to meet, contributed several insights which helped us tune our calculations. He also set in motion an email that was lost for about six months and, when found, created an amazing collaboration that profoundly shaped the convoluted path toward sealing the deal.

Planning began for me to go to Oregon and lay out the status of, and case for, MIDI and synthesis. Very fortunately, I had just completed a few years of service as Curriculum Director for the MIDI program at Cogswell College, and thus had in hand all the lectures I needed: graphically intensive and pre-tested in dozens of courses.
