Trying to understand music at scale

The Echo Nest is a music intelligence platform that has collected and analyzed data about over two million artists and 35 million songs. We've crawled billions of documents, parsed every phrase written about any band you can think of, and can tell you the pitch and timbre of every note in almost every song ever recorded. All of that data, in one way or another, invisibly affects the music experience of over 150 million people every month, from recommenders to search to fingerprinting to discovery. We've done it using pragmatic, scalable approaches to machine learning, natural language processing, and information retrieval, and I'll describe the particular challenges of doing quality "machine listening" at scale while recognizing how stubbornly music itself resists computational analysis.

Brian Whitman (The Echo Nest) teaches computers how to make, listen to, and read about music. He received his doctorate from the Machine Listening group at MIT’s Media Lab in 2005 and his master’s degree in Computer Science from Columbia University’s Natural Language Processing group in 2000. As co-founder and CTO of The Echo Nest, Brian architects an open platform with billions of data points about the world of music: from the listeners to the musicians to the sounds within the songs.

Date and Time
Wednesday October 12, 2011 4:30pm - 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Host
Rebecca Fiebrink

Contributions to and/or sponsorship of any event do not constitute departmental or institutional endorsement of the specific program, speakers, or views presented.
