Projects/Summer of Code/2009/Projects/Visualization in Phonon

From KDE TechBase
Latest revision as of 19:39, 26 June 2011

whoami

Name: Martin Sandsmark

Email Address: [email protected]

Freenode IRC Nick: sandsmark

Location: Trondheim, Norway

Goal

All media players using Phonon currently lack visualizations and analyzers.

Most modern media players have some form of visualization when playing music, from simple bar displays to scripts drawing on OpenGL contexts, and I don't think music players based on Phonon should be any different.

Implementation Details

First I will either make a custom Phonon::MediaNode or alter the Phonon::AudioOutput class so that it makes raw audio data (PCM) available, along with some pre-processed data, for example Hartley transforms (reusing code from Amarok 1). All data will be made available on request. The class will most probably always cache at least one frame, so data is always available, but it will not queue up data, so the data displayed is always up to date. Some interpolation could be applied, either in the application or in Phonon, to make sure the visualization doesn't get too jumpy.
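The single-frame cache and interpolation described above could be sketched roughly as follows. This is a minimal sketch with made-up names, not real Phonon API: each new frame overwrites the old one rather than queueing, and the caller can blend the last two frames to smooth the display.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the proposed data tap; class and method names
// are illustrative assumptions, not part of Phonon.
class AudioDataTap {
public:
    // Called by the backend each time a new PCM frame is decoded;
    // overwrites the previous frame instead of queueing, so requests
    // always see up-to-date data.
    void updateFrame(const std::vector<float>& pcm) {
        m_previous = m_current;
        m_current  = pcm;
    }

    // Returns the newest frame blended with the previous one.
    // 'mix' in [0,1] lets the application smooth the display so the
    // visualization doesn't get too jumpy (1 = newest frame only).
    std::vector<float> interpolatedFrame(float mix) const {
        if (m_previous.size() != m_current.size())
            return m_current;                       // no history yet
        std::vector<float> out(m_current.size());
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] = (1.0f - mix) * m_previous[i] + mix * m_current[i];
        return out;
    }

private:
    std::vector<float> m_previous;
    std::vector<float> m_current;
};
```

Whether the interpolation lives here or in the application is an open design question; the sketch only shows that a one-frame cache is enough to make data always available without going stale.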

Then I will implement support for this in the Xine Phonon backend. This should be fairly simple, as the exported data is just raw audio data, as it is sent to most audio outputs. If there's time left over after I finish all the required parts of this GSoC, I might look into implementing support in the GStreamer backend too.
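One small piece of work any backend export would involve is converting the raw samples into a format analyzers can use. Assuming the backend delivers interleaved signed 16-bit PCM (a common case, though the exact format xine hands over is not specified here), a downmix to normalized floats might look like this; the function name and layout are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

// Downmix interleaved signed 16-bit PCM to a single channel of floats
// in [-1, 1], which is easier for analyzers to work with. The
// interleaved layout and 16-bit depth are assumptions for this sketch.
std::vector<float> toFloatMono(const std::vector<int16_t>& interleaved,
                               int channels) {
    std::vector<float> mono(interleaved.size() / channels);
    for (std::size_t i = 0; i < mono.size(); ++i) {
        long sum = 0;
        for (int c = 0; c < channels; ++c)
            sum += interleaved[i * channels + c];
        // Average the channels, then scale into [-1, 1].
        mono[i] = static_cast<float>(sum) / (channels * 32768.0f);
    }
    return mono;
}
```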

Lastly, I will implement some simple visualizations in Amarok as a demonstration. I have planned two: one like the default bar analyzer found in the default installation of Amarok 1, and a sonogram/spectrogram like the one found in Foobar2000 (and Amarok 1). The first will use the pre-processed data, and the second will use the raw PCM data. If there's time left over, I will implement a “fullscreen” visualization using projectM (projectM just needs raw PCM data, and already has rather good integration with Qt).
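The pre-processing step mentioned above, a Hartley transform, can be sketched in its naive O(N²) form. This is not the Amarok 1 code (which would use a fast O(N log N) variant); it only shows what the transform computes: the discrete Hartley transform uses the cas kernel, cas(a) = cos(a) + sin(a), and its magnitudes can feed a bar analyzer directly.

```cpp
#include <cmath>
#include <vector>

// Naive discrete Hartley transform: H[k] = sum_i x[i] * cas(2*pi*i*k/N),
// where cas(a) = cos(a) + sin(a). A sketch for clarity only; a real
// analyzer would use a fast O(N log N) implementation.
std::vector<float> hartley(const std::vector<float>& x) {
    const double kPi = 3.14159265358979323846;
    const std::size_t n = x.size();
    std::vector<float> h(n, 0.0f);
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t i = 0; i < n; ++i) {
            const double a = 2.0 * kPi * i * k / n;
            h[k] += x[i] * static_cast<float>(std::cos(a) + std::sin(a));
        }
    return h;
}
```

For a constant (DC) input, all the energy lands in bin 0 and the other bins are zero, which is a quick sanity check for any implementation.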


Tentative Timeline:

April/May

I will start reading through and playing with the Amarok 1 analyzer code. I will also read up on domain transformations and other related math, as I don't remember much of it, but still have the books. It isn't strictly necessary for this, but having a basic understanding of what is being done should be good.

May - 4. June

I will start familiarizing myself with the internal workings of Phonon. I've only used the public API before, without really looking into how things work “behind the scenes”. I will also try to get in contact with some Phonon hackers, and decide whether I should make a custom MediaNode or not.

Until June 11.

I will branch off Phonon and implement the necessary changes in the API.

Until July 2.

I will implement support in the Phonon Xine backend for exporting audio data, and also process the data to get levels and similar derived values.
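"Levels" here could mean something as simple as one RMS value per fixed-size chunk of samples, which is all a basic bar or VU display needs. The chunking scheme below is an assumption for illustration, not a decided design.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Compute one RMS level per fixed-size chunk of normalized PCM.
// Any trailing partial chunk is ignored; chunk size is an assumption.
std::vector<float> rmsLevels(const std::vector<float>& pcm,
                             std::size_t chunk) {
    std::vector<float> levels;
    for (std::size_t start = 0; start + chunk <= pcm.size(); start += chunk) {
        double sum = 0.0;
        for (std::size_t i = start; i < start + chunk; ++i)
            sum += pcm[i] * pcm[i];          // accumulate squared samples
        levels.push_back(static_cast<float>(std::sqrt(sum / chunk)));
    }
    return levels;
}
```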

Until July 30.

During this time I will implement the simple analyzers/visualizations in Amarok.

Until August 10.

Here I will go over my code, polish it, and write some extensive documentation. Any time left over will be spent first writing a “fullscreen” visualization for Amarok, and if I still have time after that, implementing support in the GStreamer backend.