
Where are the emotional cues in music?

Werner Van Belle1* - werner@yellowcouch.org, werner.van.belle@gmail.com
Bruno Laeng2 - bruno.laeng@psykologi.uio.no

1- Yellowcouch;
2- Department of Psychology; University of Tromsø; Tromsø; Norway
* Corresponding author

Abstract: What are the underlying dimensions that give structure and meaning to music? To answer this question, we aim to integrate methodological techniques from other disciplines (signal processing & mathematics) into the field of psychology and psycho-physiology. Our main goal is to measure musical parameters in a mathematical, computational way, relate them to human judgments of a song's emotional content, and in addition compare these to psycho-physiological measures (pupillometry and EEG).

Keywords:  psychoacoustics, emotional cues, audio content extraction, BPM, tempo, rhythm, composition, echo, spectrum, sound color
Reference:  Werner Van Belle, Bruno Laeng; Where are the emotional cues in music?; Ideas in Psychology; YellowCouch Scientific; May 2006


Table Of Contents
1. Background
2. Methods
    2.1 Procedures
    2.2 Design & Analysis
    2.3 Pupillometry
    2.4 EEG Measurement
    2.5 BpmDj
3. Specific Projects
    3.1 Project: Influence of participant mood on music rating
        3.1.1 Tests
        3.1.2 Design & Analysis
    3.2 Project: Pupillometric responses to additional dimensions
        3.2.1 Background
        3.2.2 Tests
        3.2.3 Design & Analysis
    3.3 Project: Complex Emotional Content
    3.4 Project: The relation between music and EEG
        3.4.1 Tests
        3.4.2 Design & Analysis
    3.5 Project: Synthesis of emotional sound
4. Dissemination of Results
5. Usefulness
    5.1 Design of music therapy
    5.2 Commercial
6. Ethics
7. Professional Position
    Bibliography

1. Background

Most of us feel that music is closely related to emotion, in that music expresses emotions and/or can elicit emotional responses in listeners. There is no doubt that music is a human universal: every known human society has made music from its very beginnings. One fascinating possibility is that, although different cultures may differ in how they make music and may have developed different instruments and vocal techniques, we may all perceive music in a very similar way. That is, there may be considerable agreement in what emotion we associate with a specific musical performance [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13].

Musical emotions can be characterized in very much the same way as the basic human emotions. Happy and sad emotional tones are among the most commonly reported in music, and these basic emotions may be expressed, across musical styles and traditions, by similar structural features. Among these, pitch and rhythm seem basic; research on infants has shown that these structural features can be perceived already early in life [14]. Recently, it has been proposed that at least some portion of the emotional content of a musical piece is due to the close relationship between music and the vocal expression of emotions (either as used in speech, e.g., a sad tone, or in non-verbal expressions, e.g., crying). In other words, the accuracy with which specific emotions can be communicated depends on specific patterns of acoustic cues [15]. This account can explain why music is considered expressive of certain emotions. Specifically, some of the relevant acoustic cues that pertain to both domains (music and verbal communication, respectively) are: speech rate/tempo, voice intensity/sound level, and high-frequency energy. Speech rate/tempo may be most related to basic emotions such as anger and happiness (when it increases) or sadness and tenderness (when it decreases). Similarly, high-frequency energy also plays a role in anger and happiness (when it increases) and sadness and tenderness (when it decreases). Different combinations and/or levels of these basic acoustic cues could result in several specific emotions. For example, fear may be associated in speech or song with low voice intensity and little high-frequency energy, whereas panic may be expressed by increasing both intensity and energy.

Juslin & Laukka [15] observed that the following dimensions relate to the emotional expression of music and/or speech: pitch or $F_{0}$ (i.e., the strongest cyclic component of a waveform), contour or intonation, vibrato, intensity or loudness, attack or the rapidity of tone onset, tempo or the velocity of music, articulation or the proportion of sound to silence, timing or rhythm variation, and timbre or the high-frequency energy of an instrument or a singer's formant.

Certainly, the above is not an exhaustive list of all the relevant dimensions of music or of the relevant dimensions of emotional content in music. Additional dimensions might include echo, harmonics, interval structure, melody and low frequency oscillations.

2. Methods

The proposed project will gather psychoacoustic information by measuring the emotional responses of a group of participants to various short musical pieces. Signal-processing expertise will be relied upon to objectively analyze musical excerpts for various parameters believed to be emotional cues. Below we discuss general procedures and tools that will be used; afterwards, specific projects are described.

2.1 Procedures

The stimuli we will present to the listeners will be short (a couple of measures) and will capture the essence of the musical piece. Participants will be asked to rate on a step-scale how well a particular emotion describes the sound fragment heard. Responses and latency will be recorded with mouse clicks from participants on a screen display implemented in an extension of BpmDj.

Ratings will be made by use of 24 antonym pairs. Most antonyms will name emotions and moods (e.g., happy-sad), whereas some items will mention objects (e.g., sun-moon) or non-acoustic attributes (e.g., bright-dark). Lines of equal length (10 cm) will be drawn between the two opposites. Participants will be asked to indicate the degree of appropriateness of the probed expression by marking with a pencil a position on the line. If the participants find any of the sounds to be neutral in meaning with respect to a specific pair of opposites, they should mark the center of the line.

All experiments will use within-subjects designs. A Latin square design will be used to order the presentations of the intervals to prevent either fatigue or practice effects. The sounds will be played on a CD-player and heard through stereo headphones. The volume will be the same in all experiments and set to the level of normal speech. The listeners will sit comfortably in a quiet room while listening to the sound fragments.

2.2 Design & Analysis

The selection of musical pieces presented to the participants will be determined using a 'design of experiment' methodology [16, 17, 18]. This allows one, before conducting the experiment, to vary the relevant factors systematically so as to optimize the results that can be obtained through that specific data-set. A design of experiment relies on design variables and response variables. In our experiments, the design variables will encompass a set of measured song properties, while the response variables will encompass the emotional response. Cluster analysis and principal component analysis will be applied to the song collection to determine classes of sounds that can be used as input into the design of experiment.
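As an illustration, the class determination could be sketched as follows. This is a minimal NumPy sketch, not the project's actual analysis pipeline: the feature columns, the number of components, and the number of classes are all hypothetical choices made for the example.

```python
import numpy as np

def select_design_classes(features, n_components=2, n_classes=3, seed=0):
    """Cluster songs in a reduced feature space to pick design classes.

    features: (n_songs, n_params) array of measured song properties
    (tempo, loudness, ... -- hypothetical columns for illustration).
    Returns one class label per song.
    """
    rng = np.random.default_rng(seed)
    # Principal component analysis: project onto the top eigenvectors
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ vt[:n_components].T
    # Simple k-means clustering in the reduced space
    centers = Z[rng.choice(len(Z), n_classes, replace=False)]
    for _ in range(50):
        d = np.linalg.norm(Z[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = Z[labels == k].mean(axis=0)
    return labels
```

The resulting class labels would then feed the design of experiment: one draws songs from each class so that the design variables are varied systematically rather than sampled haphazardly.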

After conducting the experiment, analysis of variance will identify the factors that most influence the results, as well as the existence of interactions and synergies between factors [19, 20].

2.3 Pupillometry

Pupillometry will be performed by means of the Remote Eye Tracking Device, R.E.D., built by SMI-SensoMotoric Instruments in Teltow (Germany). Analyses of recordings will be performed with the iView software, also developed by SMI. The R.E.D. II can operate at a distance of 0.5-1.5 m and the eye-tracking sample rate is 50 Hz, with a resolution better than 0.1 degree. The eye-tracking device operates by determining the positions of two elements of the eye: the pupil and the corneal reflection. The sensor is an infrared-sensitive video camera typically centered on the left eye of the subject. The coordinates of all the boundary points are fed to a computer that, in turn, determines the centroids of the two elements. The vector difference between the two centroids is the "raw" computed eye position. Pupil diameters are expressed in number of video pixels of the horizontal and vertical diameter of the ellipsoid projected onto the video image by the eye pupil at every 20 ms sample.
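The raw eye position and pixel-based pupil diameter described above can be sketched numerically. This is an illustration of the geometry only; the actual computation is performed inside SMI's iView software, and the boundary-point representation used here is an assumption.

```python
import numpy as np

def raw_eye_position(pupil_boundary, cr_boundary):
    """Raw eye position: the vector difference between the centroid of
    the pupil boundary points and the centroid of the corneal-reflection
    boundary points (both given as (N, 2) arrays in pixel units)."""
    pupil_c = np.asarray(pupil_boundary, dtype=float).mean(axis=0)
    cr_c = np.asarray(cr_boundary, dtype=float).mean(axis=0)
    return pupil_c - cr_c

def pupil_diameters(pupil_boundary):
    """Horizontal and vertical diameter (in video pixels) of the ellipse
    projected by the pupil, taken as the extent of its boundary points."""
    p = np.asarray(pupil_boundary, dtype=float)
    return p[:, 0].max() - p[:, 0].min(), p[:, 1].max() - p[:, 1].min()
```

Applying this per 20 ms sample yields the diameter time series that the pupillometric analyses would operate on.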

2.4 EEG Measurement

The psychology department at the University of Tromsø recently acquired a 64 channel BrainAmp MRPlus system (model BP-1310) from Algol Pharma (Finland). The hardware allows simultaneous recording of EEG and ERP. The software associated with the system (BrainVision II) provides an array of tools to inspect and analyze the data. Most important is the possibility to export the raw data, so that we can import it into BpmDj. Analysis tools present in the software include standard Fourier and wavelet analysis; the wavelet analysis provides, among others, Morlet and Mexican-hat wavelets.

2.5 BpmDj

The open source software BpmDj [21, 22] analyzes and stores a large number of soundtracks. The program has been developed since 2000 as a hobby project. It contains advanced algorithms to measure spectrum, tempo, rhythm, composition and echo characteristics. Tempo module - five different tempo-measurement techniques are available, of which auto-correlation [23] and ray-shooting [24] are the most appropriate; for other techniques see [25, 26, 27, 28]. All analyzers in BpmDj make use of the psychoacoustic Bark scale [29, 30, 31]. The spectrum or sound color is visualized as a 3-channel color (red/green/blue) based on a Karhunen-Loève transform [32] of the available songs. Echo/delay module - measuring the echo characteristics is based on a distribution analysis of the frequency content of the music, enhanced using a differential auto-correlation [33]. Rhythm/composition modules - to calculate rhythm and composition properties, the song is split into its individual measures. The rhythm property is the superimposition of all those measures; the composition property measures the probability of a content change after $x$ measures. From an end-user point of view, the program supports distributed analysis, automatic mixing of music, and distance metrics for all analyzers, as well as clustering and automatic classification based on this information. Everything is tied together in a Qt [34] based user interface. BpmDj will form the base platform in which musical parameters will be measured.
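To give a flavor of the tempo analysis, a simplified autocorrelation-based BPM estimate might look like this. This is an illustrative sketch of the general idea in [23], not BpmDj's actual implementation; the envelope frame rate and the 60-180 BPM search range are choices made for the example.

```python
import numpy as np

def estimate_bpm(signal, sample_rate, lo=60.0, hi=180.0):
    """Rough tempo estimate via autocorrelation of the energy envelope."""
    # Energy envelope: rectified signal, block-averaged into ~10 ms frames
    hop = max(1, sample_rate // 100)
    n = len(signal) // hop
    env = np.abs(np.asarray(signal[:n * hop], dtype=float))
    env = env.reshape(n, hop).mean(axis=1)
    env -= env.mean()
    # Autocorrelation over candidate beat periods between lo and hi BPM
    frame_rate = sample_rate / hop
    lags = np.arange(int(frame_rate * 60 / hi), int(frame_rate * 60 / lo) + 1)
    ac = [np.dot(env[:-l], env[l:]) for l in lags]
    best_lag = lags[int(np.argmax(ac))]
    return 60.0 * frame_rate / best_lag
```

For a click track with clicks every half second, the autocorrelation peaks at the corresponding envelope lag and the function returns roughly 120 BPM.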

Figure: The analysis produced by BpmDj decomposes a signal into position (horizontal), frequency (vertical) and amplitude (color) components. Unique to this analysis is its high accuracy in both time and frequency. Harmonic content is preserved and visible (the horizontal lines with red and blue dots).

3. Specific Projects

3.1 Project: Influence of participant mood on music rating

Music can not only elicit emotional responses; many observations also indicate that one's mental state influences music preference and thus symptomatically reveals mental aspects of the listener [35, 36, 37, 38]. This means that one cannot simply relate audio cues to a reported emotion, since the participants' emotional state will resist or accept the 'cued' emotion differently. This specific project aims to understand the influence of the participants' initial mood on their reports.

3.1.1 Tests

Normal participants (equally distributed male/female, N=100) will be drawn from the student population. Before and after the test we will assess their mental state using a merge of different questionnaires and also ask them to list a number of their favorite songs. Limited biographic information will be gathered, including gender, handedness, age and the presence of a neurological and/or psychiatric history. Information regarding the music preference and musical expertise of each participant will also be gathered. We will classify every participant into one of three groups: "Naïve" listeners, without any musical training; "Amateurs", who have learned to play an instrument since childhood, via formal training, or have become autodidacts at some point in their lives; and "Professionals", who have at least some training at the conservatory and still practice their instrument (at least four hours a day).

Short fragments of songs (N=50) will be presented and questions asked regarding the emotional content. The response variables will be based on 8 basic emotions [39, 40, 41] presented in a semantic differential questionnaire (as explained in Procedures in the Methods section). The sounds will be heard through Philips SBC HP 840 headphones.

3.1.2 Design & Analysis

The design variables will be a well chosen set of songs targeting pitch, loudness, tempo, articulation, rhythm variation, timbre and high-frequency energy (according to [15]). These variables will be mathematically measured for a large set of songs (>20000 songs) using BpmDj and then a subset will be chosen based on the various multi-variate distributions and clusters.

The questionnaire will be created by merging the Pittsburgh Sleep Quality Index, the Epworth Sleepiness Scale, the Beck Depression Inventory II, the Beck Anxiety Inventory, the Profile of Mood States [42], the State Anxiety Inventory [43] and Plutchik's Emotions Profile Index [44]. Analysis of the results will reveal which relations exist between initial participant mood and observed emotion, and how strong they are.

3.2 Project: Pupillometric responses to additional dimensions

A second experiment will rely on new design variables including low frequency oscillations, harmonic classes, key, echo, contour, vibrato, attack and velocity. We will furthermore measure pupil responses throughout the experiments.

3.2.1 Background

Emotions are not only subjective, mental states; they are also physiological states that can be observed externally and measured. Many of the physiological manifestations of emotions are mediated by the autonomic nervous system [45, 46, 47, 48], and there are systematic changes in various physiological responses mediated by this system to naturally occurring acoustic stimuli [49]. Given that the pupil of the eye is also controlled by the autonomic nervous system [50], monitoring changes of pupil diameter can provide a window onto the emotional state of an individual.

3.2.2 Tests

The experiment will be set up similarly to the first experiment, but with additional procedures to measure pupil response. A standard calibration routine will be conducted at the beginning of each session. During the calibration process, the system 'learns' the relationship between eye movement and gaze position. Specifically, each participant will be asked to gaze at the screen while a calibration map appears on screen, consisting of nine standard points marked as white crosses on a blue background. Each participant will be asked to gaze at each one of the crosses on the screen in a particular order, and the eye position is then recorded. Subsequently, a blank screen, consisting of a uniform background in a light blue color, will replace the calibration map. This will remain on screen during the experiments. Each participant will be tested in a windowless laboratory room, and artificial lighting will be kept constant for each participant and across sessions.

3.2.3 Design & Analysis

The new parameters will be mathematically measured using new BpmDj modules. The key/scale module will measure the occurrence of chords by measuring individual notes. To provide information on the scale (we assume an equal-tempered scale), this module will also measure the tuning/detuning of the different notes. The dynamic module will measure energy changes at different frequencies. First-order energy changes provide attack and decay parameters of the song; second-order energy changes might provide information on song temperament. The LFO module will measure low frequency oscillations between 0 and 30 Hz using a digital filter on the energy envelope of a song. The harmonic module will inter-relate different frequencies in a song by investigating the probability that specific frequencies occur together. A Bayesian classification of the time-based frequency and phase content will determine different classes. Every class will describe which attributes (frequencies and phases) belong together, thereby providing a characteristic sound or waveform of the music. This classification will allow us to correlate harmonic relations to the perception of music. Autoclass [51, 52] will perform the Bayesian classification. The melody module will rely on a similar technique by measuring relations between notes in time. All of the above modules need to decompose a song into its frequency content. To this end, we will initially make use of a sliding-window Fourier transform [23]. Later in the project, integration of multi-rate filter banks will achieve a more accurate decomposition [53, 54] by relying on various wavelet bases [55] (see figure).
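The idea behind the LFO module can be illustrated as follows. The proposal describes a digital filter on the energy envelope; here, as a stand-in, an FFT of the envelope exposes the same 0-30 Hz band, and the envelope frame rate is a parameter choice made for this example only.

```python
import numpy as np

def lfo_spectrum(signal, sample_rate, env_rate=100, max_hz=30.0):
    """Magnitude spectrum of a song's energy envelope below max_hz.

    The envelope is the mean squared signal per frame, sampled at
    env_rate Hz (which must exceed 2*max_hz to satisfy Nyquist)."""
    hop = sample_rate // env_rate
    n = len(signal) // hop
    env = (np.asarray(signal[:n * hop], dtype=float) ** 2)
    env = env.reshape(n, hop).mean(axis=1)
    env -= env.mean()                      # drop the DC component
    mags = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(n, d=1.0 / env_rate)
    keep = freqs <= max_hz
    return freqs[keep], mags[keep]
```

For a tone amplitude-modulated at 5 Hz, the returned spectrum peaks at 5 Hz, which is exactly the kind of low frequency oscillation the module is meant to quantify.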

3.3 Project: Complex Emotional Content

Another experiment investigates how well complex emotional content is recognized in music. It has been recognized that combinations of basic emotions yield more complex (less powerful) emotions [39, 40, 41]; it is therefore of interest to verify whether these $2^{nd}$-level or $3^{rd}$-level emotions can be found in music, and whether they are consistent with what one would expect. This experiment is set up using similar procedures to the previous ones. The main difference is the antonyms we will use (this time opposing complex emotions instead of basic emotions). Two different designs will be used. First, we will create new songs by perfectly superimposing existing songs with a strong, well-known basic emotional content. This will be done using the automixer feature of BpmDj. Secondly, we will choose songs that have multiple, almost equally strong, basic emotional reports and see whether probing the combined emotion yields a significantly better response.
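For the first design, the superimposition itself is conceptually simple once fragments are tempo-matched. The toy function below is a stand-in for BpmDj's automixer, which additionally handles tempo matching and beat alignment; here we assume the fragments are already aligned.

```python
import numpy as np

def superimpose(a, b):
    """Superimpose two already tempo-matched fragments by summing the
    samples and rescaling the mix so that it does not clip."""
    n = min(len(a), len(b))
    mix = np.asarray(a[:n], dtype=float) + np.asarray(b[:n], dtype=float)
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 0 else mix
```

The perceptual question of the experiment is then whether the combined fragment carries the combined emotion, not merely the sum of the signals.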

3.4 Project: The relation between music and EEG

Low frequency oscillations, as observed in electroencephalograms, carry information about the modus operandi of the brain. Delta-waves (below 4 Hz), theta-waves (4 to 8 Hz), alpha-waves (8 to 12 Hz) and beta-waves (13 to 30 Hz) all relate to some form of attention and focus. They also seem to relate to mood [56, 57]. The brainwave pattern can synchronize to external cues in the form of video and audio [58, 59]. Interestingly, major and minor chords produce second-order undertones that beat at frequencies of 10 Hz and 16 Hz respectively, which places them in distinct brainwave bands (regardless of the multiplication factor). Rhythmical energy bursts also seem to influence the brainwave pattern (techno and 'trance' vs 'ambient'). We believe that both a quantification of the low frequency content and measurement of long-period energy oscillations in songs can provide crucial input into our study [60, 61].

3.4.1 Tests

A small number of participants (N=10) will be connected to the EEG measurement device and then asked to listen to full-length songs (with a maximum of 3 minutes per song). Every 15 minutes a pause of 5 minutes will be introduced (totaling 24 songs over 2 hours). To avoid electromagnetic interference from headphones, standard loudspeakers will be used.

3.4.2 Design & Analysis

The music will be chosen primarily for its low frequency behavior. Analysis will include cross-correlation between the music's energy envelope and the different EEG channels. Further analysis will relate EEG imbalance and laterality effects (e.g., right-hemisphere potentials versus left-hemisphere potentials) to previously reported emotions as well as expected emotions.
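The envelope-EEG cross-correlation could be computed along these lines. This is a sketch under the assumption that both signals have been resampled to a common rate; the lag window and normalization are choices made for the example, not the project's specified analysis.

```python
import numpy as np

def lagged_correlation(envelope, eeg, max_lag):
    """Normalized cross-correlation between a song's energy envelope and
    one EEG channel, for lags from -max_lag to +max_lag samples.
    Returns (lags, correlations); a peak at a positive lag means the
    EEG follows the envelope by that many samples."""
    x = (np.asarray(envelope, float) - np.mean(envelope)) / np.std(envelope)
    y = (np.asarray(eeg, float) - np.mean(eeg)) / np.std(eeg)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.empty(len(lags))
    for i, l in enumerate(lags):
        if l >= 0:
            a, b = x[:len(x) - l] if l else x, y[l:]
        else:
            a, b = x[-l:], y[:len(y) + l]
        corr[i] = np.dot(a, b) / len(a)
    return lags, corr
```

Scanning the correlation over lags rather than at lag zero only matters here because any EEG entrainment to the music would appear with a physiological delay.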

3.5 Project: Synthesis of emotional sound

After establishing the prominent variables related to music, we will be in a position to create music relying on this knowledge. Instead of selecting appropriate songs, music will be created to test specific parameters. This is especially important for parameters that cannot easily be isolated using standard songs. Parameters of interest include a) dynamics, such as fortissimo, forte, mezzo-forte, mezzo-piano, piano and pianissimo; b) attack and sustain factors; c) microtonality, to measure the impact of detuning; d) melodic tempo; e) song waveforms, altered by relying on different instruments; f) echo and delay characteristics, influenced in the sound-production phase; g) rhythm (time signatures, poly-rhythmical structures); h) key (major, minor & modal scales); and i) melody (ambitus and intervals). By creating a number of etudes, presented in different styles, we will validate our research. To this end, we will cooperate with the professional musician Geir Davidsen at the Northern Norwegian Music Conservatory.

4. Dissemination of Results

The planned research will primarily be of interest to international research communities in psychology and computer science. Articles based on the research will be submitted to top-end journals in cognitive science and cognitive neuroscience, such as Cognitive Psychology, Cognition, Cognitive Science, the Journal of Experimental Psychology, or Perception & Psychophysics. In addition, findings from this research will be presented at international conferences. Methods and software produced in this project will be open-sourced to attract international attention from computer scientists working in the field of database meta-data extraction and content extraction. The unique interdisciplinary nature of this project and the highly interesting concept of 'emotion and music' allow us to present our findings to larger audiences [62].

5. Usefulness

5.1 Design of music therapy

The presented research might improve the design of music therapy. The effect of music as a psycho-therapeutic tool has been recognized for a long time. Clearly, music can have a soothing and relaxing effect and can enhance well-being by reducing anxiety, enhancing sleep, and by distracting a patient from agitation, aggression, and depression. We briefly touch upon various health-related areas. Depression - depression and dementia remain two of the most significant mental health issues for nursing home residents [63]. Nowadays, there is a growing interest in the therapeutic use of music in nursing homes. A widely shared conclusion is that music can supplement medical treatment and has a clear potential for improving nursing-home care. Music also seems to improve major depression [64]. Anxiety - it would seem that, in general, affective processes are critical to understanding and promoting lasting therapeutic change. Insomnia - [65] showed that music significantly improves sleep quality in older adults. Pain reduction - music therapy seems an efficient treatment for different forms of chronic pain, including fibromyalgia, myofascial pain syndromes, polyarthritis [66], chronic headaches [67] and chronic low back pain [68]. Music seems to affect especially the communicative and emotional dimensions of chronic pain [66]. Sound-induced trance also enables patients to distract themselves from their condition, and may result in pain relief 6-12 months later [67].

The general impact of music on the nervous system extends to the immune system. Research by [69] indicates that listening to music after a stressful task increases norepinephrine levels. This is in agreement with [70], who verified the immunological impact of drum circles. Drum circles have been part of healing rituals in many cultures throughout the world since antiquity. Composite drumming directs the immune system away from classical stress and results in increased dehydroepiandrosterone-to-cortisol ratios, natural killer cell activity and lymphokine-activated killer cell activity, without alteration in plasma interleukin 2 or interferon-gamma. One area of application for these effects could be cancer treatment. Autologous stem cell transplantation, a common treatment for hematologic malignancies, causes significant psychological distress due to its effect on the immune system. A study by [71] reveals that music therapy reduces mood disturbance in such patients. The fact that music can be used as a mood induction procedure, with the required physiological effects, makes its use relevant for pharmaceutical companies [72]. Positive benefits of music therapy have also been observed in Multiple Sclerosis patients [73].

These positive aspects of music have led to the use of 'music therapy' as an aid in the everyday care of patients in, for example, nursing homes [74, 75]. Understanding which musical aspects lead to an emotional response might lead to the creation of efficient play-lists and a more scientific way of assessing and selecting songs. Depending on the results of the presented work, we might be able to give different patients recommendations on what kind of music might be suitable for them. Creation of typical 'likes-a-lot' and 'should-listen-to' play-lists per emotional state might enhance the psychotherapist's toolbox.

5.2 Commercial

The commercial relevance of this project lies in the possibility to search the Internet for similar musical pieces. This can further be extended into semantic peer-to-peer systems, in which file-sharing programs cluster songs onto machines whose owners will probably like the music. Clustering songs based on their emotional connotations is also indispensable in database systems, and recognizing similar emotions is a first step in data content extraction algorithms. Radio stations and DJs might be able to generate play-lists and do interesting things regarding the 'emotion' of a play-list; e.g., it might be possible to make a transition from sad to happy, from low-energy to high-energy, and so on. Further commercial relevance can be found in plugins for existing software such as Cubase [76], Cakewalk [77], Protools [78] and others.

6. Ethics

All participants in the experiments will participate on a voluntary basis and after written informed consent. They will be informed that they can interrupt the procedure at any time, without having to give a reason for it and at no costs for withdrawing. In none of the experiments will sensitive personal information or names or other characteristics that might identify the participant be recorded. All participants will be thoroughly debriefed after the experiment.

7. Professional Position

The computational processes underlying music and emotion are a little-investigated topic, and interdisciplinary collaborations on this topic are rare. Hence, one way the present proposal is of relevance is the way it combines computational techniques with the empirical study of human responses, in concert with explicit compositional methods and musicological structure.

Dr. Werner Van Belle - originally from Belgium, now lives in Norway, where he changed his career from pure computer science to signal processing in life sciences. In his spare time he is passionate about digital signal processing for audio applications. Of particular relevance for this proposal is his work on mood induction [79] and sound analysis [21, 24, 62, 33].

Prof. Bruno Laeng - has a 100% academic position (50% research appointment) in the biologisk psykologi division of the Department of Psychology. Recent quality evaluations from Norges Forskningsråd show that the division of biologisk psykologi at Universitetet i Tromsø (UiTø) received the highest level of evaluation within the institute for psychology from the examining committee (i.e., very good). Moreover, in the year 2000 this applicant was awarded the Pris til yngre forsker from UiTø.

The project also benefits from a collaboration with 'het Weyerke', a Belgian service center/nursing home for the mentally handicapped and the elderly. They are mainly interested in music as a stimulating and soothing mechanism to alleviate stress and depressive symptoms in elderly residents with dementia. Their long-standing tradition in this matter will provide input into our study.

Bibliography

1.Interval distributions, mode, and tonal strength of melodies as predictors of perceived emotion M. Costa, P. Fine, and P.E. Ricci Bitti Music Perception, 22(1), 2004
2.Psychological connotations of harmonic intervals M. Costa, P.E. Ricci Bitti, and F.L. Bonfiglioli Psychology of Music, 28:4-22, 2000
3.Children's artistic responses to musical intervals L.D Smith and R.N. Williams American Journal of Psychology, 112(3):383-410, 1999
4.The language of music D. Cooke London: Oxford University Press, 1959
5.Music and emotion: Theory and research P.N. Juslin chapter Communication emotion in music performance: A review and theoretical framework, pages 309-337. Oxford: Oxford University Press, 2001
6.The Routledge Companion to Aesthetics M. DeBellis London: Routledge, 2001
7.Music, mind, and brain S. Makeig chapter Affective versus analytic perception of musical intervals., pages 227-250. New York; London: Plenum Press, 2000
8.Sound sentiment P. Kivy Technical report, Philadelphia: Temple University Press, 1989
9.Semantica dei bicordi P. Bozzi In G. Stefani and F. Ferrari, editors, La psicologica della musica in Europa e in Italia. Bologna: CLUEB, 1985
10.Perception of the major/minor distinction: I. historical and theoretical foundations R.G. Crowder Psychomusicology, 4:3-12, 1984
11.Verbal and explanatory responses to melodic musical intervals T.F. Maher and D.E. Berlyne Psychology of Music, 10(1):11-27, 1982
12.The new experimental aesthetics D.E. Berlyne New York: Halsted, pages 27-90, 1974
13.Triadic comparison of musical intervals W. J. M. Levelt, J.P. van de Geer and R. Plomp British Journal of Mathematical and Statistical Psychology, 19:163-179, 1966
14.Infants use meter to categorize rhythms and melodies: Implications for musical structure learning E.E. Hannon, S.P. Johnson Cognitive Psychology, 50:354-377, 2005
15.Communication of emotions in vocal expression and music performance: Different channels, same code ? P.N. Juslin and P.Laukka Psychological Bulletin, 129:770-814, 2003
16.Unscrambler CAMO, 2005 http://www.camo.com/
17.The Design of Experiments R.A. Fisher Hafner Press, New York, 9th edition, 1971
18.Design and Analysis of Experiments, Introduction to Experimental Design K.Hinkelman, O.Kempthorne Wiley, New York, 1994
19.Some theorems on quadratic forms applied in the study of analysis of variance problems G.E.P. Box Annals of Statistics, 25:290-302, 1954
20.ANOVA Repeated Measures Ellen R. Girden Thousand Oaks, CA: Sage Publications, 1992
21.BpmDj: Free DJ Tools for Linux Werner Van Belle 1999-2010 http://bpmdj.yellowcouch.org/
22.DJ-ing under Linux with BpmDj Werner Van Belle Published by Linux+ Magazine, Nr 10/2006(25), October 2006 http://werner.yellowcouch.org/Papers/bpm06/
23.Discrete-Time Signal Processing Alan V. Oppenheim, Ronald W. Schafer, John R. Buck Signal Processing Series. Prentice Hall, 1989
24.BPM Measurement of Digital Audio by Means of Beat Graphs & Ray Shooting Werner Van Belle December 2000 http://werner.yellowcouch.org/Papers/bpm04/
25.Apparatus for detecting the number of beats Yamada, Kimura, Tomohiko, Funada, Takeaki, Inoshita, and Gen US Patent Nr 5,614,687, December 1995
26.Estimation of tempo, micro time and time signature from percussive music Christian Uhle, Jürgen Herre Proceedings of the 6th International Conference on Digital Audio Effects (DAFX-03), London, UK, September 2003
27.Tempo and beat analysis of acoustic musical signals Eric D. Scheirer Journal of the Acoustical Society of America, 103(1):588-601, January 1998
28.Music understanding at the beat level: Real-time beat tracking for audio signals M. Goto, Y. Muraoka Readings in Computational Auditory Scene Analysis, 1997
29.Comparison of a discrete wavelet transformation and a nonuniform polyphase filterbank applied to spectral-subtraction speech enhancement T. Gulzow, A. Engelsberg, U. Heute Signal Processing, 64(1):5-19, 1998
30.Flexible nonuniform filter banks using allpass transformation of multiple order M. Kappelan, B. Strauss, P. Vary In Proceedings 8th European Signal Processing Conference (EUSIPCO'96), pages 1745-1748, 1996
31.Psychoacoustics: Facts and Models E. Zwicker, H. Fastl Springer Verlag, 2nd Edition, Berlin 1999
32.Neural Networks for Pattern Recognition C.M. Bishop Oxford University Press, 1995
33.Observations on spectrum and spectrum histograms in BpmDj Werner Van Belle September 2005 http://werner.yellowcouch.org/Papers/obsspect/index.html
34.Qt website Trolltech/Nokia http://www.trolltech.com/products/qt/index.html
35.Music as a symptom A. Portera Sanchez An R Acad Nac Med (Madr), 121(3):501-513, 2004
36.Music preferences and tobacco smoking J. Posluszna, A. Burtowy, R. Palusinski Psychol Rep, 94(1):240-242, Feb 2004
37.Music preference, depression, suicidal preoccupation and personality: comment on Stack and Gundlach's papers D. Lester, M. Whipple Suicide Life Threat Behav., 26(1):68-70, 1996
38.Music preferences and suicidality: a comment on Stack M. Burge, C. Goldblat, D. Lester Death Stud., 26(6):501-504, Aug 2002
39.The Emotional Brain: The Mysterious Underpinnings of Emotional Life Joseph LeDoux Touchstone: Simon & Schuster, 1996
40.What's basic about basic emotions ? A. Ortony, T.J. Turner Psychological Review, 97:315-331, 1990
41.Emotion: Theory, research, and experience: Vol 1. Theories of emotion, chapter 'A general psychoevolutionary theory of emotion', pages 3-33 R. Plutchik New York: Academic, 1980
42.Manual for the profile of mood states D.M. McNair, M. Lorr, L.F. Droppleman Technical report, San Diego, CA: Educational and Industrial Testing Services, 1971
43.The ability of Corah's Dental Anxiety Scale and Spielberger's State Anxiety Inventory to distinguish between fearful and regular Norwegian dental patients G. Kvale, E. Berg, M. Raadal Acta Odontol Scand, Department of Clinical Psychology, University of Bergen, Norway, 56(2):105-109, April 1998
44.Emotions profile index - manual R. Plutchik Technical report, California: Western Psychological Services, 1974
45.Autonomic nervous system activity distinguishes among emotions P. Ekman, R.W. Levenson, W.V. Friesen Science, 221:1208-1210, 1983
46.Sentralnervesystemet [The Central Nervous System] P. Brodal Technical report, Oslo: Universitetsforlaget, 2001
47.Autonomic nervous system differences among emotions R.W. Levenson Psychological Science, 3(1):23-26, 1992
48.Human neuroanatomy R.C. Truex and M.B. Carpenter Baltimore: Williams & Wilkins, 1969.
49.Affective reactions to acoustic stimuli M.M. Bradley, P.J. Lang Psychophysiology, 37:204-215, 2000
50.The Pupil: Anatomy, physiology and clinical applications I.E. Loewenfeld Detroit: Wayne State University Press, 1993
51.Maximum Entropy and Bayesian Methods, Cambridge 1994; chapter 'AutoClass - a Bayesian Approach to Classification' J. Stutz and P. Cheeseman Kluwer Academic Publishers, Dordrecht, 1995
52.Advances in Knowledge Discovery and Data Mining, chapter 'Bayesian Classification (AutoClass): Theory and Results' P. Cheeseman and J. Stutz AAAI Press/MIT Press, 1996
53.ISO/IEC MPEG-2 Advanced Audio Coding M. Bosi, K. Brandenburg, S. Quackenbush, L. Fielder, K. Akagiri, H. Fuchs, M. Dietz, J. Herre, G. Davidson, and Yoshiaki Oikawa Proc. of the 101st AES Convention, preprint 4382, 1996
54.Perfect reconstruction filter banks with rational sampling factors J. Kovacevic and M. Vetterli IEEE Transactions on Signal Processing, 41(6), June 1993
55.A Friendly Guide to Wavelets Gerald Kaiser Birkhäuser, 6th edition, 1996
56.Differential lateralization for positive and negative emotion in the human brain: EEG spectral analysis G.L. Ahern and G.E. Schwartz Neuropsychologia, (23):745-756, 1985
57.Regional brain electrical asymmetries discriminate between previously depressed and healthy control subjects J.B. Henriques, R.J. Davidson Journal of Abnormal Psychology, 99:22-31, 1990
58.Complex spectral patterns with interaural differences: Dichotic pitch and the central spectrum William A. Yost, P.J. Harder, and R.H. Dye In William A. Yost and C.S. Watson, editors, Auditory Processing of Complex Sounds. Lawrence Erlbaum, 1987
59.Validation of a music mood induction procedure: Some preliminary findings P. Kenealy Cognition and Emotion, 2:11-18, 1988
60.Effects of music preference and selection on stress reduction G.C. Mornhinweg Journal of Holistic Nursing, 10:101-109, 1992
61.Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion Anne J. Blood, Robert J. Zatorre PNAS, 98(20):11818-11823, 25 September 2001
62.Advanced Signal Processing Techniques in BpmDj Werner Van Belle In 'Side Program of the Northern Norwegian Insomnia Festival'. Tvibit August 2005. http://werner.yellowcouch.org/Papers/insomnia05/index.html
63.Assessment and treatment of nursing home residents with depression or behavioral symptoms associated with dementia: a review of the literature M. Snowden, K. Sato, and P. Roy-Byrne Journal American Geriatric Society, 51(9):1305-1317, Sep 2003
64.Massage and music therapies attenuate frontal EEG asymmetry in depressed adolescents N.A. Jones, T. Field Adolescence, 34(135):529-534, Fall 1999
65.Music improves sleep quality in older adults H.L. Lai, M.Good J Adv Nurs., 49(3):234-244, Feb 2005
66.Active music therapy for chronic pain: a prospective study H.C. Muller-Busch, P. Hoffmann Schmerz, 11(2):91-100, Apr 18 1997
67.Music therapy for chronic headaches: evaluation of music therapeutic groups for patients suffering from chronic headaches M. Risch, H. Scherg, R. Verres Schmerz, 15(2):116-125, Apr 2001
68.Effect of music therapy among hospitalized patients with chronic low back pain: a controlled, randomized trial S. Guetin, E. Coudeyre, M.C. Picot, P. Ginies, B. Graber-Duvernay, D. Ratsimba, W. Vanbiervliet, J.P. Blayac, and C. Herisson Ann Readapt Med Phys., 48(5):217-224, June 2005.
69.The effects of music listening after a stressful task on immune functions, neuroendocrine responses, and emotional states in college students E. Hirokawa, H. Ohira Journal of Music Therapy, 40(3):189-211, Fall 2003
70.Composite effects of group drumming music therapy on modulation of neuroendocrine-immune parameters in normal subjects B.B. Bittman, L.S. Berk, D.L. Felten, J. Westengard, O.C. Simonton, J. Pappas, M. Ninehouser Altern Ther Health Med, 7(1):38-47, Jan 2001
71.Music therapy for mood disturbance during hospitalization for autologous stem cell transplantation: a randomized controlled trial B.R. Cassileth, A.J. Vickers, L.A. Magill Cancer, 98(12):2723-2729, Dec 15 2003
72.Reproducibility of negative mood induction: a self-referent plus musical mood induction procedure and a controllable/uncontrollable stress paradigm R.A. Richell, M. Anderson Journal of Psychopharmacology, 18(1):94-101, Mar 2004
73.Active music therapy in the treatment of multiple sclerosis patients: a matched control study W. Schmid, D. Aldridge Journal of Music Therapy, 41(3):225-240, Fall 2004
74.The effects of music and paintings on mood N. Stratton, A. H. Zalanowski Journal of Music Therapy, 26(1):30-41, 1989
75.The effect of music on pain relief and relaxation of the terminally ill S.L. Curtis Journal of Music Therapy, 23(1):10-24, 1986
76.Cubase http://www.steinberg.de/
77.Cakewalk Twelve Tone Systems, 2005 http://www.cakewalk.com/
78.Protools Digidesign, 2005 http://www.digidesign.com/
79.The CryoSleep Brainwave Generator Werner Van Belle http://cryosleep.yellowcouch.org/