Playing percussion for Les Misérables with MainStage


A few months ago, I was invited to play percussion for a local production of the musical “Les Misérables”. The percussion book contains cues for 30 or so instruments, including the usual timpani, snare, and suspended cymbal parts, plus lesser-used things like garbage can lids, brake drums, and extensive roto toms. Needless to say, I wasn’t going to go out and buy all of that, and the theater didn’t have access to everything I needed. Enter MainStage.

For those of you who aren’t familiar with the endlessly capable world of MainStage, google it. Seriously. Imagine being able to build a custom piece of software with all the functionality you need to get through even the most complex of gigs. MainStage lets you do this and more. Seriously, google it.

I started by tackling the problem of convincing, effective timpani parts. I needed to be able to play rolls by hand [meaning I didn’t want a sample of a roll]. So I found some samples I liked, tweaked the velocity curve to make them responsive, then copied the entire patch up two octaves, effectively creating a “second mallet” so I could play the same note with both hands on different keys.

[Image: the split between the two timpani patches.]
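MainStage does all of this through its patch and mapping UI rather than code, but if it helps to picture the “second mallet” trick, here is a rough Python sketch of the idea. The MIDI note numbers and sample names are made up for illustration; the point is simply that the copied zone, two octaves up, points at the same drums as the original.

```python
# Hypothetical sketch of the "second mallet" idea: duplicate a note->sample
# map two octaves (24 semitones) higher so each hand has its own copy of
# every drum. Note numbers and file names are illustrative, not MainStage's.

TIMPANI_NOTES = {
    40: "timp_E2.wav",   # low E drum
    43: "timp_G2.wav",
    45: "timp_A2.wav",
}

def build_two_mallet_map(base_map, octave_offset=24):
    """Return a mapping where every note also appears `octave_offset`
    semitones higher, triggering the exact same sample."""
    mapping = dict(base_map)
    for note, sample in base_map.items():
        mapping[note + octave_offset] = sample   # the "second mallet" zone
    return mapping

two_mallet = build_two_mallet_map(TIMPANI_NOTES)
# Note 40 (one hand) and note 64 (the other hand) now hit the same E drum,
# so hand-to-hand rolls work without a pre-recorded roll sample.
```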

I created another patch for the rest of the unpitched percussion, using the same “two-stick” approach for the concert toms, roto toms, gran cassa, and hi-hat.

For the snare drum, I actually ended up with three keys under each hand. The D and E keys triggered the normal samples, which let me play double strokes and flams using my first and second fingers on each hand, and the D# key in between triggered a buzz roll on both sides.
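Again, this is all done with MainStage key mappings, but the layout is easier to see written out. A minimal sketch, with hypothetical MIDI note numbers (D/D#/E in two octaves, one set per hand):

```python
# Hypothetical note numbers: D4/D#4/E4 = 62/63/64 for one hand,
# D5/D#5/E5 = 74/75/76 for the other.
SNARE_LAYOUT = {
    62: "snare_hit",    # D  - normal stroke, finger 1
    64: "snare_hit",    # E  - normal stroke, finger 2
    63: "snare_buzz",   # D# - buzz roll key between them
    74: "snare_hit",
    76: "snare_hit",
    75: "snare_buzz",
}

def on_note(note, velocity):
    """Look up which snare articulation a key should trigger."""
    sample = SNARE_LAYOUT.get(note)
    if sample is not None:
        print(f"trigger {sample} at velocity {velocity}")
```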

Another tricky aspect was suspended cymbal rolls. I decided to use my mod wheel as an expression controller. I assigned a key to a looped sample of a cymbal roll: if I held the key, it would roll forever, and on release I’d get a nice natural decay. To get a dynamic roll, I’d turn the expression all the way down before triggering the sample, then, while the key was held down, slowly raise the expression for nice [sometimes long] crescendi.
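In MainStage this is just a screen control mapped from the mod wheel to the patch’s level, but the logic behind the crescendo trick boils down to something like the sketch below. The CC numbers and the roll key are assumptions for illustration only.

```python
# Rough sketch: a looped cymbal-roll sample whose level follows the mod wheel.
ROLL_KEY = 60          # hypothetical key assigned to the looped roll sample
roll_gain = 0.0        # 0.0 = inaudible, 1.0 = full volume
roll_held = False

def on_midi(kind, data1, data2):
    global roll_gain, roll_held
    if kind == "cc" and data1 == 1:                 # mod wheel as expression
        roll_gain = data2 / 127.0                   # raise slowly for a long crescendo
    elif kind == "note_on" and data1 == ROLL_KEY:
        roll_held = True                            # loop keeps rolling while held
    elif kind == "note_off" and data1 == ROLL_KEY:
        roll_held = False                           # release -> natural decay sample
```

Start with the wheel at zero, trigger the key, then ride the wheel up and the loop swells under your control.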

There were also a few pitched percussion patches for xylophone, glockenspiel, crotales, and tubular bells. No sweat there.

I ran into a problem with passages where the timpani and toms trade phrases. To solve it, I set up my sustain pedal as a momentary patch change: any time the pedal was down, the timpani patch was active, and as soon as I released it, I was back on the “everything else” patch. This also proved useful for getting back to drums and cymbals quickly after a short xylophone cue or the like.
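The pedal mapping amounts to a simple switch on the sustain controller (MIDI CC 64). A rough sketch of the behavior, using the two patch names from above:

```python
# Sustain pedal (CC 64) as a momentary patch switch: timpani while held,
# back to the main percussion patch on release.
ACTIVE_PATCH = "everything else"

def on_control_change(cc, value):
    global ACTIVE_PATCH
    if cc == 64:
        ACTIVE_PATCH = "timpani" if value >= 64 else "everything else"
```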

It’s worth noting that I ended up using only samples included with MainStage. So if any of you need some outlandish solution for live performance, you can probably make it happen in MainStage without any third-party samples or plug-ins.

Happy Gigging!

-John

Mental Telepathy Is Now Possible!


From: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0105225#abstract0

There has been much speculation about what could be achieved in the area of human brain-to-brain transfer of information.

A series of studies has hinted at the possibilities:
Brain-to-Brain Control Established Between Humans and Animals at Harvard
Remote Controlled Humans Via Internet Now a Reality
New Mind Reading Research Aims to Synchronize Humans

Now an international team is declaring a successful brain-to-brain data transfer from a person sitting in India to a receiving person in France.
The journal PLOS ONE reports that the first brain-to-brain interface has been achieved, and that “brain stimulation techniques are now available for the realization of non-invasive computer-brain interfaces.” The authors summarize the history of this research as follows:

The evolution of civilization points to a progressive increase of the interrelations between human minds, where by “mind” we mean a set of processes carried out by the brain [1]. Until recently, the exchange of communication between minds or brains of different individuals has been supported and constrained by the sensorial and motor arsenals of our body. However, there is now the possibility of a new era in which brains will dialogue in a more direct way [2]. … Pioneering research in the 60s using non-invasive means already demonstrated the voluntary control of alpha rhythm de-synchronization to send messages based on Morse code [11]. Over the last 15 years, technologies for non-invasive transmission of information from brains to computers have developed considerably, and today brain-computer interfaces embody a well-established, innovative field of study with many potential applications [12]–[16]. Recent work has demonstrated fully non-invasive human to rat B2B communication by combining motor imagery driven EEG in humans on the BCI side with ultrasound brain stimulation on the CBI-rat side [17]. … Here we show how to link two human minds directly by integrating two neurotechnologies – BCI and CBI –, fulfilling three important conditions, namely a) being non-invasive, b) cortically based, and c) consciously driven (Fig. 1). In this framework we provide the first demonstration of non-invasive direct communication between human minds. (emphasis added)

The method used was Transcranial Magnetic Stimulation (TMS), which has shown the most promise in directly accessing the brain and “thought.”

The intensity of pulses was adjusted for each subject so that a) one particular orientation of the TMS-induced electric field produced phosphenes [19] (representing the “active direction” and coding the bit value “1”), and b) the orthogonal direction did not produce phosphenes (representing the “silent direction” and coding the bit value “0”). Subjects reported verbally whether or not they perceived phosphenes on stimulation.

This resulted in online data transfer from mind to mind, essentially telepathic e-mail:


On March 28th, 2014, 140 bits were encoded by the BCI emitter in Thiruvananthapuram and automatically sent via email to Strasbourg, where the CBI receiver (subject 3) was located. There, a program parsed incoming emails to navigate the robot and deliver TMS pulses precisely over the selected site and with the appropriate coil orientation. A similar transmission with receiver subject 2 took place on April 7th, 2014. In both cases, the transmitted pseudo-random sequences carried encrypted messages encoding a word – “hola” (“hello” in Catalan or Spanish) in the first transmission, “ciao” (“hello” or “goodbye” in Italian) in the second. Words were encoded using a 5-bit Bacon cipher [31] (employing 20 bits) and replicated for redundancy 7 times (for a total of 140 bits). The resulting bit streams were then randomized using random cyphers selected to produce balanced pseudo-random sequences of 0s and 1s (for subject blinding and proper statistical analysis purposes in addition to providing word-coding). On reception, de-cyphering and majority voting from the copies of the word were used to decode the message.
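The arithmetic is easy to check: 4 letters × 5 bits each = 20 bits, replicated 7 times = 140 bits. A rough Python sketch of a Bacon-style 5-bit encoding with majority-vote decoding (leaving out the randomizing cipher the paper also applies for blinding) might look like this:

```python
# Illustrative only: encode each letter as 5 bits ('a' -> 00000, 'b' -> 00001, ...),
# repeat the 20-bit word 7 times, then decode by majority vote over the copies.

def bacon_encode(word):
    return [int(b) for ch in word.lower()
            for b in format(ord(ch) - ord('a'), '05b')]

def bacon_decode(bits):
    letters = []
    for i in range(0, len(bits), 5):
        value = int(''.join(str(b) for b in bits[i:i + 5]), 2)
        letters.append(chr(value + ord('a')))
    return ''.join(letters)

def majority_vote(copies):
    # copies: several received versions of the same 20-bit word
    return [1 if sum(col) > len(copies) / 2 else 0 for col in zip(*copies)]

word_bits = bacon_encode("hola")                      # 4 x 5 = 20 bits
stream = word_bits * 7                                # 140 bits over the link
copies = [stream[i:i + 20] for i in range(0, 140, 20)]
print(bacon_decode(majority_vote(copies)))            # -> "hola"
```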

All of this is a technical way of saying that, for the first time, not only has there been a signal transfer representing data, but the door has also been opened to transmitting emotions: a mind-to-mind transfer, not merely brain-to-brain.

Here is where this type of research could become alarming: mind-control researchers have been studying ways to remotely control human subjects through TMS and to implant certain narrative structures, as highlighted by a secret DARPA project at the University of Arizona. My emphasis added:

We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or other motor/PNS mediated means in interpersonal communication. Although certainly limited in nature (e.g., the bit rates achieved in our experiments were modest even by current BCI standards, mostly due to the dynamics of the precise CBI implementation), these initial results suggest new research directions, including the non-invasive direct transmission of emotions and feelings or the possibility of sense synthesis in humans, that is, the direct interface of arbitrary sensors with the human brain using brain stimulation, as previously demonstrated in animals with invasive methods [2].

The proposed technology could be extended to support a bi-directional dialogue between two or more mind/brains (namely, by the integration of EEG and TMS systems in each subject). In addition, we speculate that future research could explore the use of closed mind-loops in which information associated to voluntary activity from a brain area or network is captured and, after adequate external processing, used to control other brain elements in the same subject. This approach could lead to conscious synthetically mediated modulation of phenomena best detected subjectively by the subject, including emotions, pain and psychotic, depressive or obsessive-compulsive thoughts.

Finally, we anticipate that computers in the not-so-distant future will interact directly with the human brain in a fluent manner, supporting both computer- and brain-to-brain communication routinely. The widespread use of human brain-to-brain technologically mediated communication will create novel possibilities for human interrelation with broad social implications that will require new ethical and legislative responses.

In short, researchers are admitting that we have crossed a key threshold, jumping from lab rats to human subjects, and that this will bring massive social transformation whose course depends on the ethics of the scientific establishment and the legislative permissions of government.

Given some of the stated ethics coming from the esteemed halls of science (the statements of Oxford’s Dr. Roache, for example) and the history of legislative responses to potential doomsday scenarios, we would be wise not to hold our collective breath. Time is short to speak up and inject morality and ethics into this discussion.