Real-time Electric Guitar Sound Modelling Application
Abstract
Sound modelling is an essential part of the guitar-playing experience. Creating a Java application that imitates the sound of a traditional, physical tube amplifier can be a very challenging task: the application's functionality must be well developed, providing latency-free, real-time sound processing.
This document provides the reader with information regarding the planning process of an audio-processing project in the Java programming language, a literature review covering digital signal processing in software engineering and electric guitar sound effects, and a detailed system design and implementation of the application.
The main objective of this project is to utilize digital signal processing techniques to create a Java application that imitates the behaviour of a physical pre-amplifier for an electric guitar, with built-in effects.
Table of Contents
Abstract
Table of Contents
List of Tables
List of Figures
1 Introduction
  1.1 Title
  1.2 Overview
  1.3 Background
  1.4 Project Scope
    1.4.1 Software Release
    1.4.2 Objectives
  1.5 Feasibility
    1.5.1 Project Requirements
    1.5.2 Technical Challenges
  1.6 High-Level List of Requirements
    1.6.1 Work Breakdown Structure
    1.6.2 Proposed Methodology
  1.7 Expected Results
  1.8 Summary
2 Literature Review
  2.1 Introduction
  2.2 Real Time Audio Processing
  2.3 Modelling Sound Effects
    2.3.1 Linear Effects
    2.3.2 Distortion and Overdrive
  2.4 Audio Processing on Android
  2.5 Summary
3 System Design
  3.1 Functional Design
    3.1.1 Audio Engine
    3.1.2 Audio Processor
  3.2 GUI Design
4 Implementation and Methodology
  4.1 Functionality
  4.2 Audio Engine
  4.3 Audio Processor
    4.3.1 Delay Effect
    4.3.2 Overdrive Effect
    4.3.3 Low-Pass Filtering
  4.4 Front-End GUI
5 Conclusion and Future Work
Bibliography

List of Tables
1.1 Work Breakdown Structure Table

List of Figures
1.1 Gantt Chart
1.2 Agile Methodology (xandermar.com, 2016)
3.1 Audio Engine Class Diagram
3.2 Audio Processor Class Diagram
3.3 GUI Class Diagram
3.4 GUI Screen-shot
4.1 Clean Signal (Ryazanov, 2012)
4.2 Soft Clipping (Ryazanov, 2012)
4.3 Hard Clipping (Ryazanov, 2012)
Chapter 1
Introduction
1.1 Title
Real-Time Electric Guitar Sound Modelling Application.
1.2 Overview
The purpose of this project is to understand the concepts of digital signal processing, mainly in the area of audio manipulation. Digital signal processing is a major domain in software engineering, used widely in fields including music, radio and television entertainment, telecommunications, biomedical engineering, data compression, computer vision, medicine, science and the military. The main focus of this research is to study digital signal processing and use the acquired knowledge to develop a musical application.
1.3 Background
A preamplifier (preamp) is an electronic amplifier that prepares a small electrical signal for
further amplification or processing. They are typically used to amplify signals from microphones,
instrument pickups, and phonographs to line level. Preamplifiers are often integrated
into the audio inputs on mixing consoles, DJ mixers, and sound cards. They can also be stand-alone devices. (Wikipedia.com, 2016)
Many modern guitarists have switched from traditional equipment to software when recording their work in a studio. At present there is a large selection of very expensive and complex programs that can be used to model the sound of an electric guitar in real time, such as Guitar Rig or Bias FX, which are only available for Windows and Mac systems. The main reason modern guitarists are willing to use such software is that it provides many options and possibilities. Instead of buying separate amplifiers and a rack of effect pedals, the musician can simply scroll through a list in the software and click on the desired amplifier.
Unfortunately, such software also has its disadvantages. When using physical equipment, the connection between the guitar and the amplifier is made with a simple 6.3mm jack cable. To set the software up correctly, however, the guitarist is required to use either a MIDI (Musical Instrument Digital Interface) connection or special adapters that convert the input so that a guitar can be connected to a computer.
The main aim of this project is to create an application that is simple to use yet still provides the necessary functions. The input signal should be received through the device's Line-In interface, meaning that the connection is much simplified, as it is achieved with only one jack cable (the same as when using a physical amplifier). Finally, what should make this application unique is that it will be fully developed in the Java programming language without the use of any third-party libraries. The majority of digital signal processing developers have tended to avoid Java, as the implementations require low-level memory management and the Java Virtual Machine has caused high latency, which is not acceptable in real-time applications. The main research question of this project is: will the currently available version of Java prove powerful enough to develop such an application?
1.4 Project Scope
The aim of this project is to utilize digital signal processing techniques and create a Java
application that will imitate the behaviour of a physical pre-amplifier for an electric guitar.
The application should allow any guitarist to connect an electric guitar directly to the Line-In
interface of their device. The user should then be presented with features which will modify
the input signal before the application broadcasts it to the speaker output.
1.4.1 Software Release
The deadline for submission of this project is the 27th of April, 2017.
1.4.2 Objectives
The main objectives for completion of this project are as follows:
• Research Digital Signal Processing for Software Engineering.
• Research Digital Signal Processing techniques in the Java programming language.
• Adapt the acquired knowledge to develop an audio streaming application.
• Research how different guitar effects are achieved in physical hardware and implement audio manipulation effects in the application.
• Develop a working application with a front-end User Interface.
1.5 Feasibility
1.5.1 Project Requirements
The main requirement for this project is that the developer has strong skills and experience in developing with the Java programming language and the Eclipse Integrated Development Environment.
The technical requirements for this project are:
• A laptop or PC with the Eclipse Integrated Development Environment and the Java Software Development Kit.
• An electric guitar with working pick-ups.
• A 6.3mm monophonic jack cable.
• A 6.3mm to 3.5mm jack cable adapter OR a jack cable with separate ends.
• A stereo AUX cable for the speaker.
• A stereo speaker.
1.5.2 Technical Challenges
Initially, the idea for this project was to develop the application for Android devices in order to make it more portable. Unfortunately, the Marshmallow version of the Android operating system, current at the time of development, had many issues which did not allow the project to succeed. The first issue discovered was that the Android Java API for pulse code modulation, an essential step in reading the audio samples, did not provide the low-level memory management required for low-latency operations.
Following further research, the programmer decided to attempt development using the C++ programming language and the Java Native Interface, which allows C++ code to be executed from Java. Even with the help of the Superpowered Software Development Kit, which provides audio processing algorithms for the ARM processor architecture used in Android devices, latency remained an unsolvable problem. It was later discovered that the fault lies deep within the Android operating system itself and the audio hardware architecture.
Android Audio’s 10 Millisecond Problem, a little understood yet extremely difficult technical
challenge with enormous ramifications, prevents these sorts of revenue producing apps
from performing in an acceptable manner and even being published on Android at this point
in time. (szantog and Pv, 2015)
After discussing this problem with Dr. Simon McLoughlin, it was decided to continue the development of this project on the Windows system using Java.
1.6 High-Level List of Requirements
1.6.1 Work Breakdown Structure
Table 1.1: Work Breakdown Structure Table
1. Research DSP: Research the Digital Signal Processing topic in Software Engineering.
2. Research Android DSP: Research how to implement audio DSP for an Android application.
3. Develop a prototype: Develop an audio streaming application using the Java Android Audio API.
4. Research APIs: Research third-party APIs that could be used for this project.
5. Research JNI: Research the Java Native Interface and how to integrate it into Android Studio.
6. Implement APIs: Implement the Superpowered SDK and use the C++ library to develop an audio streaming application.
7. Research Java DSP: Research how to implement audio DSP in standard Java for a Windows application.
8. Develop a prototype: Develop an audio streaming application for Windows using the Java Sound API.
9. Research guitar effects: Research how guitar effects work; create and implement these effects in the application based on the mathematical formulas.
10. Test the prototype: Test the prototype and the implemented effects.
11. Implement filters: Implement filters to clean up the audio and remove noise.
12. Develop the final application: Integrate all components into a final version of the application and develop a GUI.
Figure 1.1 illustrates the schedule for this project's work breakdown structure.
Figure 1.1: Gantt Chart
1.6.2 Proposed Methodology
Figure 1.2: Agile Methodology (xandermar.com, 2016)
The main reason Agile Software Development has been chosen as the methodology for this project is that it ensures adaptive planning and a continuous-improvement approach. In a project of this kind, it is safe to assume that constant testing and the introduction of changes, even at a late stage of development, will be inevitable. The Agile methodology not only accounts for this, but also supports frequent software delivery, meaning that there is always room for additions and improvements even after the software has been released.
1.7 Expected Results
The final version of the application is expected to process the audio signal in real time, meaning that there should be no perceptible latency between the input and the output of the sound. The software should allow the user to apply filters and effects to the sound, such as overdrive or delay, and to control the levels of the parameters that each effect uses. The application should support direct input and output of the audio through the device's Line-In and Line-Out interfaces.
1.8 Summary
Although there is a large selection of programs which can be used for guitar sound modelling, the majority are very expensive and complex. Creating such an application in Java allows for cross-platform compatibility and can also prove that low-level audio programming is as feasible in Java as it is in C++ or Matlab.
Considering the number of features and the techniques required for a successful implementation, this project can be very time consuming and challenging. Despite the difficulty, the development process can hugely benefit the developer's knowledge and provide great experience of many software engineering concepts.
Chapter 2
Literature Review
2.1 Introduction
Digital Signal Processing is one of the most powerful technologies that will shape science
and engineering in the twenty-first century. Revolutionary changes have already been made
in a broad range of fields: communications, medical imaging, radar & sonar, high fidelity
music reproduction, and oil prospecting, to name just a few. Each of these areas has developed
a deep DSP technology, with its own algorithms, mathematics and specialized techniques.
This combination of breadth and depth makes it impossible for any one individual to master
all of the DSP technology that has been developed. (Smith, 1999)
The aim of this literature review chapter is to understand the aspects of digital signal processing relevant to audio processing and sound modelling. The research mainly covers real-time DSP, the modelling of guitar sound effects and the implementation of such functions in the Java programming language.
2.2 Real Time Audio Processing
In real world, sound travels across air as waves, which are then interpreted by the human
ear through vibrations (Trebien, 2006). In order to process real sound using a computer,
the audio must be translated into a digital representation of the sound waves. This can be
10
2.2. REAL TIME AUDIO PROCESSING 11
achieved using Pulse Code Modulation, which converts an analog signal into a byte input
stream. In general, any finite series of numbers (digital representation) can only represent an
analog wave (real world representation) to a finite accuracy. (Browning, 1997).
Although the hardware used in most modern devices is quite powerful, the digital representation of the signal may occasionally deviate slightly from the analog input. The electrical characteristics of the equipment's digital-to-analog converter, the device that converts data points to corresponding voltage levels, usually result in a moderately smooth curved waveform (Browning, 1997).
When converting an analog signal into a digital one, it is important that a correct sampling rate is chosen. During conversion using pulse code modulation, the analog sound wave, which is a continuous signal, must be divided into a sequence of samples. Choosing a correct sample rate, which in most musical recording circumstances is 44.1 kHz, helps to eliminate distortion and other undesired noise in the audio output (Beckmann and Fung, 2004).
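To make these numbers concrete, the short sketch below (our own illustration; the block size is an assumed, typical value rather than one taken from this project) relates the 44.1 kHz sample rate to the time span of one processing block:

public class SampleRateMath {
    public static void main(String[] args) {
        int sampleRate = 44100;  // samples per second, standard for music recording
        int blockSamples = 512;  // an assumed, typical processing block size
        // One sample covers 1/44100 s (about 22.7 microseconds), so a block of
        // 512 samples spans roughly 11.6 ms; block-based processing adds at
        // least this much delay before any computation even begins.
        double msPerBlock = 1000.0 * blockSamples / sampleRate;
        System.out.printf("One %d-sample block = %.1f ms of audio%n",
                blockSamples, msPerBlock);
    }
}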
As described in a thesis by Fernando Trebien (Trebien, 2006), real-time processing can be achieved with the use of a modular architecture. This approach allows the programmer to develop processing modules which can represent different hardware units. In this project, the very first module would be the connection of the guitar to the device; it is responsible for receiving the audio input coming into the device and transforming it into a digital signal. Further modules should then process the digital signal, just as a physical amplifier or effect pedals would. The final module of the architecture is responsible for the output of the processed signal. The modular architecture is a good approach for this project, as each module can use different parameters to affect the processing of a signal, and multiple modules can be combined at the same time to provide a single output.
2.3 Modelling Sound Effects
2.3.1 Linear Effects
The most common linear effects used during the modelling of an electric guitar sound are
delay, flanger and reverb.
In signal processing applications, delay is the most used building block, which expectedly has effects in both the time and frequency domains (Zeki, 2015). When applied to a guitar amplifier, the delay effect is mostly used to shift the sound in time while linearly affecting its frequency. The processed effect is then played alongside the original input signal. In a digital guitar delay effect, we generally use the input signal buffer as our reference, filter that signal with gain and delay parameters, then sum it back into the original signal. In this way, we can hear both the real-time input and the delayed version of the signal (Zeki, 2015).
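This behaviour can be summarised in a single difference equation (our notation, stated here for clarity rather than quoted from Zeki's report), where D is the delay in samples and g is the feedback gain:

y[n] = x[n] + g \cdot x[n - D]

The first term is the real-time input and the second is its delayed, attenuated copy.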
The flanger is an effect which uses delay as a filter, where the amount of delay changes as time passes. The user can pre-define parameters such as the pattern and amount of the delay, which affect the end result of the flanger effect and its variations. The delay of an audio flanger generally ranges sinusoidally between 0 and 15 ms. Assuming a 44100 Hz (samples/sec) sampling rate, each sample is taken at 2.2676e-05 s intervals. This results in a delay range of approximately 0 to 660 samples, which in turn creates small variations over the note played, changing sinusoidally with time (Zeki, 2015).
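The sample figures above follow directly from the sample rate. A common way to write the time-varying delay of a flanger (again our notation, not Zeki's) is

D(n) = \frac{D_{\max}}{2} \left( 1 + \sin\left( \frac{2\pi f_{\mathrm{LFO}} \, n}{f_s} \right) \right)

where f_s = 44100 Hz and f_{\mathrm{LFO}} is the sweep rate. With D_{\max} = 0.015 s, the range in samples is 0.015 × 44100 ≈ 661, matching the approximate figure of 660 quoted above.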
When recording an electric guitar, the sound processed by an amplifier is not affected by the surroundings, as it would be when recording an acoustic guitar. Usually, when recording something with a microphone, the sound scatters around the closed environment, such as a room or a hall, and arrives back at the receiver. To achieve such an effect digitally, reverberation is required. Reverberation environments include the simulation of a room, concert hall, cave, arena, hangar, etc. Different reverberation algorithms are designed to model diverse reverberation environments. The basis of these algorithms is room acoustics modelling, where the reflections and scattering of the room are considered (Zeki, 2015).
2.3.2 Distortion and Overdrive
Distortion and overdrive are the two main highly nonlinear guitar audio effects. The signature tones of famous guitar players can be compared by discussing their distortion and overdrive effects. Through the emergence of blues, rock and metal, the implementations of distortion and overdrive effect pedals have changed drastically (Zeki, 2015).
In a physical amplifier, the distortion effect is achieved by a method called two-way clipping of the sound: the valves in the amplifier are overdriven to such a level that they operate in their nonlinear region. Distortion and overdrive usually happen at the first stage of signal amplification, for which the pre-amplifier is used.
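In digital form, the simplest models of these behaviours (standard textbook formulations, not formulas taken from Zeki's report) are hard clipping at a threshold t,

y = \max(-t, \min(t, x))

and soft clipping, which replaces the abrupt cut with a smooth saturating curve such as

y = \tanh(x / t)

rounding the waveform's edges in the way overdriven valves do.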
2.4 Audio Processing on Android
With more than 80 percent market share, Android is the dominant mobile operating system
today. It’s running on countless models of smartphones and tablets, as well as many other
devices. (Plesac, 2015).
During the research into Android application development, it was observed that the programming environment, as well as the operating system itself, has improved hugely over the last few years. The Android Studio Integrated Development Environment (IDE) provides access to a large number of Application Programming Interfaces (APIs) and packages which aid the programmer during development. It is said that with the release of the Lollipop version of Android, every part of Android underwent modifications and improvements (Plesac, 2015). The hardware used in smartphones has also improved, both in quality and performance, providing a large range of devices with powerful processors, graphics chips and high-capacity batteries, all of which help in carrying out this project.
Another major improvement released by Google for Android developers was the Native Development Kit (NDK) and the Java Native Interface (JNI) for the Gradle build system used in Android Studio. The NDK and JNI allow the developer to implement C++ classes and functions in an application's source code, which is mainly written in Java. This is a very important feature for audio processing on Android, as the C++ language allows the programmer to implement more precise real-time DSP functions. "During my search I got acquainted with some libraries and SDKs that work with audio data and I'd like to tell you about them. Java programming language used in Android development doesn't allow you to work effectively with audio content, that's why you'll mostly have to deal with JNI and C++ code." (Verovets, 2016)
2.5 Summary
The key ingredient for a real-time electric guitar sound modelling application is an accurate simulation of physical hardware. A modular architecture should be used for the digital signal processing, allowing the programmer to create modules that carry out the operations needed to achieve different sound effects. Based on the research, as long as the application properly receives a continuous signal from the input interface, the signal processing modules can be developed separately. The implementation of the various effects should use methods and mathematical formulas similar to those realised in physical hardware, modifying the audio samples to achieve the desired output.
Chapter 3
System Design
The following chapter provides the reader with an insight into how the system has been designed, using class diagrams. The diagrams give an overview of the system by showing the relationships among the different classes.
3.1 Functional Design
3.1.1 Audio Engine
Figure 3.1: Audio Engine Class Diagram
The Audio Engine is split into two separate modules, which are managed by the Audio
Controller class.
• The Audio Controller class contains all parameters required by the audio engine modules. It initializes the microphone and speaker modules by opening the Line-In and Line-Out interfaces.
• The Microphone Input Engine class handles the pulse code modulation and streaming of audio. Once the input line is started, a separate thread is executed, which receives a stream of bytes representing the audio received by the microphone interface. Each pair of bytes is then shifted into a short value representing a single audio sample. An array of audio samples is then passed to the Speaker Output Engine.
• The Speaker Output Engine class is responsible for dealing with the received audio samples. The playSignal method is accessed by the microphone thread and contains instructions to process the sound before it is broadcast to the output interface. The samples are packed back into pairs of bytes, and the resulting byte array is written out to the speaker.
3.1.2 Audio Processor
Figure 3.2: Audio Processor Class Diagram
The Audio Processor is split into multiple modules working together. The Effects Controller
is initialized by the Audio Controller class. A reference to this controller is passed to
the Speaker Output Engine, in order to access other modules required to modify the received
audio samples.
• The Effects Controller class contains references to the currently active effects and filters. This class is also responsible for managing all parameters required by each processing module.
• Effect is an interface listing the behaviour that each processing module needs to implement (a minimal sketch is given after this list). The applyEffect(short[]) method receives an array of audio samples and must contain instructions to process the audio according to the module's function; the processed samples are then returned. Every processing module must also implement the updateValues method, giving the user the ability to change the parameters of an active effect or filter at any time.
• Speaker Output Engine: before the audio is written to the output interface, the output engine accesses the Effects Controller to check whether the user has selected any effects or filters from the list. If so, a reference to that module is acquired from the controller and the audio samples are passed to the processing module.
• The Overdrive class is a processing module which applies the overdrive effect to the audio. The overdrive effect distorts the input signal based on the amount of drive the user selects. Low drive values create soft clipping, leaving the sound a distorted sinusoidal wave. High drive values create hard clipping, giving the sound sharp edges and resulting in a highly distorted square wave.
• The Delay class is a processing module which creates a time-delayed signal with echoes. The user can control the delay time and the delay feedback.
• The FFTLowPassFilter class is an implementation of the Fast Fourier Transform used to achieve low-pass filtering of the audio. The filtering is based on cut-off frequency and input frequency parameters, which are controlled by the user.
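The following is a minimal sketch of the Effect interface described above. The method names match the design, but the parameter list of updateValues is an assumption made for illustration, as it is not specified in this chapter.

public interface Effect {
    // Receives one block of audio samples, applies this module's
    // processing, and returns the processed block.
    short[] applyEffect(short[] samples);

    // Refreshes the user-controlled parameters of an active module
    // (the parameter representation here is assumed).
    void updateValues(double[] parameters);
}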
3.2 GUI Design
Figure 3.3: GUI Class Diagram
• The GUI Controller is initialized by the Main Controller class and runs on the main thread of the application. It contains methods which are called by the listeners of the GUI components that the user has access to. These methods communicate with the Main Controller and obtain references to both the Audio Controller and the Effects Controller, giving full access to the application's functionality.
• The MainGUI class implements the behaviour of a JFrame. It contains the graphical components and creates the look-and-feel of the application. This class also contains listener methods for all components, which communicate with the GUI Controller when necessary.
Figure 3.4: GUI Screen-shot
Chapter 4
Implementation and Methodology
4.1 Functionality
The application’s functionality imitates a physical pre-amplifier with a built-in effects loop.
The signal is received from the input line interface of the device and intercepted by the application's audio engine. The input engine samples the signal into values which can then be processed and modified.
The audio processing is split into separate modules. A modular architecture allows the programmer to develop processing modules which can represent different hardware units. This way, a single effect such as overdrive can be implemented as a separate module representing a single physical overdrive pedal.
Each processing module is executed in the output engine. A module receives the current signal and processes it according to the selected effect. The processed samples are then returned to the output engine, which broadcasts them to the output line interface of the device. All modules can be accessed and fully modified by the user through the front-end graphical interface.
4.2 Audio Engine
The application's audio engine has been implemented with the use of the Java Sound API.
The Java Sound API is a low-level API for effecting and controlling the input and output of
sound media, including both audio and Musical Instrument Digital Interface (MIDI) data.
The Java Sound API provides explicit control over the capabilities normally required for
sound input and output, in a framework that promotes extensibility and flexibility. The Java
Sound API provides the lowest level of sound support on the Java platform. It provides application
programs with a great amount of control over sound operations, and it is extensible.
(Oracle, 2015)
The Java Sound API provides the TargetDataLine and SourceDataLine interfaces, which handle the low-level pulse code modulation essential for converting an analog signal into a digital one. Once both lines have been opened and started, the application executes a separate thread which receives a stream of bytes representing the audio signal. This stream is acquired through the TargetDataLine.
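As a sketch of how such lines can be obtained and opened with the Java Sound API (the format values match the 16-bit, 44.1 kHz, mono, little-endian configuration described in this chapter, but the class and method names below are illustrative rather than the project's own):

import javax.sound.sampled.*;

public final class LineSetup {
    // Opens and starts the input line in the format used in this chapter:
    // 44.1 kHz, 16-bit, mono, signed, little-endian.
    public static TargetDataLine openMicrophone() throws LineUnavailableException {
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        TargetDataLine line = (TargetDataLine)
                AudioSystem.getLine(new DataLine.Info(TargetDataLine.class, format));
        line.open(format);
        line.start();
        return line;
    }

    // The output line is opened the same way using SourceDataLine.
    public static SourceDataLine openSpeaker() throws LineUnavailableException {
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        SourceDataLine line = (SourceDataLine)
                AudioSystem.getLine(new DataLine.Info(SourceDataLine.class, format));
        line.open(format);
        line.start();
        return line;
    }
}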
byte[] data = new byte[read_length];

public void run() {
    while (true) {
        amount_read = microphone_line.read(data, 0, read_length);
        short[] samples = new short[amount_read / 2];
        int i = 0;
        // Step through the buffer two bytes (one 16-bit sample) at a time.
        while (i < amount_read - 1) {
            ByteBuffer bb = ByteBuffer.allocate(2);
            bb.order(ByteOrder.LITTLE_ENDIAN);
            bb.put(data[i]);
            bb.put(data[i + 1]);
            samples[i / 2] = bb.getShort(0);
            i += 2;
        }
        // Proceed with processing.
    }
}
In the above code, from the MicrophoneInputEngine class, the thread performs the operations to read the input audio signal. On every iteration, the buffer from the input line interface is stored in a byte array called data. The read method also accepts, as parameters, the offset from the beginning of the array and the requested number of bytes to read.
The optimal configuration for audio streaming is to represent a single audio sample with 16 bits, meaning that one sample consists of two bytes. Depending on the order of these bytes, whether they are stored as big or little endian, they can be shifted together to give a single value. Once the byte array is filled, the thread proceeds to extract the samples using a ByteBuffer.
public void playSignal(short[] signal) {
    // Check for active filters.
    // Check for active effects.
    byte[] output_buffer = new byte[read_length];
    int i = 0;
    while (i <= read_length - 1) {
        ByteBuffer bb = ByteBuffer.allocate(2);
        bb.order(ByteOrder.LITTLE_ENDIAN);
        bb.putShort(signal[i / 2]);
        output_buffer[i] = bb.get(0);
        output_buffer[i + 1] = bb.get(1);
        i += 2;
    }
    speaker_line.write(output_buffer, 0, output_buffer.length);
}
After the samples are extracted, the array is passed to the playSignal method in the SpeakerOutputEngine class. The method is executed by the same I/O thread that reads the samples. Once it reaches the playSignal method, the application checks whether or not the user has enabled any effects or filters; if so, the processing modules are applied. This functionality is explained later in this chapter. The samples are then packed back into a byte array and written out to the output line interface of the device, in order to play the sound through a speaker.
4.3 Audio Processor
The Audio Processor consists of an EffectsController class that manages all processing modules available in this application. This controller can communicate with other controllers to provide references to active effects, filters and all parameters that each module requires. Every class used to modify the audio must implement the behaviour of the Effect interface in order to be classified as a processing module.
4.3.1 Delay Effect
As described in a technical report by Engin Zeki (Zeki, 2015), the delay effect is used to shift the sound in time, also affecting the signal's frequency in a linear way. This effect uses two parameters: the length of the delay and the feedback.
• With the length parameter, users configure how far the signal should be shifted from the original input, in milliseconds.
• The feedback parameter acts as a gain level for the processed signal. It is used to modify the volume of the output signal, causing attenuation. Attenuation reduces the power of an audio signal without distorting the waveform, and its value affects the number of echoes generated to play alongside the delayed input signal.
private int delay_length;
private double delay_feedback;
private short[] delayed_signal = new short[max_delay_length];
private short[] processed_signal = new short[signal_length];
private int delay_position;

public short[] applyEffect(short[] signal) {
    for (int i = 0; i < signal.length; i++) {
        processed_signal[i] = delayed_signal[delay_position];
        delayed_signal[delay_position] += signal[i];
        delayed_signal[delay_position] *= delay_feedback;
        delay_position++;
        delay_position %= delay_length;
    }
    return processed_signal;
}
The delayed_signal array is an empty array of short values whose length corresponds to the maximum delay time this effect can have. It is used to shift the input sample in time, based on the delay length specified by the user. Once the sample is delayed, attenuation is applied in order to generate echoes of that same sample.
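To make the relationship between the length parameter and the buffer concrete (our own arithmetic, with an assumed setting): at a 44.1 kHz sample rate, a 300 ms delay corresponds to delay_length = 0.300 × 44100 = 13,230 samples, so the write position wraps around the circular buffer once every 13,230 samples, and each wrap replays, and further attenuates, what was stored one delay period earlier.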
4.3.2 Overdrive Effect
Overdrive is the most popular non-linear effect used by electric guitar players. With physical hardware, overdrive pedals are used to produce the sound of a tube amplifier turned all the way up. The incoming signal from an overdrive pedal is too much for the amplifier's tubes to handle, so it gets clipped and compressed, resulting in a stable, distorted sound. Overdrive pedals can also work as a booster for the signal, slightly distorting the sound and adding more volume, giving more tone than the amplifier itself can produce.
Figure 4.1: Clean Signal (Ryazanov, 2012)
The key ingredient of an overdrive effect is the drive parameter. The amount of drive applied modifies the shape of the signal's waveform and determines how the signal is clipped.
Figure 4.2: Soft Clipping (Ryazanov, 2012)
Low drive values create soft clipping: the signal becomes distorted, but the waveform still resembles a sinusoidal function.
Figure 4.3: Hard Clipping (Ryazanov, 2012)
High drive values create hard clipping, adding more distortion and giving the waveform sharp edges, which makes it closer to a square wave.
Based on the research into electric guitar effects, the overdrive effect implemented in this application has been adapted from a mathematical formula outlined in a technical report by Cheng-Hao Chang (Chang, 2011).
private double drive;
private double k, a, x;

public short[] applyEffect(short[] signal) {
    for (int i = 0; i < signal.length; i++) {
        // Normalize the 16-bit sample to the range [-1, 1]. The cast must
        // apply before the division, otherwise integer division zeroes the signal.
        x = signal[i] / (double) Short.MAX_VALUE;
        a = Math.sin(drive * Math.PI / 2);
        k = 2 * a / (1 - a);
        x = (1 + k) * x / (1 + k * Math.abs(x));
        signal[i] = (short) (x * Short.MAX_VALUE);
    }
    return signal;
}
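To illustrate the shaping function with concrete numbers (our own arithmetic, not figures from Chang's report): with drive = 0.5, a = sin(π/4) ≈ 0.707 and k = 2a/(1 − a) ≈ 4.83, so quiet passages are boosted by roughly a factor of 1 + k ≈ 5.8, while the denominator caps loud samples near ±1. As drive approaches 1, k grows without bound and the curve approaches hard clipping.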
4.3.3 Low-Pass Filtering
In order to process only the sound of the guitar strings, any other undesired noise should be removed from the signal. The low-pass filter eliminates any sound whose frequency is higher than a specified cut-off threshold. A Fast Fourier Transform implementation provides the steps to convert the signal into the frequency domain, where the filter can be applied.
The Fast Fourier Transform (FFT) is another method for calculating the DFT. While it produces
the same result as the other approaches, it is incredibly more efficient, often reducing
the computation time by hundreds. This is the same improvement as flying in a jet aircraft
versus walking! While the FFT only requires a few dozen lines of code, it is one of the most
complicated algorithms in DSP (Smith, 1999).
In this application, the FFT algorithms are implemented using the Apache Commons Math
3.4 API for Java.
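A sketch of how such a filter can be built with Apache Commons Math is shown below. This is our own illustration of the technique rather than the application's actual FFTLowPassFilter code; it assumes the frame length is a power of two, as the library's FFT requires.

import org.apache.commons.math3.complex.Complex;
import org.apache.commons.math3.transform.DftNormalization;
import org.apache.commons.math3.transform.FastFourierTransformer;
import org.apache.commons.math3.transform.TransformType;

public final class FftLowPassSketch {
    // Zeroes every frequency bin above the cut-off and transforms back.
    public static double[] lowPass(double[] frame, double cutoffHz, double sampleRate) {
        FastFourierTransformer fft =
                new FastFourierTransformer(DftNormalization.STANDARD);
        Complex[] spectrum = fft.transform(frame, TransformType.FORWARD);
        int n = spectrum.length;
        int cutoffBin = (int) (cutoffHz * n / sampleRate);
        for (int k = 0; k < n; k++) {
            // Bin k represents frequency k * sampleRate / n; bins above n/2
            // mirror the negative frequencies and must be zeroed symmetrically.
            int freqIndex = (k <= n / 2) ? k : n - k;
            if (freqIndex > cutoffBin) {
                spectrum[k] = Complex.ZERO;
            }
        }
        Complex[] filtered = fft.transform(spectrum, TransformType.INVERSE);
        double[] out = new double[n];
        for (int k = 0; k < n; k++) {
            out[k] = filtered[k].getReal();  // imaginary parts are ~0 for real input
        }
        return out;
    }
}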
4.4 Front-End GUI
The Graphical User Interface has been fully developed using Java Swing components such as JPanel, JLabel, JButton and JSlider, together with icons. The MainGUI class implements the behaviour of a JFrame and is used to configure the layout of the user interface. This class creates the look-and-feel of the application and also implements all the necessary listeners for the Swing components. These listeners communicate with the GUIController class, allowing the user full control over all available functions.
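As a small sketch of this listener pattern (class and method names below are placeholders for illustration; the project's actual GUIController API is not shown in this document):

import javax.swing.JSlider;

final class DriveSliderExample {
    // A hypothetical controller interface standing in for GUIController.
    interface Controller { void setDrive(double drive); }

    static JSlider createDriveSlider(Controller controller) {
        JSlider slider = new JSlider(0, 100, 50);  // drive as a percentage
        // Forward every slider movement to the controller, scaled to [0, 1].
        slider.addChangeListener(e -> controller.setDrive(slider.getValue() / 100.0));
        return slider;
    }
}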
Chapter 5
Conclusion and Future Work
The idea for this project arose during the process of planning and choosing a topic. The author decided to take on the challenge of creating something related to a personal hobby, making the development process more exciting. The convincing factor was that, although there is already a large selection of software that can be used to model the sound of an electric guitar in real time, none of these programs have been developed in the Java programming language.
The initial idea was to develop the application for Android devices, in order to make it more portable than the other available programs. During the research and development process it was discovered that, due to Android's audio architecture, it is at present impossible to create an application with the features that were planned. Diverting from Java, audio processing functions were successfully implemented in the C++ language, although high latency undermined the concept of "real-time", making the application ineffective and impractical. Following this discovery, and after consulting the supervisor of this project, Dr. Simon McLoughlin, a decision was made to continue development on a Windows system, in Java.
Researching and implementing different digital signal processing techniques in both Java and C++ has been very beneficial, immensely expanding and improving the developer's software engineering skills. Developing even a simple audio streaming application was quite a challenging task, as the developer had no experience in digital signal processing before starting this project.
After the unsuccessful and very prolonged experience with Android, the implementation of the audio processing modules turned out to be another very time-consuming, yet exciting, challenge. In order to develop effects that imitate the sound of physical pedals as precisely as possible, extensive research into electric guitar effects was required.
Overall, the development of this project has been a very challenging, time-consuming, exciting and beneficial experience. The challenges and requirements encountered have greatly improved the developer's understanding and skills in software engineering, digital signal processing and sound engineering.
Some possible future work for this project would include the following:
• Adding more effects and audio processing modules to the application.
• Improving the implementation of the existing modules in order to provide the unique tones that can be obtained from effect pedals made by different companies.
• Based on the work carried out in this project and the results achieved, re-developing this application for Android devices once the Android audio architecture has been improved.
Bibliography
P. Beckmann and V. Fung. Designing efficient, real-time audio systems with VisualAudio. Analog Dialogue, 38-11, 2004.
P. Browning. Audio digital signal processing in real time. Technical report, West Virginia University, 1997.
C. Chang. A guitar overdrive/distortion effect of digital signal processing. Technical report, The University of Sydney, 2011.
Oracle. Trail: Sound (The Java Tutorials), 2015. URL https://docs.oracle.com/javase/tutorial/sound/.
Z. Plesac. The past, present and future of Android development, Nov 2015. URL https://infinum.co/the-capsized-eight/articles/the-past-present-and-future-of-android-development.
M. Ryazanov. Clipping compared to limiting, Nov 2012. URL https://commons.wikimedia.org/wiki/File:Clipping_waveform.svg.
S. Smith. The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Publishing, San Diego, Calif., second edition, 1999.
G. Szantog and P. Pv. Android audio's 10 millisecond problem: The Android audio path latency explainer, 2015. URL http://superpowered.com/androidaudiopathlatency.
F. Trebien. A GPU-based real-time modular audio processing system. Technical report, Universidade Federal do Rio Grande do Sul, 2006.
S. Verovets. Tools for audio processing in Android development, Nov 2016. URL https://anadea.info/blog/tools-for-audio-processing-in-android-development.
Wikipedia.com. Preamplifier, Oct 2016. URL https://en.wikipedia.org/wiki/Preamplifier.
xandermar.com. Agile methodology, Oct 2016. URL https://www.xandermar.com/sites/default/files/agile.jpeg.
E. Zeki. Digital modelling of guitar audio effects. Technical report, Middle East Technical University, 2015.