pax_global_header00006660000000000000000000000064134772200070014514gustar00rootroot0000000000000052 comment=16838b374cf9c7e796cc23af280a18ef1e0b115d zytrax-master/000077500000000000000000000000001347722000700137125ustar00rootroot00000000000000zytrax-master/.clang-format000066400000000000000000000073561347722000700163000ustar00rootroot00000000000000# Commented out parameters are those with the same value as base LLVM style # We can uncomment them if we want to change their value, or enforce the # chosen value in case the base style changes (last sync: Clang 6.0.1). --- ### General config, applies to all languages ### BasedOnStyle: LLVM AccessModifierOffset: -4 AlignAfterOpenBracket: DontAlign # AlignConsecutiveAssignments: false # AlignConsecutiveDeclarations: false # AlignEscapedNewlines: Right # AlignOperands: true AlignTrailingComments: false AllowAllParametersOfDeclarationOnNextLine: false # AllowShortBlocksOnASingleLine: false AllowShortCaseLabelsOnASingleLine: true AllowShortFunctionsOnASingleLine: Inline AllowShortIfStatementsOnASingleLine: true # AllowShortLoopsOnASingleLine: false # AlwaysBreakAfterDefinitionReturnType: None # AlwaysBreakAfterReturnType: None # AlwaysBreakBeforeMultilineStrings: false # AlwaysBreakTemplateDeclarations: false # BinPackArguments: true # BinPackParameters: true # BraceWrapping: # AfterClass: false # AfterControlStatement: false # AfterEnum: false # AfterFunction: false # AfterNamespace: false # AfterObjCDeclaration: false # AfterStruct: false # AfterUnion: false # AfterExternBlock: false # BeforeCatch: false # BeforeElse: false # IndentBraces: false # SplitEmptyFunction: true # SplitEmptyRecord: true # SplitEmptyNamespace: true # BreakBeforeBinaryOperators: None # BreakBeforeBraces: Attach # BreakBeforeInheritanceComma: false BreakBeforeTernaryOperators: false # BreakConstructorInitializersBeforeComma: false BreakConstructorInitializers: AfterColon # BreakStringLiterals: true ColumnLimit: 0 # CommentPragmas: '^ IWYU pragma:' # 
CompactNamespaces: false ConstructorInitializerAllOnOneLineOrOnePerLine: true ConstructorInitializerIndentWidth: 8 ContinuationIndentWidth: 8 Cpp11BracedListStyle: false # DerivePointerAlignment: false # DisableFormat: false # ExperimentalAutoDetectBinPacking: false # FixNamespaceComments: true # ForEachMacros: # - foreach # - Q_FOREACH # - BOOST_FOREACH # IncludeBlocks: Preserve IncludeCategories: - Regex: '".*"' Priority: 1 - Regex: '^<.*\.h>' Priority: 2 - Regex: '^<.*' Priority: 3 # IncludeIsMainRegex: '(Test)?$' IndentCaseLabels: true # IndentPPDirectives: None IndentWidth: 4 # IndentWrappedFunctionNames: false # JavaScriptQuotes: Leave # JavaScriptWrapImports: true # KeepEmptyLinesAtTheStartOfBlocks: true # MacroBlockBegin: '' # MacroBlockEnd: '' # MaxEmptyLinesToKeep: 1 # NamespaceIndentation: None # PenaltyBreakAssignment: 2 # PenaltyBreakBeforeFirstCallParameter: 19 # PenaltyBreakComment: 300 # PenaltyBreakFirstLessLess: 120 # PenaltyBreakString: 1000 # PenaltyExcessCharacter: 1000000 # PenaltyReturnTypeOnItsOwnLine: 60 # PointerAlignment: Right # RawStringFormats: # - Delimiter: pb # Language: TextProto # BasedOnStyle: google # ReflowComments: true # SortIncludes: true # SortUsingDeclarations: true # SpaceAfterCStyleCast: false # SpaceAfterTemplateKeyword: true # SpaceBeforeAssignmentOperators: true # SpaceBeforeParens: ControlStatements # SpaceInEmptyParentheses: false # SpacesBeforeTrailingComments: 1 # SpacesInAngles: false # SpacesInContainerLiterals: true # SpacesInCStyleCastParentheses: false # SpacesInParentheses: false # SpacesInSquareBrackets: false TabWidth: 4 UseTab: Always --- ### C++ specific config ### Language: Cpp Standard: Cpp03 --- ### ObjC specific config ### Language: ObjC Standard: Cpp03 ObjCBlockIndentWidth: 4 # ObjCSpaceAfterProperty: false # ObjCSpaceBeforeProtocolList: true --- ### Java specific config ### Language: Java # BreakAfterJavaFieldAnnotations: false ... 
zytrax-master/.gitignore000066400000000000000000000004641347722000700157060ustar00rootroot00000000000000# Prerequisites *.d # Compiled Object files *.slo *.lo *.o *.obj # Precompiled Headers *.gch *.pch # Compiled Dynamic libraries *.so *.dylib *.dll # Fortran module files *.mod *.smod # Compiled Static libraries *.lai *.la *.a *.lib # Executables *.exe *.out *.app *.config *.include *.creator* *.fileszytrax-master/LICENSE000066400000000000000000000020571347722000700147230ustar00rootroot00000000000000MIT License Copyright (c) 2018 Juan Linietsky Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
zytrax-master/Makefile000066400000000000000000000000131347722000700153460ustar00rootroot00000000000000all: sconszytrax-master/README.md000066400000000000000000000026141347722000700151740ustar00rootroot00000000000000![](zytrax_logo.png) # Zytrax ZyTrax is an easy-to-use music sequencer with an interface heavily inspired by '90s "tracker" software (most specifically [Impulse Tracker](https://en.wikipedia.org/wiki/Impulse_Tracker)). While contemporary software that uses this approach exists, it usually has a high entry barrier because it maintains compatibility with old formats. In contrast, ZyTrax starts afresh with a user-friendly approach (no hex numbers, a pure plugin-based architecture, inlined automation envelopes, smart automations, zoomable patterns, and a simple pattern/orderlist layout). ![](zytrax.png) ## Usage Currently, ZyTrax runs only on Windows and Linux/X11. It supports VST2 plugins via Vestige. It should compile on macOS, but no plugin code exists for it (if anyone wants to contribute LV2 support, that would be awesome! I just don't have the time). You can find a tutorial [here](http://zytrax.org/tutorial/). ## Download Head over to the [releases](https://github.com/reduz/zytrax/releases) section. ## Building On Windows, you need to download MSys2, and then GTKmm; instructions are [here](https://wiki.gnome.org/Projects/gtkmm/MSWindows). Make sure to also install Python and SCons from the package manager. To build, type: ``` scons ``` To run: ``` cd bin start zytrax.exe ``` Check the release to see how to package the executable for redistribution (just replace the .exe file). 
zytrax-master/SConstruct000066400000000000000000000063221347722000700157470ustar00rootroot00000000000000import os import sys EnsureSConsVersion(0,14); env = Environment(CPPPATH=['#/globals','#gui','#.'],ENV=os.environ) env.ParseConfig("pkg-config gtkmm-3.0 --libs --cflags") env.Append(CXXFLAGS=["-g3"]) opts = Variables(ARGUMENTS) if (os.getenv("XDG_CURRENT_DESKTOP")!=None): detected_platform="freedesktop" elif (os.getenv("APPDATA")!=None): detected_platform = "windows" else: detected_platform = "" opts.Add(EnumVariable("platform","Platform to build",detected_platform,("windows","osx","freedesktop"))) opts.Add(BoolVariable("enable_rtaudio","Use RtAudio as Sound Driver",True)) opts.Add(BoolVariable("use_jack","Use Jack with RtAudio",False)) opts.Add(BoolVariable("use_pulseaudio","Use Pulseaudio with RtAudio",True)) opts.Add(BoolVariable("use_alsa","Use Alsa with RtAudio and RtMidi",True)) opts.Add(BoolVariable("enable_vst2","Enable VST2",True)) opts.Add(BoolVariable("use_wasapi","Enable WASAPI",True)) opts.Add(BoolVariable("use_directsound","Enable DirectSound",True)) opts.Add(BoolVariable("enable_rtmidi","Use RtMidi as MIDI Driver",True)) opts.Add(BoolVariable("use_winmm","Enable WinMM for RtMidi",True)) opts.Update(env) # update environment Help(opts.GenerateHelpText(env)) # generate help if (detected_platform==""): print("No build platform detected, available platforms: windows, freedesktop, osx") sys.exit() if (env["enable_rtaudio"]): env.Append(CXXFLAGS=["-DRTAUDIO_ENABLED"]) if (env["enable_rtmidi"]): env.Append(CXXFLAGS=["-DRTMIDI_ENABLED"]) if (env["platform"]=="windows"): env.Append(CXXFLAGS=["-DWINDOWS_ENABLED"]) if (env["enable_vst2"]): env.Append(CXXFLAGS=["-DVST2_ENABLED"]) if (env["use_wasapi"]): env.Append(CXXFLAGS=["-D__WINDOWS_WASAPI__"]) if (env["use_directsound"]): env.Append(CXXFLAGS=["-D__WINDOWS_DS__"]) if (env["use_winmm"]): env.Append(CXXFLAGS=["-D__WINDOWS_MM__"]) #env.Append(CXXFLAGS=["-D__WINDOWS_ASIO__"]) 
env.Append(LIBS=["dsound","mfplat","mfuuid","wmcodecdspuuid","ksuser"]) if (env["platform"]=="freedesktop"): env.Append(CXXFLAGS=["-DFREEDESKTOP_ENABLED"]) if (env["enable_vst2"]): env.Append(CXXFLAGS=["-DVST2_ENABLED"]) if (env["use_pulseaudio"]): env.Append(CXXFLAGS=["-D__LINUX_PULSE__"]) env.ParseConfig("pkg-config libpulse --libs --cflags") env.ParseConfig("pkg-config libpulse-simple --libs --cflags") if (env["use_alsa"]): env.Append(CXXFLAGS=["-D__LINUX_ALSA__"]) env.ParseConfig("pkg-config alsa --libs --cflags") if (env["use_jack"]): env.Append(CXXFLAGS=["-D__LINUX_JACK__"]) env.ParseConfig("pkg-config jack --libs --cflags") env.ParseConfig("pkg-config x11 --libs --cflags") env.Append(LIBS=["dl"]) def add_sources(self, sources, filetype, lib_env = None, shared = False): import glob; import string; #if not lib_objects: if not lib_env: lib_env = self if type(filetype) == type(""): dir = self.Dir('.').abspath list = glob.glob(dir + "/"+filetype) for f in list: sources.append( self.Object(f) ) else: for f in filetype: sources.append(self.Object(f)) env.__class__.add_sources=add_sources Export('env') env.libs=[] SConscript('globals/SCsub'); SConscript('dsp/SCsub'); SConscript('engine/SCsub'); SConscript('gui/SCsub'); SConscript('effects/SCsub'); SConscript('drivers/SCsub'); SConscript('bin/SCsub'); zytrax-master/bin/000077500000000000000000000000001347722000700144625ustar00rootroot00000000000000zytrax-master/bin/SCsub000066400000000000000000000002211347722000700154170ustar00rootroot00000000000000Import('env') ctfiles=['zytrax.cpp']; env.Append(LINKFLAGS=["-Wl,--start-group"]) env.Append(LIBS=env.libs) env.Program('zytrax', ctfiles); zytrax-master/bin/zytrax.cpp000066400000000000000000000160321347722000700165310ustar00rootroot00000000000000#include #ifdef VST2_ENABLED #include "drivers/vst2/factory_wrapper_vst2.h" #endif #include "effects/effects.h" #include "engine/song.h" #include "globals/json_file.h" #include "gui/interface.h" #ifdef RTAUDIO_ENABLED #include 
"drivers/rtaudio/sound_driver_rtaudio.h" #endif #ifdef RTMIDI_ENABLED #include "drivers/rtmidi/midi_driver_rtmidi.h" #endif int main(int argc, char *argv[]) { AudioEffectFactory effect_factory; #ifdef VST2_ENABLED AudioEffectProvider *provider_vst2 = create_vst2_provider(); effect_factory.add_provider(provider_vst2); #endif #ifdef RTAUDIO_ENABLED register_rtaudio_driver(); #endif #ifdef RTMIDI_ENABLED register_rtmidi_driver(); #endif auto app = Gtk::Application::create(argc, argv, "org.gtkmm.examples.base"); Theme theme; KeyBindings key_bindings; /* Time to load the Settings */ { String path = SettingsDialog::get_settings_path() + "/settings.json"; JSON::Node node; int use_driver_index = SoundDriverManager::get_driver_count() ? 0 : -1; int use_midi_in_driver_index = MIDIDriverManager::get_input_driver_count() ? 0 : -1; if (load_json(path, node) == OK) { if (node.has("audio")) { //audio JSON::Node audio_node = node.get("audio"); std::string driver_id = audio_node.get("id").toString(); for (int i = 0; i < SoundDriverManager::get_driver_count(); i++) { SoundDriver *driver = SoundDriverManager::get_driver(i); if (driver->get_id() == driver_id.c_str()) { use_driver_index = i; } break; } int mixing_hz = audio_node.get("mixing_hz"); int buffer_size = audio_node.get("buffer_size"); int block_size = audio_node.get("block_size"); if (mixing_hz >= 0 && mixing_hz < SoundDriverManager::MIX_FREQ_MAX) { SoundDriverManager::set_mix_frequency(SoundDriverManager::MixFrequency(mixing_hz)); } if (buffer_size >= 0 && buffer_size < SoundDriverManager::BUFFER_SIZE_MAX) { SoundDriverManager::set_buffer_size(SoundDriverManager::BufferSize(buffer_size)); } if (block_size >= 0 && block_size < SoundDriverManager::BUFFER_SIZE_MAX) { SoundDriverManager::set_step_buffer_size(SoundDriverManager::BufferSize(block_size)); } std::string midi_driver_id = audio_node.get("midi_in_id").toString(); for (int i = 0; i < MIDIDriverManager::get_input_driver_count(); i++) { MIDIInputDriver *driver = 
MIDIDriverManager::get_input_driver(i); if (driver->get_id() == midi_driver_id.c_str()) { use_midi_in_driver_index = i; } break; } } if (node.has("plugins")) { //plugins JSON::Node plugin_node = node.get("plugins"); for (int i = 0; i < AudioEffectProvider::MAX_SCAN_PATHS; i++) { std::string key = String::num(i).ascii().get_data(); if (plugin_node.has(key)) { std::string path = plugin_node.get(key).toString(); String pathu; pathu.parse_utf8(path.c_str()); AudioEffectProvider::set_scan_path(i, pathu); } } } if (node.has("theme")) { //theme JSON::Node theme_node = node.get("theme"); if (theme_node.has("font")) { theme.font.parse_utf8(theme_node.get("font").toString().c_str()); } if (theme_node.has("colors")) { JSON::Node colors_node = theme_node.get("colors"); for (int i = 0; i < Theme::COLOR_MAX; i++) { if (colors_node.has(theme.color_names[i])) { JSON::Node array = colors_node.get(theme.color_names[i]); theme.colors[i].set_red(array.get(0).toFloat()); theme.colors[i].set_green(array.get(1).toFloat()); theme.colors[i].set_blue(array.get(2).toFloat()); } } if (theme_node.has("use_dark_theme") && bool(theme_node.get("use_dark_theme").toBool())) { theme.color_scheme = Theme::COLOR_SCHEME_DARK; } } } if (node.has("key_bindings")) { //key bindings JSON::Node bindings = node.get("key_bindings"); if (bindings.has("keys")) { JSON::Node array = bindings.get("keys"); for (int i = 0; i < array.getCount(); i++) { JSON::Node bind = array.get(i); std::string name = bind.get("name").toString(); KeyBindings::KeyBind name_index = KeyBindings::BIND_MAX; for (int j = 0; j < KeyBindings::BIND_MAX; j++) { KeyBindings::KeyBind b = KeyBindings::KeyBind(j); if (name == key_bindings.get_keybind_name(b)) { name_index = b; break; } } if (name_index != KeyBindings::BIND_MAX) { int key = bind.get("key").toInt(); int state = bind.get("mods").toInt(); key_bindings.set_keybind(name_index, key, state); } } } } if (node.has("default_commands")) { //default commands JSON::Node def_commands = 
node.get("default_commands"); for (int i = 0; i < def_commands.getCount(); i++) { JSON::Node command = def_commands.get(i); int index = command.get("index").toInt(); String name; name.parse_utf8(command.get("identifier").toString().c_str()); char c = char(command.get("command").toInt()); SettingsDialog::set_default_command(index, name, c); } } } SoundDriverManager::init_driver(use_driver_index); MIDIDriverManager::init_input_driver(use_midi_in_driver_index); } register_effects(&effect_factory); /* make it dark */ if (theme.color_scheme == Theme::COLOR_SCHEME_DARK) { g_object_set(gtk_settings_get_default(), "gtk-application-prefer-dark-theme", TRUE, NULL); } /* Load the cached plugins */ { //plugins String path = SettingsDialog::get_settings_path() + "/plugins.json"; JSON::Node node; if (load_json(path, node) == OK) { JSON::Node plugin_array = node.get("plugins"); for (int i = 0; i < plugin_array.getCount(); i++) { JSON::Node plugin_node = plugin_array.get(i); AudioEffectInfo info; info.caption.parse_utf8(plugin_node.get("caption").toString().c_str()); info.description.parse_utf8(plugin_node.get("description").toString().c_str()); info.author.parse_utf8(plugin_node.get("author").toString().c_str()); info.category.parse_utf8(plugin_node.get("category").toString().c_str()); info.unique_ID.parse_utf8(plugin_node.get("unique_id").toString().c_str()); info.icon_string.parse_utf8(plugin_node.get("icon_string").toString().c_str()); info.version.parse_utf8(plugin_node.get("version").toString().c_str()); info.provider_caption.parse_utf8(plugin_node.get("provider_caption").toString().c_str()); info.provider_id.parse_utf8(plugin_node.get("provider_id").toString().c_str()); info.path.parse_utf8(plugin_node.get("path").toString().c_str()); info.synth = plugin_node.get("synth").toBool(); info.has_ui = plugin_node.get("has_ui").toBool(); effect_factory.add_audio_effect(info); } } } /* Initialize the UI */ Interface window(app.operator->(), &effect_factory, &theme, &key_bindings); 
window.set_default_size(1280, 720); #ifdef VST2_ENABLED window.add_editor_plugin_function(get_vst2_editor_function()); #endif int ret = app->run(window); SoundDriverManager::finish_driver(); #ifdef VST2_ENABLED delete provider_vst2; #endif #ifdef RTAUDIO_ENABLED cleanup_rtaudio_driver(); #endif return ret; } zytrax-master/build.sh000066400000000000000000000003621347722000700153460ustar00rootroot00000000000000export PATH=/mingw64/bin:/usr/local/bin:/usr/bin:/bin:/c/Windows/System32:/c/Windows:/c/Windows/System32/Wbem:/c/Windows/System32/WindowsPowerShell/v1.0/ export PKG_CONFIG_PATH="/mingw64/lib/pkgconfig:/mingw64/share/pkgconfig" /usr/bin/scons zytrax-master/drivers/000077500000000000000000000000001347722000700153705ustar00rootroot00000000000000zytrax-master/drivers/SCsub000066400000000000000000000006041347722000700163320ustar00rootroot00000000000000Import('env'); Export('env'); targets=[] if (env["enable_vst2"]): env.add_sources(targets,"vst2/*.cpp") if (env["enable_rtaudio"]): env.add_sources(targets,"rtaudio/*.cpp") env.add_sources(targets,"rtaudio/rtaudio/*.cpp") if (env["enable_rtmidi"]): env.add_sources(targets,"rtmidi/*.cpp") env.add_sources(targets,"rtmidi/rtmidi/*.cpp") env.libs+=env.Library('drivers', targets); zytrax-master/drivers/lv2/000077500000000000000000000000001347722000700160735ustar00rootroot00000000000000zytrax-master/drivers/lv2/audio_effect_factory_lv2.cpp000066400000000000000000000340021347722000700235250ustar00rootroot00000000000000#include "audio_effect_factory_lv2.h" #include "engine/sound_driver_manager.h" #ifdef UNIX_ENABLED //process bool AudioEffectLV2::process(const AudioFrame2 *p_in, AudioFrame2 *p_out, const Event *p_events, bool p_prev_active) { return false; } //info bool AudioEffectLV2::has_synth() const { return false; } const AudioEffectInfo *AudioEffectLV2::get_info() const { } int AudioEffectLV2::get_control_port_count() const { return 0; } ControlPort *AudioEffectLV2::get_control_port(int p_port) { return 0; } const 
ControlPort *AudioEffectLV2::get_control_port(int p_port) const { return NULL; } void AudioEffectLV2::_clear() { if (instance) { lilv_instance_free(instance); instance = NULL; } if (instance_mono_pair) { lilv_instance_free(instance_mono_pair); instance_mono_pair = NULL; } controls.clear(); control_outs.clear(); active = false; } void AudioEffectLV2::reset() { _clear(); const LilvPlugins *plugins = lilv_world_get_all_plugins(world); const LilvPlugin *plugin = NULL; LilvIter *plug_itr = lilv_plugins_begin(plugins); for (int i = 0; i < lilv_plugins_size(plugins); ++i) { const LilvPlugin *plugin = lilv_plugins_get(plugins, plug_itr); String unique_id; unique_id.parse_utf8(lilv_node_as_uri(lilv_plugin_get_uri(plugin))); if (unique_id == p_info->unique_ID) { break; } plug_itr = lilv_plugins_next(plugins, plug_itr); } ERR_FAIL_COND_V(!plugin, NULL); //sorry not found int ports = lilv_plugin_get_num_ports(plugin); int input_count = 0; int output_count = 0; int event_input = 0; int control_in = 0; int control_out = 0; buff_size=SoundDriverManager::get_internal_buffer_size(); int midi_buff_size = 4096; bool use_midi = false; bool use_old_midi_api = false; //check buffer sizes for (int j = 0; j < ports; j++) { const LilvPort *port = lilv_plugin_get_port_by_index(plugin, j); if (lilv_port_is_a(plugin, port, uris.audio_port)) { LilvNode* min_size = lilv_port_get(plugin,port,AudioEffectProviderLV2::singleton->uris.port_minimum_size); if (min_size && lilv_node_is_int(min_size)) { int size = lilv_node_as_int(min_size); buff_size = MIN(size,buff_size); } } if ((lilv_port_is_a(plugin, port, uris.event_port) || lilv_port_is_a(plugin, port, uris.atom_port)) && lilv_port_is_a(plugin, port, uris.input_port)) { LilvNode* min_size = lilv_port_get(plugin,port,AudioEffectProviderLV2::singleton->uris.port_minimum_size); if (min_size && lilv_node_is_int(min_size)) { int size = lilv_node_as_int(min_size); midi_buff_size = MIN(size,midi_buff_size); use_midi=true; } use_old_midi_api = 
lilv_port_is_a(plugin, port, uris.event_port); } } in_buff_left.resize(buff_size); in_buff_right.resize(buff_size); out_buff_left.resize(buff_size); out_buff_right.resize(buff_size); if (use_midi) { evbuf = lv2_evbuf_new( midi_buff_size, use_old_midi_api ? LV2_EVBUF_EVENT : LV2_EVBUF_ATOM, jalv->map.map(jalv->map.handle, lilv_node_as_string(jalv->nodes.atom_Chunk)), jalv->map.map(jalv->map.handle, lilv_node_as_string(jalv->nodes.atom_Sequence))); jalv->plugin, port->lilv_port, jalv->nodes.rsz_minimumSize); if (min_size && lilv_node_is_int(min_size)) { port->buf_size = lilv_node_as_int(min_size); jalv->opts.buffer_size = MAX( jalv->opts.buffer_size, port->buf_size * N_BUFFER_CYCLES); } lilv_node_free(min_size); if (lilv_port_is_a(plugin, port, uris.output_port)) { output_count++; } else if (lilv_port_is_a(plugin, port, uris.input_port)) { input_count++; } } if ((lilv_port_is_a(plugin, port, uris.event_port) || lilv_port_is_a(plugin, port, uris.atom_port)) && lilv_port_is_a(plugin, port, uris.input_port)) { event_input++; } if (lilv_port_is_a(plugin, port, uris.control_port)) { if (lilv_port_is_a(plugin, port, uris.output_port)) { control_out++; } else if (lilv_port_is_a(plugin, port, uris.input_port)) { control_in++; } } } int plugins_to_create = output_count; mix_rate = SoundDriverManager::get_driver()->get_mix_rate(); LV2_Feature *feat = NULL; instance = lilv_plugin_instantiate(plugin, mix_rate, &feat); if (!instance) return; //well, failed if (output_count == 1) { instance_mono_pair = lilv_plugin_instantiate(plugin, mix_rate, &feat); } } /* Load/Save */ Error AudioEffectLV2::save(TreeSaver *p_tree) { return OK; } Error AudioEffectLV2::load(TreeLoader *p_tree) { return OK; } AudioEffectLV2::AudioEffectLV2() { active = false; instance = NULL; instance_mono_pair = NULL; mix_rate = 44100; evbuf = NULL; } AudioEffectLV2::~AudioEffectLV2() { _clear(); } AudioEffect *AudioEffectProviderLV2::create_effects(const AudioEffectInfo *p_info) { return ((AudioEffectProviderLV2 
*)p_info->provider)->create_effect(p_info); } AudioEffect *AudioEffectProviderLV2::create_effect(const AudioEffectInfo *p_info) { AudioEffectLV2 *effect = new AudioEffectLV2; effect->info = *p_info; effect->reset(); if (!effect->active) { delete effect; effect = NULL; } return effect; return NULL; } void AudioEffectProviderLV2::scan_effects(AudioEffectFactory *p_factory) { lilv_world_load_all(world); const LilvPlugins *plugins = lilv_world_get_all_plugins(world); int valid_effects = 0; LilvIter *plug_itr = lilv_plugins_begin(plugins); for (int i = 0; i < lilv_plugins_size(plugins); ++i) { const LilvPlugin *plugin = lilv_plugins_get(plugins, plug_itr); //lets see if this is useful int ports = lilv_plugin_get_num_ports(plugin); int input_count = 0; int output_count = 0; int event_input = 0; int event_output = 0; for (int j = 0; j < ports; j++) { const LilvPort *port = lilv_plugin_get_port_by_index(plugin, j); if (lilv_port_is_a(plugin, port, uris.audio_port)) { if (lilv_port_is_a(plugin, port, uris.output_port)) { output_count++; } else if (lilv_port_is_a(plugin, port, uris.input_port)) { input_count++; } } if ((lilv_port_is_a(plugin, port, uris.event_port) || lilv_port_is_a(plugin, port, uris.atom_port)) && lilv_port_is_a(plugin, port, uris.input_port)) { event_input++; } if ((lilv_port_is_a(plugin, port, uris.event_port) || lilv_port_is_a(plugin, port, uris.atom_port)) && lilv_port_is_a(plugin, port, uris.output_port)) { event_output++; } } bool valid = false; if (input_count == 0 && event_input == 1) { //generator if (output_count == 1 || output_count == 2) { valid = true; } } else if (input_count == output_count) { if (output_count == 1 || output_count == 2) { valid = true; } } if (event_output > 0) valid = false; //not sure what this is, but i guess its not useful if (event_input > 1) valid = false; //only one input supported LilvNodes *features = lilv_plugin_get_required_features(plugin); int count = lilv_nodes_size(features); lilv_nodes_free(features); if 
(count > 0) { valid = false; } if (valid) { AudioEffectInfo info; const LilvNode *node = lilv_plugin_get_uri(plugin); info.unique_ID.parse_utf8(lilv_node_as_uri(node)); node = lilv_plugin_get_name(plugin); info.caption.parse_utf8(lilv_node_as_string(node)); node = lilv_plugin_class_get_label(lilv_plugin_get_class(plugin)); info.category.parse_utf8(lilv_node_as_string(node)); info.provider = this; info.synth = event_input == 1; info.version = 0; p_factory->add_audio_effect(info); printf("plugin.uri = %s\n", info.unique_ID.utf8().get_data()); printf("plugin.caption = %s\n", info.caption.utf8().get_data()); printf("plugin.category = %s\n", info.category.utf8().get_data()); valid_effects++; } plug_itr = lilv_plugins_next(plugins, plug_itr); } printf("Total valid effects: %i\n", valid_effects); #if 0 /* Cache URIs for concepts we'll use */ nodes.atom_AtomPort = lilv_new_uri(world, LV2_ATOM__AtomPort); nodes.atom_Chunk = lilv_new_uri(world, LV2_ATOM__Chunk); nodes.atom_Float = lilv_new_uri(world, LV2_ATOM__Float); nodes.atom_Path = lilv_new_uri(world, LV2_ATOM__Path); nodes.atom_Sequence = lilv_new_uri(world, LV2_ATOM__Sequence); nodes.ev_EventPort = lilv_new_uri(world, LV2_EVENT__EventPort); nodes.lv2_AudioPort = lilv_new_uri(world, LV2_CORE__AudioPort); nodes.lv2_CVPort = lilv_new_uri(world, LV2_CORE__CVPort); nodes.lv2_ControlPort = lilv_new_uri(world, LV2_CORE__ControlPort); nodes.lv2_InputPort = lilv_new_uri(world, LV2_CORE__InputPort); nodes.lv2_OutputPort = lilv_new_uri(world, LV2_CORE__OutputPort); nodes.lv2_connectionOptional = lilv_new_uri(world, LV2_CORE__connectionOptional); nodes.lv2_control = lilv_new_uri(world, LV2_CORE__control); nodes.lv2_default = lilv_new_uri(world, LV2_CORE__default); nodes.lv2_enumeration = lilv_new_uri(world, LV2_CORE__enumeration); nodes.lv2_integer = lilv_new_uri(world, LV2_CORE__integer); nodes.lv2_maximum = lilv_new_uri(world, LV2_CORE__maximum); nodes.lv2_minimum = lilv_new_uri(world, LV2_CORE__minimum); nodes.lv2_name = 
lilv_new_uri(world, LV2_CORE__name); nodes.lv2_reportsLatency = lilv_new_uri(world, LV2_CORE__reportsLatency); nodes.lv2_sampleRate = lilv_new_uri(world, LV2_CORE__sampleRate); nodes.lv2_symbol = lilv_new_uri(world, LV2_CORE__symbol); nodes.lv2_toggled = lilv_new_uri(world, LV2_CORE__toggled); nodes.midi_MidiEvent = lilv_new_uri(world, LV2_MIDI__MidiEvent); nodes.pg_group = lilv_new_uri(world, LV2_PORT_GROUPS__group); nodes.pprops_logarithmic = lilv_new_uri(world, LV2_PORT_PROPS__logarithmic); nodes.pprops_notOnGUI = lilv_new_uri(world, LV2_PORT_PROPS__notOnGUI); nodes.pprops_rangeSteps = lilv_new_uri(world, LV2_PORT_PROPS__rangeSteps); nodes.pset_Preset = lilv_new_uri(world, LV2_PRESETS__Preset); nodes.pset_bank = lilv_new_uri(world, LV2_PRESETS__bank); nodes.rdfs_comment = lilv_new_uri(world, LILV_NS_RDFS "comment"); nodes.rdfs_label = lilv_new_uri(world, LILV_NS_RDFS "label"); nodes.rdfs_range = lilv_new_uri(world, LILV_NS_RDFS "range"); nodes.rsz_minimumSize = lilv_new_uri(world, LV2_RESIZE_PORT__minimumSize); nodes.work_interface = lilv_new_uri(world, LV2_WORKER__interface); nodes.work_schedule = lilv_new_uri(world, LV2_WORKER__schedule); nodes.end = NULL; #endif } String AudioEffectProviderLV2::get_name() const { return "LADSPAv2"; } /* uint32_t AudioEffectProviderLV2::uri_to_id(LV2_URI_Map_Callback_Data callback_data, const char *map, const char *uri) { AudioEffectProviderLV2 *self = (AudioEffectProviderLV2 *)callback_data; // Jalv* jalv = (Jalv*)callback_data; // zix_sem_wait(&jalv->symap_lock); const LV2_URID id = self->symap_map(self->symap, uri); // zix_sem_post(&jalv->symap_lock); return id; } */ AudioEffectProviderLV2 *AudioEffectProviderLV2::singleton=NULL; AudioEffectProviderLV2::AudioEffectProviderLV2(int *argc, char ***argv) { singleton=this; world = lilv_world_new(); uris.atom_port = lilv_new_uri(world, LILV_URI_ATOM_PORT); uris.audio_port = lilv_new_uri(world, LILV_URI_AUDIO_PORT); uris.control_port = lilv_new_uri(world, LILV_URI_CONTROL_PORT); 
uris.cv_port = lilv_new_uri(world, LILV_URI_CV_PORT); uris.event_port = lilv_new_uri(world, LILV_URI_EVENT_PORT); uris.input_port = lilv_new_uri(world, LILV_URI_INPUT_PORT); uris.midi_event = lilv_new_uri(world, LILV_URI_MIDI_EVENT); uris.output_port = lilv_new_uri(world, LILV_URI_OUTPUT_PORT); uris.port = lilv_new_uri(world, LILV_URI_PORT); uris.port_minimum_size = lilv_new_uri(world, LV2_RESIZE_PORT__minimumSize); uris.atom_chunk = lilv_new_uri(world, LV2_ATOM__Chunk); uris.atom_sequence = lilv_new_uri(world, LV2_ATOM__Sequence); #if 0 suil_init(argc, argv, SUIL_ARG_NONE); symap = symap_new(); lv2_atom_forge_init(&forge, &map); env = serd_env_new(NULL); serd_env_set_prefix_from_strings( env, (const uint8_t *)"patch", (const uint8_t *)LV2_PATCH_PREFIX); serd_env_set_prefix_from_strings( env, (const uint8_t *)"time", (const uint8_t *)LV2_TIME_PREFIX); serd_env_set_prefix_from_strings( env, (const uint8_t *)"xsd", (const uint8_t *)NS_XSD); sratom = sratom_new(&jalv.map); ui_sratom = sratom_new(&jalv.map); sratom_set_env(sratom, jalv.env); sratom_set_env(ui_sratom, env); midi_event_id = uri_to_id(this, "http://lv2plug.in/ns/ext/event", LV2_MIDI__MidiEvent); urids.atom_Float = symap_map(symap, LV2_ATOM__Float); urids.atom_Int = symap_map(symap, LV2_ATOM__Int); urids.atom_Object = symap_map(symap, LV2_ATOM__Object); urids.atom_Path = symap_map(symap, LV2_ATOM__Path); urids.atom_String = symap_map(symap, LV2_ATOM__String); urids.atom_eventTransfer = symap_map(symap, LV2_ATOM__eventTransfer); urids.bufsz_maxBlockLength = symap_map(symap, LV2_BUF_SIZE__maxBlockLength); urids.bufsz_minBlockLength = symap_map(symap, LV2_BUF_SIZE__minBlockLength); urids.bufsz_sequenceSize = symap_map(symap, LV2_BUF_SIZE__sequenceSize); urids.log_Error = symap_map(symap, LV2_LOG__Error); urids.log_Trace = symap_map(symap, LV2_LOG__Trace); urids.log_Warning = symap_map(symap, LV2_LOG__Warning); urids.midi_MidiEvent = symap_map(symap, LV2_MIDI__MidiEvent); urids.param_sampleRate = 
symap_map(symap, LV2_PARAMETERS__sampleRate); urids.patch_Get = symap_map(symap, LV2_PATCH__Get); urids.patch_Put = symap_map(symap, LV2_PATCH__Put); urids.patch_Set = symap_map(symap, LV2_PATCH__Set); urids.patch_body = symap_map(symap, LV2_PATCH__body); urids.patch_property = symap_map(symap, LV2_PATCH__property); urids.patch_value = symap_map(symap, LV2_PATCH__value); urids.time_Position = symap_map(symap, LV2_TIME__Position); urids.time_bar = symap_map(symap, LV2_TIME__bar); urids.time_barBeat = symap_map(symap, LV2_TIME__barBeat); urids.time_beatUnit = symap_map(symap, LV2_TIME__beatUnit); urids.time_beatsPerBar = symap_map(symap, LV2_TIME__beatsPerBar); urids.time_beatsPerMinute = symap_map(symap, LV2_TIME__beatsPerMinute); urids.time_frame = symap_map(symap, LV2_TIME__frame); urids.time_speed = symap_map(symap, LV2_TIME__speed); urids.ui_updateRate = symap_map(symap, LV2_UI__updateRate); temp_dir = "/tmp/jalv-XXXXXX/"; memset(&jalv, '\0', sizeof(Jalv)); jalv.prog_name = argv[0]; jalv.block_length = 4096; /* Should be set by backend */ jalv.midi_buf_size = 1024; /* Should be set by backend */ jalv.play_state = JALV_PAUSED; jalv.bpm = 120.0f; jalv.control_in = (uint32_t)-1; #endif } #endif zytrax-master/drivers/lv2/audio_effect_factory_lv2.h000066400000000000000000000100451347722000700231730ustar00rootroot00000000000000#ifndef AUDIO_EFFECT_FACTORY_LV2_H #define AUDIO_EFFECT_FACTORY_LV2_H #ifdef UNIX_ENABLED #include "lilv/lilv.h" #include "suil/suil.h" #include "engine/audio_effect.h" class AudioEffectLV2 : public AudioEffect { Vector in_buff_left; Vector in_buff_right; Vector out_buff_left; Vector out_buff_right; LV2_Evbuf* evbuf; int buff_size; int mix_rate; AudioEffectInfo info; bool active; LilvInstance *instance; LilvInstance *instance_mono_pair; //for mono plugins, need a pair Vector controls; Vector control_outs; //outs are unused public: //process virtual bool process(const AudioFrame2 *p_in, AudioFrame2 *p_out, const Event *p_events, bool 
			p_prev_active);

	//info
	virtual bool has_synth() const;
	virtual const AudioEffectInfo *get_info() const;

	virtual int get_control_port_count() const;
	virtual ControlPort *get_control_port(int p_port);
	const ControlPort *get_control_port(int p_port) const;

	virtual void reset();

	/* Load/Save */
	virtual Error save(TreeSaver *p_tree);
	virtual Error load(TreeLoader *p_tree);

	AudioEffectLV2();
	virtual ~AudioEffectLV2();
};

class AudioEffectProviderLV2 : public AudioEffectProvider {

#if 0
	struct {
		LV2_URID atom_Float;
		LV2_URID atom_Int;
		LV2_URID atom_Object;
		LV2_URID atom_Path;
		LV2_URID atom_String;
		LV2_URID atom_eventTransfer;
		LV2_URID bufsz_maxBlockLength;
		LV2_URID bufsz_minBlockLength;
		LV2_URID bufsz_sequenceSize;
		LV2_URID log_Error;
		LV2_URID log_Trace;
		LV2_URID log_Warning;
		LV2_URID midi_MidiEvent;
		LV2_URID param_sampleRate;
		LV2_URID patch_Get;
		LV2_URID patch_Put;
		LV2_URID patch_Set;
		LV2_URID patch_body;
		LV2_URID patch_property;
		LV2_URID patch_value;
		LV2_URID time_Position;
		LV2_URID time_bar;
		LV2_URID time_barBeat;
		LV2_URID time_beatUnit;
		LV2_URID time_beatsPerBar;
		LV2_URID time_beatsPerMinute;
		LV2_URID time_frame;
		LV2_URID time_speed;
		LV2_URID ui_updateRate;
	} urid;

	struct {
		LilvNode *atom_AtomPort;
		LilvNode *atom_Chunk;
		LilvNode *atom_Float;
		LilvNode *atom_Path;
		LilvNode *atom_Sequence;
		LilvNode *ev_EventPort;
		LilvNode *lv2_AudioPort;
		LilvNode *lv2_CVPort;
		LilvNode *lv2_ControlPort;
		LilvNode *lv2_InputPort;
		LilvNode *lv2_OutputPort;
		LilvNode *lv2_connectionOptional;
		LilvNode *lv2_control;
		LilvNode *lv2_default;
		LilvNode *lv2_enumeration;
		LilvNode *lv2_integer;
		LilvNode *lv2_maximum;
		LilvNode *lv2_minimum;
		LilvNode *lv2_name;
		LilvNode *lv2_reportsLatency;
		LilvNode *lv2_sampleRate;
		LilvNode *lv2_symbol;
		LilvNode *lv2_toggled;
		LilvNode *midi_MidiEvent;
		LilvNode *pg_group;
		LilvNode *pprops_logarithmic;
		LilvNode *pprops_notOnGUI;
		LilvNode *pprops_rangeSteps;
		LilvNode *pset_Preset;
		LilvNode *pset_bank;
		LilvNode *rdfs_comment;
		LilvNode *rdfs_label;
		LilvNode *rdfs_range;
		LilvNode *rsz_minimumSize;
		LilvNode *work_interface;
		LilvNode *work_schedule;
		LilvNode *end; ///< NULL terminator for easy freeing of entire structure
	} nodes;

	Symap *symap;
	LV2_Atom_Forge forge;
	LV2_URID_Map map;
	SerdEnv env;
	Sratom *sratom; ///< Atom serialiser
	Sratom *ui_sratom; ///< Atom serialiser for UI thread
	uint32_t midi_event_id; ///< MIDI event class ID in event context
	String temp_dir;
#endif

	struct {
		LilvNode *atom_port;
		LilvNode *audio_port;
		LilvNode *control_port;
		LilvNode *cv_port;
		LilvNode *event_port;
		LilvNode *input_port;
		LilvNode *midi_event;
		LilvNode *output_port;
		LilvNode *port;
		LilvNode *port_minimum_size;
		LilvNode *atom_chunk;
		LilvNode *atom_sequence;
	} uris;

	LilvWorld *world;

	AudioEffect *create_effect(const AudioEffectInfo *p_info);
	static AudioEffect *create_effects(const AudioEffectInfo *p_info);
	//static uint32_t uri_to_id(LV2_URI_Map_Callback_Data callback_data, const char *map, const char *uri);

	friend class AudioEffectLV2;
	static AudioEffectProviderLV2 *singleton;

public:
	virtual void scan_effects(AudioEffectFactory *p_factory);
	virtual String get_name() const;

	AudioEffectProviderLV2(int *argc, char ***argv);
};

#endif

#endif // AUDIO_EFFECT_FACTORY_LV2_H

zytrax-master/drivers/rtaudio/rtaudio/RtAudio.cpp

/************************************************************************/
/*! \class RtAudio
    \brief Realtime audio i/o C++ classes.

    RtAudio provides a common API (Application Programming Interface)
    for realtime audio input/output across Linux (native ALSA, Jack,
    and OSS), Macintosh OS X (CoreAudio and Jack), and Windows
    (DirectSound, ASIO and WASAPI) operating systems.
    RtAudio GitHub site: https://github.com/thestk/rtaudio
    RtAudio WWW site: http://www.music.mcgill.ca/~gary/rtaudio/

    RtAudio: realtime audio i/o C++ classes
    Copyright (c) 2001-2019 Gary P. Scavone

    Permission is hereby granted, free of charge, to any person
    obtaining a copy of this software and associated documentation files
    (the "Software"), to deal in the Software without restriction,
    including without limitation the rights to use, copy, modify, merge,
    publish, distribute, sublicense, and/or sell copies of the Software,
    and to permit persons to whom the Software is furnished to do so,
    subject to the following conditions:

    The above copyright notice and this permission notice shall be
    included in all copies or substantial portions of the Software.

    Any person wishing to distribute modifications to the Software is
    asked to send the modifications to the original developer so that
    they can be incorporated into the canonical version.  This is,
    however, not a binding provision of this license.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
    EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
    MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
    IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
    CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
    TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
    SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/************************************************************************/

// RtAudio: Version 5.1.0

#include "RtAudio.h"
#include <iostream>
#include <cstdlib>
#include <cstring>
#include <climits>
#include <cmath>
#include <algorithm>

// Static variable definitions.
const unsigned int RtApi::MAX_SAMPLE_RATES = 14;
const unsigned int RtApi::SAMPLE_RATES[] = {
  4000, 5512, 8000, 9600, 11025, 16000, 22050,
  32000, 44100, 48000, 88200, 96000, 176400, 192000
};

#if defined(__WINDOWS_DS__) || defined(__WINDOWS_ASIO__) || defined(__WINDOWS_WASAPI__)
  #define MUTEX_INITIALIZE(A) InitializeCriticalSection(A)
  #define MUTEX_DESTROY(A)    DeleteCriticalSection(A)
  #define MUTEX_LOCK(A)       EnterCriticalSection(A)
  #define MUTEX_UNLOCK(A)     LeaveCriticalSection(A)

  #include "tchar.h"

  static std::string convertCharPointerToStdString(const char *text)
  {
    return std::string(text);
  }

  static std::string convertCharPointerToStdString(const wchar_t *text)
  {
    int length = WideCharToMultiByte(CP_UTF8, 0, text, -1, NULL, 0, NULL, NULL);
    std::string s( length-1, '\0' );
    WideCharToMultiByte(CP_UTF8, 0, text, -1, &s[0], length, NULL, NULL);
    return s;
  }

#elif defined(__LINUX_ALSA__) || defined(__LINUX_PULSE__) || defined(__UNIX_JACK__) || defined(__LINUX_OSS__) || defined(__MACOSX_CORE__)
  // pthread API
  #define MUTEX_INITIALIZE(A) pthread_mutex_init(A, NULL)
  #define MUTEX_DESTROY(A)    pthread_mutex_destroy(A)
  #define MUTEX_LOCK(A)       pthread_mutex_lock(A)
  #define MUTEX_UNLOCK(A)     pthread_mutex_unlock(A)
#else
  #define MUTEX_INITIALIZE(A) abs(*A) // dummy definitions
  #define MUTEX_DESTROY(A)    abs(*A) // dummy definitions
#endif

// *************************************************** //
//
// RtAudio definitions.
//
// *************************************************** //

std::string RtAudio :: getVersion( void )
{
  return RTAUDIO_VERSION;
}

// Define API names and display names.
// Must be in same order as API enum.
extern "C" {
const char* rtaudio_api_names[][2] = {
  { "unspecified" , "Unknown" },
  { "alsa"        , "ALSA" },
  { "pulse"       , "Pulse" },
  { "oss"         , "OpenSoundSystem" },
  { "jack"        , "Jack" },
  { "core"        , "CoreAudio" },
  { "wasapi"      , "WASAPI" },
  { "asio"        , "ASIO" },
  { "ds"          , "DirectSound" },
  { "dummy"       , "Dummy" },
};
const unsigned int rtaudio_num_api_names =
  sizeof(rtaudio_api_names)/sizeof(rtaudio_api_names[0]);

// The order here will control the order of RtAudio's API search in
// the constructor.
extern "C" const RtAudio::Api rtaudio_compiled_apis[] = {
#if defined(__UNIX_JACK__)
  RtAudio::UNIX_JACK,
#endif
#if defined(__LINUX_PULSE__)
  RtAudio::LINUX_PULSE,
#endif
#if defined(__LINUX_ALSA__)
  RtAudio::LINUX_ALSA,
#endif
#if defined(__LINUX_OSS__)
  RtAudio::LINUX_OSS,
#endif
#if defined(__WINDOWS_ASIO__)
  RtAudio::WINDOWS_ASIO,
#endif
#if defined(__WINDOWS_WASAPI__)
  RtAudio::WINDOWS_WASAPI,
#endif
#if defined(__WINDOWS_DS__)
  RtAudio::WINDOWS_DS,
#endif
#if defined(__MACOSX_CORE__)
  RtAudio::MACOSX_CORE,
#endif
#if defined(__RTAUDIO_DUMMY__)
  RtAudio::RTAUDIO_DUMMY,
#endif
  RtAudio::UNSPECIFIED,
};
extern "C" const unsigned int rtaudio_num_compiled_apis =
  sizeof(rtaudio_compiled_apis)/sizeof(rtaudio_compiled_apis[0])-1;
}

// This is a compile-time check that rtaudio_num_api_names == RtAudio::NUM_APIS.
// If the build breaks here, check that they match.
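The compile-time check that follows relies on template specialization: only the `true` specialization of the helper class exposes a public constructor, so any `false` instantiation fails to compile. A minimal self-contained sketch of the same pattern (class and function names here are illustrative, not RtAudio's):

```cpp
// Only the true specialization can be constructed, so instantiating
// StaticCheck<false> is a compile error -- the same trick the code
// below uses to verify the API-name table matches the API enum.
template <bool b> class StaticCheck { private: StaticCheck() {} };
template <> class StaticCheck<true> { public: StaticCheck() {} };

// Compiles only because its condition holds at compile time.
int checked_table_size() {
  StaticCheck<(3 * 2 == 6)>();
  return 6;
}
```

With C++11 and later the same intent is usually written as `static_assert(cond, "msg")`; the specialization trick shown here works on older compilers too.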
template<bool b> class StaticAssert { private: StaticAssert() {} };
template<> class StaticAssert<true>{ public: StaticAssert() {} };
class StaticAssertions { StaticAssertions() {
  StaticAssert<rtaudio_num_api_names == RtAudio::NUM_APIS>();
}};

void RtAudio :: getCompiledApi( std::vector<RtAudio::Api> &apis )
{
  apis = std::vector<RtAudio::Api>(rtaudio_compiled_apis,
                                   rtaudio_compiled_apis + rtaudio_num_compiled_apis);
}

std::string RtAudio :: getApiName( RtAudio::Api api )
{
  if (api < 0 || api >= RtAudio::NUM_APIS)
    return "";
  return rtaudio_api_names[api][0];
}

std::string RtAudio :: getApiDisplayName( RtAudio::Api api )
{
  if (api < 0 || api >= RtAudio::NUM_APIS)
    return "Unknown";
  return rtaudio_api_names[api][1];
}

RtAudio::Api RtAudio :: getCompiledApiByName( const std::string &name )
{
  unsigned int i=0;
  for (i = 0; i < rtaudio_num_compiled_apis; ++i)
    if (name == rtaudio_api_names[rtaudio_compiled_apis[i]][0])
      return rtaudio_compiled_apis[i];
  return RtAudio::UNSPECIFIED;
}

void RtAudio :: openRtApi( RtAudio::Api api )
{
  if ( rtapi_ )
    delete rtapi_;
  rtapi_ = 0;

#if defined(__UNIX_JACK__)
  if ( api == UNIX_JACK )
    rtapi_ = new RtApiJack();
#endif
#if defined(__LINUX_ALSA__)
  if ( api == LINUX_ALSA )
    rtapi_ = new RtApiAlsa();
#endif
#if defined(__LINUX_PULSE__)
  if ( api == LINUX_PULSE )
    rtapi_ = new RtApiPulse();
#endif
#if defined(__LINUX_OSS__)
  if ( api == LINUX_OSS )
    rtapi_ = new RtApiOss();
#endif
#if defined(__WINDOWS_ASIO__)
  if ( api == WINDOWS_ASIO )
    rtapi_ = new RtApiAsio();
#endif
#if defined(__WINDOWS_WASAPI__)
  if ( api == WINDOWS_WASAPI )
    rtapi_ = new RtApiWasapi();
#endif
#if defined(__WINDOWS_DS__)
  if ( api == WINDOWS_DS )
    rtapi_ = new RtApiDs();
#endif
#if defined(__MACOSX_CORE__)
  if ( api == MACOSX_CORE )
    rtapi_ = new RtApiCore();
#endif
#if defined(__RTAUDIO_DUMMY__)
  if ( api == RTAUDIO_DUMMY )
    rtapi_ = new RtApiDummy();
#endif
}

RtAudio :: RtAudio( RtAudio::Api api )
{
  rtapi_ = 0;

  if ( api != UNSPECIFIED ) {
    // Attempt to open the specified API.
    openRtApi( api );
    if ( rtapi_ ) return;

    // No compiled support for specified API value.
    // Issue a debug warning and continue as if no API was specified.
    std::cerr << "\nRtAudio: no compiled support for specified API argument!\n" << std::endl;
  }

  // Iterate through the compiled APIs and return as soon as we find
  // one with at least one device or we reach the end of the list.
  std::vector< RtAudio::Api > apis;
  getCompiledApi( apis );
  for ( unsigned int i=0; i<apis.size(); i++ ) {
    openRtApi( apis[i] );
    if ( rtapi_ && rtapi_->getDeviceCount() ) break;
  }

  if ( rtapi_ ) return;

  // It should not be possible to get here because the preprocessor
  // definition __RTAUDIO_DUMMY__ is automatically defined if no
  // API-specific definitions are passed to the compiler. But just in
  // case something weird happens, we'll throw an error.
  std::string errorText = "\nRtAudio: no compiled API support found ... critical error!!\n\n";
  throw( RtAudioError( errorText, RtAudioError::UNSPECIFIED ) );
}

RtAudio :: ~RtAudio()
{
  if ( rtapi_ )
    delete rtapi_;
}

void RtAudio :: openStream( RtAudio::StreamParameters *outputParameters,
                            RtAudio::StreamParameters *inputParameters,
                            RtAudioFormat format, unsigned int sampleRate,
                            unsigned int *bufferFrames,
                            RtAudioCallback callback, void *userData,
                            RtAudio::StreamOptions *options,
                            RtAudioErrorCallback errorCallback )
{
  return rtapi_->openStream( outputParameters, inputParameters, format,
                             sampleRate, bufferFrames, callback,
                             userData, options, errorCallback );
}

// *************************************************** //
//
// Public RtApi definitions (see end of file for
// private or protected utility functions).
//
// *************************************************** //

RtApi :: RtApi()
{
  stream_.state = STREAM_CLOSED;
  stream_.mode = UNINITIALIZED;
  stream_.apiHandle = 0;
  stream_.userBuffer[0] = 0;
  stream_.userBuffer[1] = 0;
  MUTEX_INITIALIZE( &stream_.mutex );
  showWarnings_ = true;
  firstErrorOccurred_ = false;
}

RtApi :: ~RtApi()
{
  MUTEX_DESTROY( &stream_.mutex );
}

void RtApi :: openStream( RtAudio::StreamParameters *oParams,
                          RtAudio::StreamParameters *iParams,
                          RtAudioFormat format, unsigned int sampleRate,
                          unsigned int *bufferFrames,
                          RtAudioCallback callback, void *userData,
                          RtAudio::StreamOptions *options,
                          RtAudioErrorCallback errorCallback )
{
  if ( stream_.state != STREAM_CLOSED ) {
    errorText_ = "RtApi::openStream: a stream is already open!";
    error( RtAudioError::INVALID_USE );
    return;
  }

  // Clear stream information potentially left from a previously open stream.
  clearStreamInfo();

  if ( oParams && oParams->nChannels < 1 ) {
    errorText_ = "RtApi::openStream: a non-NULL output StreamParameters structure cannot have an nChannels value less than one.";
    error( RtAudioError::INVALID_USE );
    return;
  }

  if ( iParams && iParams->nChannels < 1 ) {
    errorText_ = "RtApi::openStream: a non-NULL input StreamParameters structure cannot have an nChannels value less than one.";
    error( RtAudioError::INVALID_USE );
    return;
  }

  if ( oParams == NULL && iParams == NULL ) {
    errorText_ = "RtApi::openStream: input and output StreamParameters structures are both NULL!";
    error( RtAudioError::INVALID_USE );
    return;
  }

  if ( formatBytes(format) == 0 ) {
    errorText_ = "RtApi::openStream: 'format' parameter value is undefined.";
    error( RtAudioError::INVALID_USE );
    return;
  }

  unsigned int nDevices = getDeviceCount();
  unsigned int oChannels = 0;
  if ( oParams ) {
    oChannels = oParams->nChannels;
    if ( oParams->deviceId >= nDevices ) {
      errorText_ = "RtApi::openStream: output device parameter value is invalid.";
      error( RtAudioError::INVALID_USE );
      return;
    }
  }

  unsigned int iChannels = 0;
  if ( iParams ) {
    iChannels = iParams->nChannels;
    if ( iParams->deviceId >= nDevices ) {
      errorText_ = "RtApi::openStream: input device parameter value is invalid.";
      error( RtAudioError::INVALID_USE );
      return;
    }
  }

  bool result;

  if ( oChannels > 0 ) {

    result = probeDeviceOpen( oParams->deviceId, OUTPUT, oChannels, oParams->firstChannel,
                              sampleRate, format, bufferFrames, options );
    if ( result == false ) {
      error( RtAudioError::SYSTEM_ERROR );
      return;
    }
  }

  if ( iChannels > 0 ) {

    result = probeDeviceOpen( iParams->deviceId, INPUT, iChannels, iParams->firstChannel,
                              sampleRate, format, bufferFrames, options );
    if ( result == false ) {
      if ( oChannels > 0 ) closeStream();
      error( RtAudioError::SYSTEM_ERROR );
      return;
    }
  }

  stream_.callbackInfo.callback = (void *) callback;
  stream_.callbackInfo.userData = userData;
  stream_.callbackInfo.errorCallback = (void *) errorCallback;

  if ( options ) options->numberOfBuffers = stream_.nBuffers;
  stream_.state = STREAM_STOPPED;
}

unsigned int RtApi :: getDefaultInputDevice( void )
{
  // Should be implemented in subclasses if possible.
  return 0;
}

unsigned int RtApi :: getDefaultOutputDevice( void )
{
  // Should be implemented in subclasses if possible.
  return 0;
}

void RtApi :: closeStream( void )
{
  // MUST be implemented in subclasses!
  return;
}

bool RtApi :: probeDeviceOpen( unsigned int /*device*/, StreamMode /*mode*/, unsigned int /*channels*/,
                               unsigned int /*firstChannel*/, unsigned int /*sampleRate*/,
                               RtAudioFormat /*format*/, unsigned int * /*bufferSize*/,
                               RtAudio::StreamOptions * /*options*/ )
{
  // MUST be implemented in subclasses!
  return FAILURE;
}

void RtApi :: tickStreamTime( void )
{
  // Subclasses that do not provide their own implementation of
  // getStreamTime should call this function once per buffer I/O to
  // provide basic stream time support.
  stream_.streamTime += ( stream_.bufferSize * 1.0 / stream_.sampleRate );

#if defined( HAVE_GETTIMEOFDAY )
  gettimeofday( &stream_.lastTickTimestamp, NULL );
#endif
}

long RtApi :: getStreamLatency( void )
{
  verifyStream();

  long totalLatency = 0;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX )
    totalLatency = stream_.latency[0];
  if ( stream_.mode == INPUT || stream_.mode == DUPLEX )
    totalLatency += stream_.latency[1];

  return totalLatency;
}

double RtApi :: getStreamTime( void )
{
  verifyStream();

#if defined( HAVE_GETTIMEOFDAY )
  // Return a very accurate estimate of the stream time by
  // adding in the elapsed time since the last tick.
  struct timeval then;
  struct timeval now;

  if ( stream_.state != STREAM_RUNNING || stream_.streamTime == 0.0 )
    return stream_.streamTime;

  gettimeofday( &now, NULL );
  then = stream_.lastTickTimestamp;
  return stream_.streamTime +
    ((now.tv_sec + 0.000001 * now.tv_usec) -
     (then.tv_sec + 0.000001 * then.tv_usec));
#else
  return stream_.streamTime;
#endif
}

void RtApi :: setStreamTime( double time )
{
  verifyStream();

  if ( time >= 0.0 )
    stream_.streamTime = time;
#if defined( HAVE_GETTIMEOFDAY )
  gettimeofday( &stream_.lastTickTimestamp, NULL );
#endif
}

unsigned int RtApi :: getStreamSampleRate( void )
{
  verifyStream();
  return stream_.sampleRate;
}

// *************************************************** //
//
// OS/API-specific methods.
//
// *************************************************** //

#if defined(__MACOSX_CORE__)

// The OS X CoreAudio API is designed to use a separate callback
// procedure for each of its audio devices.  A single RtAudio duplex
// stream using two different devices is supported here, though it
// cannot be guaranteed to always behave correctly because we cannot
// synchronize these two callbacks.
//
// A property listener is installed for over/underrun information.
// However, no functionality is currently provided to allow property
// listeners to trigger user handlers because it is unclear what could
// be done if a critical stream parameter (buffer size, sample rate,
// device disconnect) notification arrived.  The listeners entail
// quite a bit of extra code and most likely, a user program wouldn't
// be prepared for the result anyway.  However, we do provide a flag
// to the client callback function to inform of an over/underrun.

// A structure to hold various information related to the CoreAudio API
// implementation.
struct CoreHandle {
  AudioDeviceID id[2];    // device ids
#if defined( MAC_OS_X_VERSION_10_5 ) && ( MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_5 )
  AudioDeviceIOProcID procId[2];
#endif
  UInt32 iStream[2];      // device stream index (or first if using multiple)
  UInt32 nStreams[2];     // number of streams to use
  bool xrun[2];
  char *deviceBuffer;
  pthread_cond_t condition;
  int drainCounter;       // Tracks callback counts when draining
  bool internalDrain;     // Indicates if stop is initiated from callback or not.

  CoreHandle()
    :deviceBuffer(0), drainCounter(0), internalDrain(false) { nStreams[0] = 1; nStreams[1] = 1; id[0] = 0; id[1] = 0; xrun[0] = false; xrun[1] = false; }
};

RtApiCore:: RtApiCore()
{
#if defined( AVAILABLE_MAC_OS_X_VERSION_10_6_AND_LATER )
  // This is a largely undocumented but absolutely necessary
  // requirement starting with OS-X 10.6.  If not called, queries and
  // updates to various audio device properties are not handled
  // correctly.
  CFRunLoopRef theRunLoop = NULL;
  AudioObjectPropertyAddress property = { kAudioHardwarePropertyRunLoop,
                                          kAudioObjectPropertyScopeGlobal,
                                          kAudioObjectPropertyElementMaster };
  OSStatus result = AudioObjectSetPropertyData( kAudioObjectSystemObject, &property, 0, NULL, sizeof(CFRunLoopRef), &theRunLoop);
  if ( result != noErr ) {
    errorText_ = "RtApiCore::RtApiCore: error setting run loop property!";
    error( RtAudioError::WARNING );
  }
#endif
}

RtApiCore :: ~RtApiCore()
{
  // The subclass destructor gets called before the base class
  // destructor, so close an existing stream before deallocating
  // apiDeviceId memory.
  if ( stream_.state != STREAM_CLOSED ) closeStream();
}

unsigned int RtApiCore :: getDeviceCount( void )
{
  // Find out how many audio devices there are, if any.
  UInt32 dataSize;
  AudioObjectPropertyAddress propertyAddress = { kAudioHardwarePropertyDevices, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
  OSStatus result = AudioObjectGetPropertyDataSize( kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize );
  if ( result != noErr ) {
    errorText_ = "RtApiCore::getDeviceCount: OS-X error getting device info!";
    error( RtAudioError::WARNING );
    return 0;
  }

  return dataSize / sizeof( AudioDeviceID );
}

unsigned int RtApiCore :: getDefaultInputDevice( void )
{
  unsigned int nDevices = getDeviceCount();
  if ( nDevices <= 1 ) return 0;

  AudioDeviceID id;
  UInt32 dataSize = sizeof( AudioDeviceID );
  AudioObjectPropertyAddress property = { kAudioHardwarePropertyDefaultInputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
  OSStatus result = AudioObjectGetPropertyData( kAudioObjectSystemObject, &property, 0, NULL, &dataSize, &id );
  if ( result != noErr ) {
    errorText_ = "RtApiCore::getDefaultInputDevice: OS-X system error getting device.";
    error( RtAudioError::WARNING );
    return 0;
  }

  dataSize *= nDevices;
  AudioDeviceID deviceList[ nDevices ];
  property.mSelector = kAudioHardwarePropertyDevices;
  result =
    AudioObjectGetPropertyData( kAudioObjectSystemObject, &property, 0, NULL, &dataSize, (void *) &deviceList );
  if ( result != noErr ) {
    errorText_ = "RtApiCore::getDefaultInputDevice: OS-X system error getting device IDs.";
    error( RtAudioError::WARNING );
    return 0;
  }

  for ( unsigned int i=0; i<nDevices; i++ )
    if ( id == deviceList[i] ) return i;
  return 0;
}

RtAudio::DeviceInfo RtApiCore :: getDeviceInfo( unsigned int device )
{
  RtAudio::DeviceInfo info;
  info.probed = false;

  // Get device ID
  unsigned int nDevices = getDeviceCount();
  if ( nDevices == 0 ) {
    errorText_ = "RtApiCore::getDeviceInfo: no devices found!";
    error( RtAudioError::INVALID_USE );
    return info;
  }

  if ( device >= nDevices ) {
    errorText_ = "RtApiCore::getDeviceInfo: device ID is invalid!";
    error( RtAudioError::INVALID_USE );
    return info;
  }

  AudioDeviceID deviceList[ nDevices ];
  UInt32 dataSize = sizeof( AudioDeviceID ) * nDevices;
  AudioObjectPropertyAddress property = { kAudioHardwarePropertyDevices,
                                          kAudioObjectPropertyScopeGlobal,
                                          kAudioObjectPropertyElementMaster };
  OSStatus result = AudioObjectGetPropertyData( kAudioObjectSystemObject, &property,
                                                0, NULL, &dataSize, (void *) &deviceList );
  if ( result != noErr ) {
    errorText_ = "RtApiCore::getDeviceInfo: OS-X system error getting device IDs.";
    error( RtAudioError::WARNING );
    return info;
  }

  AudioDeviceID id = deviceList[ device ];

  // Get the device name.
  info.name.erase();
  CFStringRef cfname;
  dataSize = sizeof( CFStringRef );
  property.mSelector = kAudioObjectPropertyManufacturer;
  result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &cfname );
  if ( result != noErr ) {
    errorStream_ << "RtApiCore::probeDeviceInfo: system error (" << getErrorCode( result ) << ") getting device manufacturer.";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  //const char *mname = CFStringGetCStringPtr( cfname, CFStringGetSystemEncoding() );
  int length = CFStringGetLength(cfname);
  char *mname = (char *)malloc(length * 3 + 1);
#if defined( UNICODE ) || defined( _UNICODE )
  CFStringGetCString(cfname, mname, length * 3 + 1, kCFStringEncodingUTF8);
#else
  CFStringGetCString(cfname, mname, length * 3 + 1, CFStringGetSystemEncoding());
#endif
  info.name.append( (const char *)mname, strlen(mname) );
  info.name.append( ": " );
  CFRelease( cfname );
  free(mname);

  property.mSelector = kAudioObjectPropertyName;
  result =
    AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &cfname );
  if ( result != noErr ) {
    errorStream_ << "RtApiCore::probeDeviceInfo: system error (" << getErrorCode( result ) << ") getting device name.";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  //const char *name = CFStringGetCStringPtr( cfname, CFStringGetSystemEncoding() );
  length = CFStringGetLength(cfname);
  char *name = (char *)malloc(length * 3 + 1);
#if defined( UNICODE ) || defined( _UNICODE )
  CFStringGetCString(cfname, name, length * 3 + 1, kCFStringEncodingUTF8);
#else
  CFStringGetCString(cfname, name, length * 3 + 1, CFStringGetSystemEncoding());
#endif
  info.name.append( (const char *)name, strlen(name) );
  CFRelease( cfname );
  free(name);

  // Get the output stream "configuration".
  AudioBufferList *bufferList = nil;
  property.mSelector = kAudioDevicePropertyStreamConfiguration;
  property.mScope = kAudioDevicePropertyScopeOutput;
  //  property.mElement = kAudioObjectPropertyElementWildcard;
  dataSize = 0;
  result = AudioObjectGetPropertyDataSize( id, &property, 0, NULL, &dataSize );
  if ( result != noErr || dataSize == 0 ) {
    errorStream_ << "RtApiCore::getDeviceInfo: system error (" << getErrorCode( result ) << ") getting output stream configuration info for device (" << device << ").";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  // Allocate the AudioBufferList.
  bufferList = (AudioBufferList *) malloc( dataSize );
  if ( bufferList == NULL ) {
    errorText_ = "RtApiCore::getDeviceInfo: memory error allocating output AudioBufferList.";
    error( RtAudioError::WARNING );
    return info;
  }

  result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, bufferList );
  if ( result != noErr || dataSize == 0 ) {
    free( bufferList );
    errorStream_ << "RtApiCore::getDeviceInfo: system error (" << getErrorCode( result ) << ") getting output stream configuration for device (" << device << ").";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  // Get output channel information.
  unsigned int i, nStreams = bufferList->mNumberBuffers;
  for ( i=0; i<nStreams; i++ )
    info.outputChannels += bufferList->mBuffers[i].mNumberChannels;
  free( bufferList );

  // Get the input stream "configuration".
  property.mScope = kAudioDevicePropertyScopeInput;
  result = AudioObjectGetPropertyDataSize( id, &property, 0, NULL, &dataSize );
  if ( result != noErr || dataSize == 0 ) {
    errorStream_ << "RtApiCore::getDeviceInfo: system error (" << getErrorCode( result ) << ") getting input stream configuration info for device (" << device << ").";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  // Allocate the AudioBufferList.
  bufferList = (AudioBufferList *) malloc( dataSize );
  if ( bufferList == NULL ) {
    errorText_ = "RtApiCore::getDeviceInfo: memory error allocating input AudioBufferList.";
    error( RtAudioError::WARNING );
    return info;
  }

  result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, bufferList );
  if (result != noErr || dataSize == 0) {
    free( bufferList );
    errorStream_ << "RtApiCore::getDeviceInfo: system error (" << getErrorCode( result ) << ") getting input stream configuration for device (" << device << ").";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  // Get input channel information.
  nStreams = bufferList->mNumberBuffers;
  for ( i=0; i<nStreams; i++ )
    info.inputChannels += bufferList->mBuffers[i].mNumberChannels;
  free( bufferList );

  // If device opens for both playback and capture, we determine the channels.
  if ( info.outputChannels > 0 && info.inputChannels > 0 )
    info.duplexChannels = (info.outputChannels > info.inputChannels) ? info.inputChannels : info.outputChannels;

  // Probe the device sample rates.
  bool isInput = false;
  if ( info.outputChannels == 0 ) isInput = true;

  // Determine the supported sample rates.
  property.mSelector = kAudioDevicePropertyAvailableNominalSampleRates;
  if ( isInput == false ) property.mScope = kAudioDevicePropertyScopeOutput;
  result = AudioObjectGetPropertyDataSize( id, &property, 0, NULL, &dataSize );
  if ( result != kAudioHardwareNoError || dataSize == 0 ) {
    errorStream_ << "RtApiCore::getDeviceInfo: system error (" << getErrorCode( result ) << ") getting sample rate info.";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  UInt32 nRanges = dataSize / sizeof( AudioValueRange );
  AudioValueRange rangeList[ nRanges ];
  result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &rangeList );
  if ( result != kAudioHardwareNoError ) {
    errorStream_ << "RtApiCore::getDeviceInfo: system error (" << getErrorCode( result ) << ") getting sample rates.";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  // The sample rate reporting mechanism is a bit of a mystery.  It
  // seems that it can either return individual rates or a range of
  // rates.  I assume that if the min / max range values are the same,
  // then that represents a single supported rate and if the min / max
  // range values are different, the device supports an arbitrary
  // range of values (though there might be multiple ranges, so we'll
  // use the most conservative range).
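The conservative-range handling described in the comment above can be sketched in isolation: when a device reports only a min/max pair instead of discrete rates, keep the entries of a fixed candidate table that fall inside that range. The table below repeats the values of `RtApi::SAMPLE_RATES`; the function name is illustrative, not RtAudio's:

```cpp
#include <vector>

// Same candidate values as RtApi::SAMPLE_RATES.
static const unsigned int kCandidateRates[] = {
  4000, 5512, 8000, 9600, 11025, 16000, 22050,
  32000, 44100, 48000, 88200, 96000, 176400, 192000
};

// Keep only the candidates inside the device-reported [lo, hi] range,
// mirroring the haveValueRange branch of the probe below.
std::vector<unsigned int> ratesInRange( unsigned int lo, unsigned int hi )
{
  std::vector<unsigned int> out;
  for ( unsigned int r : kCandidateRates )
    if ( r >= lo && r <= hi )
      out.push_back( r );
  return out;
}
```

For a device reporting a 44100-96000 range this yields 44100, 48000, 88200, and 96000; a range that misses every table entry yields an empty list, which the probe below treats as an error.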
  Float64 minimumRate = 1.0, maximumRate = 10000000000.0;
  bool haveValueRange = false;
  info.sampleRates.clear();
  for ( UInt32 i=0; i<nRanges; i++ ) {
    if ( rangeList[i].mMinimum == rangeList[i].mMaximum ) {
      unsigned int tmpSr = (unsigned int) rangeList[i].mMinimum;
      info.sampleRates.push_back( tmpSr );

      if ( !info.preferredSampleRate || ( tmpSr <= 48000 && tmpSr > info.preferredSampleRate ) )
        info.preferredSampleRate = tmpSr;

    } else {
      haveValueRange = true;
      if ( rangeList[i].mMinimum > minimumRate ) minimumRate = rangeList[i].mMinimum;
      if ( rangeList[i].mMaximum < maximumRate ) maximumRate = rangeList[i].mMaximum;
    }
  }

  if ( haveValueRange ) {
    for ( unsigned int k=0; k<MAX_SAMPLE_RATES; k++ ) {
      if ( SAMPLE_RATES[k] >= (unsigned int) minimumRate && SAMPLE_RATES[k] <= (unsigned int) maximumRate ) {
        info.sampleRates.push_back( SAMPLE_RATES[k] );

        if ( !info.preferredSampleRate || ( SAMPLE_RATES[k] <= 48000 && SAMPLE_RATES[k] > info.preferredSampleRate ) )
          info.preferredSampleRate = SAMPLE_RATES[k];
      }
    }
  }

  // Sort and remove any redundant values
  std::sort( info.sampleRates.begin(), info.sampleRates.end() );
  info.sampleRates.erase( unique( info.sampleRates.begin(), info.sampleRates.end() ), info.sampleRates.end() );

  if ( info.sampleRates.size() == 0 ) {
    errorStream_ << "RtApiCore::probeDeviceInfo: No supported sample rates found for device (" << device << ").";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  // CoreAudio always uses 32-bit floating point data for PCM streams.
  // Thus, any other "physical" formats supported by the device are of
  // no interest to the client.
  info.nativeFormats = RTAUDIO_FLOAT32;

  if ( info.outputChannels > 0 )
    if ( getDefaultOutputDevice() == device ) info.isDefaultOutput = true;
  if ( info.inputChannels > 0 )
    if ( getDefaultInputDevice() == device ) info.isDefaultInput = true;

  info.probed = true;
  return info;
}

static OSStatus callbackHandler( AudioDeviceID inDevice,
                                 const AudioTimeStamp* /*inNow*/,
                                 const AudioBufferList* inInputData,
                                 const AudioTimeStamp* /*inInputTime*/,
                                 AudioBufferList* outOutputData,
                                 const AudioTimeStamp* /*inOutputTime*/,
                                 void* infoPointer )
{
  CallbackInfo *info = (CallbackInfo *) infoPointer;

  RtApiCore *object = (RtApiCore *) info->object;
  if ( object->callbackEvent( inDevice, inInputData, outOutputData ) == false )
    return kAudioHardwareUnspecifiedError;
  else
    return kAudioHardwareNoError;
}

static OSStatus xrunListener( AudioObjectID /*inDevice*/,
                              UInt32 nAddresses,
                              const AudioObjectPropertyAddress properties[],
                              void* handlePointer )
{
  CoreHandle *handle = (CoreHandle *) handlePointer;
  for ( UInt32 i=0; i<nAddresses; i++ ) {
    if ( properties[i].mSelector == kAudioDeviceProcessorOverload ) {
      if ( properties[i].mScope == kAudioDevicePropertyScopeInput )
        handle->xrun[1] = true;
      else
        handle->xrun[0] = true;
    }
  }

  return kAudioHardwareNoError;
}

static OSStatus rateListener( AudioObjectID inDevice,
                              UInt32 /*nAddresses*/,
                              const AudioObjectPropertyAddress /*properties*/[],
                              void* ratePointer )
{
  Float64 *rate = (Float64 *) ratePointer;
  UInt32 dataSize = sizeof( Float64 );
  AudioObjectPropertyAddress property = { kAudioDevicePropertyNominalSampleRate,
                                          kAudioObjectPropertyScopeGlobal,
                                          kAudioObjectPropertyElementMaster };
  AudioObjectGetPropertyData( inDevice, &property, 0, NULL, &dataSize, rate );
  return kAudioHardwareNoError;
}

bool RtApiCore :: probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                                   unsigned int firstChannel, unsigned int sampleRate,
                                   RtAudioFormat format, unsigned int *bufferSize,
                                   RtAudio::StreamOptions *options )
{
  // Get device ID
  unsigned int nDevices = getDeviceCount();
  if ( nDevices == 0 ) {
    // This should not happen because a check is made before this function is called.
    errorText_ = "RtApiCore::probeDeviceOpen: no devices found!";
    return FAILURE;
  }

  if ( device >= nDevices ) {
    // This should not happen because a check is made before this function is called.
    errorText_ = "RtApiCore::probeDeviceOpen: device ID is invalid!";
    return FAILURE;
  }

  AudioDeviceID deviceList[ nDevices ];
  UInt32 dataSize = sizeof( AudioDeviceID ) * nDevices;
  AudioObjectPropertyAddress property = { kAudioHardwarePropertyDevices,
                                          kAudioObjectPropertyScopeGlobal,
                                          kAudioObjectPropertyElementMaster };
  OSStatus result = AudioObjectGetPropertyData( kAudioObjectSystemObject, &property,
                                                0, NULL, &dataSize, (void *) &deviceList );
  if ( result != noErr ) {
    errorText_ = "RtApiCore::probeDeviceOpen: OS-X system error getting device IDs.";
    return FAILURE;
  }

  AudioDeviceID id = deviceList[ device ];

  // Setup for stream mode.
  bool isInput = false;
  if ( mode == INPUT ) {
    isInput = true;
    property.mScope = kAudioDevicePropertyScopeInput;
  }
  else
    property.mScope = kAudioDevicePropertyScopeOutput;

  // Get the stream "configuration".
  AudioBufferList	*bufferList = nil;
  dataSize = 0;
  property.mSelector = kAudioDevicePropertyStreamConfiguration;
  result = AudioObjectGetPropertyDataSize( id, &property, 0, NULL, &dataSize );
  if ( result != noErr || dataSize == 0 ) {
    errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") getting stream configuration info for device (" << device << ").";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // Allocate the AudioBufferList.
bufferList = (AudioBufferList *) malloc( dataSize ); if ( bufferList == NULL ) { errorText_ = "RtApiCore::probeDeviceOpen: memory error allocating AudioBufferList."; return FAILURE; } result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, bufferList ); if (result != noErr || dataSize == 0) { free( bufferList ); errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") getting stream configuration for device (" << device << ")."; errorText_ = errorStream_.str(); return FAILURE; } // Search for one or more streams that contain the desired number of // channels. CoreAudio devices can have an arbitrary number of // streams and each stream can have an arbitrary number of channels. // For each stream, a single buffer of interleaved samples is // provided. RtAudio prefers the use of one stream of interleaved // data or multiple consecutive single-channel streams. However, we // now support multiple consecutive multi-channel streams of // interleaved data as well. UInt32 iStream, offsetCounter = firstChannel; UInt32 nStreams = bufferList->mNumberBuffers; bool monoMode = false; bool foundStream = false; // First check that the device supports the requested number of // channels. UInt32 deviceChannels = 0; for ( iStream=0; iStreammBuffers[iStream].mNumberChannels; if ( deviceChannels < ( channels + firstChannel ) ) { free( bufferList ); errorStream_ << "RtApiCore::probeDeviceOpen: the device (" << device << ") does not support the requested channel count."; errorText_ = errorStream_.str(); return FAILURE; } // Look for a single stream meeting our needs. 
UInt32 firstStream, streamCount = 1, streamChannels = 0, channelOffset = 0; for ( iStream=0; iStreammBuffers[iStream].mNumberChannels; if ( streamChannels >= channels + offsetCounter ) { firstStream = iStream; channelOffset = offsetCounter; foundStream = true; break; } if ( streamChannels > offsetCounter ) break; offsetCounter -= streamChannels; } // If we didn't find a single stream above, then we should be able // to meet the channel specification with multiple streams. if ( foundStream == false ) { monoMode = true; offsetCounter = firstChannel; for ( iStream=0; iStreammBuffers[iStream].mNumberChannels; if ( streamChannels > offsetCounter ) break; offsetCounter -= streamChannels; } firstStream = iStream; channelOffset = offsetCounter; Int32 channelCounter = channels + offsetCounter - streamChannels; if ( streamChannels > 1 ) monoMode = false; while ( channelCounter > 0 ) { streamChannels = bufferList->mBuffers[++iStream].mNumberChannels; if ( streamChannels > 1 ) monoMode = false; channelCounter -= streamChannels; streamCount++; } } free( bufferList ); // Determine the buffer size. AudioValueRange bufferRange; dataSize = sizeof( AudioValueRange ); property.mSelector = kAudioDevicePropertyBufferFrameSizeRange; result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &bufferRange ); if ( result != noErr ) { errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") getting buffer size range for device (" << device << ")."; errorText_ = errorStream_.str(); return FAILURE; } if ( bufferRange.mMinimum > *bufferSize ) *bufferSize = (unsigned long) bufferRange.mMinimum; else if ( bufferRange.mMaximum < *bufferSize ) *bufferSize = (unsigned long) bufferRange.mMaximum; if ( options && options->flags & RTAUDIO_MINIMIZE_LATENCY ) *bufferSize = (unsigned long) bufferRange.mMinimum; // Set the buffer size. For multiple streams, I'm assuming we only // need to make this setting for the master channel. 
  UInt32 theSize = (UInt32) *bufferSize;
  dataSize = sizeof( UInt32 );
  property.mSelector = kAudioDevicePropertyBufferFrameSize;
  result = AudioObjectSetPropertyData( id, &property, 0, NULL, dataSize, &theSize );

  if ( result != noErr ) {
    errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") setting the buffer size for device (" << device << ").";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // If attempting to setup a duplex stream, the bufferSize parameter
  // MUST be the same in both directions!
  *bufferSize = theSize;
  if ( stream_.mode == OUTPUT && mode == INPUT && *bufferSize != stream_.bufferSize ) {
    errorStream_ << "RtApiCore::probeDeviceOpen: system error setting buffer size for duplex stream on device (" << device << ").";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  stream_.bufferSize = *bufferSize;
  stream_.nBuffers = 1;

  // Try to set "hog" mode ... it's not clear to me this is working.
  if ( options && options->flags & RTAUDIO_HOG_DEVICE ) {
    pid_t hog_pid;
    dataSize = sizeof( hog_pid );
    property.mSelector = kAudioDevicePropertyHogMode;
    result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &hog_pid );
    if ( result != noErr ) {
      errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") getting 'hog' state!";
      errorText_ = errorStream_.str();
      return FAILURE;
    }

    if ( hog_pid != getpid() ) {
      hog_pid = getpid();
      result = AudioObjectSetPropertyData( id, &property, 0, NULL, dataSize, &hog_pid );
      if ( result != noErr ) {
        errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") setting 'hog' state!";
        errorText_ = errorStream_.str();
        return FAILURE;
      }
    }
  }

  // Check and if necessary, change the sample rate for the device.
  Float64 nominalRate;
  dataSize = sizeof( Float64 );
  property.mSelector = kAudioDevicePropertyNominalSampleRate;
  result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &nominalRate );
  if ( result != noErr ) {
    errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") getting current sample rate.";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // Only change the sample rate if off by more than 1 Hz.
  if ( fabs( nominalRate - (double)sampleRate ) > 1.0 ) {

    // Set a property listener for the sample rate change
    Float64 reportedRate = 0.0;
    AudioObjectPropertyAddress tmp = { kAudioDevicePropertyNominalSampleRate,
                                       kAudioObjectPropertyScopeGlobal,
                                       kAudioObjectPropertyElementMaster };
    result = AudioObjectAddPropertyListener( id, &tmp, rateListener, (void *) &reportedRate );
    if ( result != noErr ) {
      errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") setting sample rate property listener for device (" << device << ").";
      errorText_ = errorStream_.str();
      return FAILURE;
    }

    nominalRate = (Float64) sampleRate;
    result = AudioObjectSetPropertyData( id, &property, 0, NULL, dataSize, &nominalRate );
    if ( result != noErr ) {
      AudioObjectRemovePropertyListener( id, &tmp, rateListener, (void *) &reportedRate );
      errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") setting sample rate for device (" << device << ").";
      errorText_ = errorStream_.str();
      return FAILURE;
    }

    // Now wait until the reported nominal rate is what we just set.
    UInt32 microCounter = 0;
    while ( reportedRate != nominalRate ) {
      microCounter += 5000;
      if ( microCounter > 5000000 ) break;
      usleep( 5000 );
    }

    // Remove the property listener.
    AudioObjectRemovePropertyListener( id, &tmp, rateListener, (void *) &reportedRate );

    if ( microCounter > 5000000 ) {
      errorStream_ << "RtApiCore::probeDeviceOpen: timeout waiting for sample rate update for device (" << device << ").";
      errorText_ = errorStream_.str();
      return FAILURE;
    }
  }

  // Now set the stream format for all streams. Also, check the
  // physical format of the device and change that if necessary.
  AudioStreamBasicDescription description;
  dataSize = sizeof( AudioStreamBasicDescription );
  property.mSelector = kAudioStreamPropertyVirtualFormat;
  result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &description );
  if ( result != noErr ) {
    errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") getting stream format for device (" << device << ").";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // Set the sample rate and data format id. However, only make the
  // change if the sample rate is not within 1.0 of the desired
  // rate and the format is not linear pcm.
  bool updateFormat = false;
  if ( fabs( description.mSampleRate - (Float64)sampleRate ) > 1.0 ) {
    description.mSampleRate = (Float64) sampleRate;
    updateFormat = true;
  }

  if ( description.mFormatID != kAudioFormatLinearPCM ) {
    description.mFormatID = kAudioFormatLinearPCM;
    updateFormat = true;
  }

  if ( updateFormat ) {
    result = AudioObjectSetPropertyData( id, &property, 0, NULL, dataSize, &description );
    if ( result != noErr ) {
      errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") setting sample rate or data format for device (" << device << ").";
      errorText_ = errorStream_.str();
      return FAILURE;
    }
  }

  // Now check the physical format.
  property.mSelector = kAudioStreamPropertyPhysicalFormat;
  result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &description );
  if ( result != noErr ) {
    errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") getting stream physical format for device (" << device << ").";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  //std::cout << "Current physical stream format:" << std::endl;
  //std::cout << "   mBitsPerChan = " << description.mBitsPerChannel << std::endl;
  //std::cout << "   aligned high = " << (description.mFormatFlags & kAudioFormatFlagIsAlignedHigh) << ", isPacked = " << (description.mFormatFlags & kAudioFormatFlagIsPacked) << std::endl;
  //std::cout << "   bytesPerFrame = " << description.mBytesPerFrame << std::endl;
  //std::cout << "   sample rate = " << description.mSampleRate << std::endl;

  if ( description.mFormatID != kAudioFormatLinearPCM || description.mBitsPerChannel < 16 ) {
    description.mFormatID = kAudioFormatLinearPCM;
    //description.mSampleRate = (Float64) sampleRate;
    AudioStreamBasicDescription testDescription = description;
    UInt32 formatFlags;

    // We'll try higher bit rates first and then work our way down.
    std::vector< std::pair<float, UInt32> > physicalFormats;
    formatFlags = (description.mFormatFlags | kLinearPCMFormatFlagIsFloat) & ~kLinearPCMFormatFlagIsSignedInteger;
    physicalFormats.push_back( std::pair<float, UInt32>( 32, formatFlags ) );
    formatFlags = (description.mFormatFlags | kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked) & ~kLinearPCMFormatFlagIsFloat;
    physicalFormats.push_back( std::pair<float, UInt32>( 32, formatFlags ) );
    physicalFormats.push_back( std::pair<float, UInt32>( 24, formatFlags ) );   // 24-bit packed
    formatFlags &= ~( kAudioFormatFlagIsPacked | kAudioFormatFlagIsAlignedHigh );
    physicalFormats.push_back( std::pair<float, UInt32>( 24.2, formatFlags ) ); // 24-bit in 4 bytes, aligned low
    formatFlags |= kAudioFormatFlagIsAlignedHigh;
    physicalFormats.push_back( std::pair<float, UInt32>( 24.4, formatFlags ) ); // 24-bit in 4 bytes, aligned high
    formatFlags = (description.mFormatFlags | kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked) & ~kLinearPCMFormatFlagIsFloat;
    physicalFormats.push_back( std::pair<float, UInt32>( 16, formatFlags ) );
    physicalFormats.push_back( std::pair<float, UInt32>( 8, formatFlags ) );

    // Try each candidate physical format in turn until one is accepted.
    bool setPhysicalFormat = false;
    for( unsigned int i=0; i<physicalFormats.size(); i++ ) {
      testDescription = description;
      testDescription.mBitsPerChannel = (UInt32) physicalFormats[i].first;
      testDescription.mFormatFlags = physicalFormats[i].second;
      if ( (24.2 - physicalFormats[i].first) < 1e-10 )
        testDescription.mBytesPerFrame = 4 * testDescription.mChannelsPerFrame;
      else
        testDescription.mBytesPerFrame = testDescription.mBitsPerChannel/8 * testDescription.mChannelsPerFrame;
      testDescription.mBytesPerPacket = testDescription.mBytesPerFrame * testDescription.mFramesPerPacket;
      result = AudioObjectSetPropertyData( id, &property, 0, NULL, dataSize, &testDescription );
      if ( result == noErr ) {
        setPhysicalFormat = true;
        break;
      }
    }

    if ( !setPhysicalFormat ) {
      errorStream_ << "RtApiCore::probeDeviceOpen: unable to set physical data format for device (" << device << ").";
      errorText_ = errorStream_.str();
      return FAILURE;
    }
  } // done setting virtual/physical formats.

  // Get the stream / device latency.
  UInt32 latency;
  dataSize = sizeof( UInt32 );
  property.mSelector = kAudioDevicePropertyLatency;
  if ( AudioObjectHasProperty( id, &property ) == true ) {
    result = AudioObjectGetPropertyData( id, &property, 0, NULL, &dataSize, &latency );
    if ( result == kAudioHardwareNoError ) stream_.latency[ mode ] = latency;
    else {
      errorStream_ << "RtApiCore::probeDeviceOpen: system error (" << getErrorCode( result ) << ") getting device latency for device (" << device << ").";
      errorText_ = errorStream_.str();
      error( RtAudioError::WARNING );
    }
  }

  // Byte-swapping: According to AudioHardware.h, the stream data will
  // always be presented in native-endian format, so we should never
  // need to byte swap.
  stream_.doByteSwap[mode] = false;

  // From the CoreAudio documentation, PCM data must be supplied as
  // 32-bit floats.
  stream_.userFormat = format;
  stream_.deviceFormat[mode] = RTAUDIO_FLOAT32;

  if ( streamCount == 1 )
    stream_.nDeviceChannels[mode] = description.mChannelsPerFrame;
  else // multiple streams
    stream_.nDeviceChannels[mode] = channels;
  stream_.nUserChannels[mode] = channels;
  stream_.channelOffset[mode] = channelOffset;  // offset within a CoreAudio stream
  if ( options && options->flags & RTAUDIO_NONINTERLEAVED ) stream_.userInterleaved = false;
  else stream_.userInterleaved = true;
  stream_.deviceInterleaved[mode] = true;
  if ( monoMode == true ) stream_.deviceInterleaved[mode] = false;

  // Set flags for buffer conversion.
  stream_.doConvertBuffer[mode] = false;
  if ( stream_.userFormat != stream_.deviceFormat[mode] )
    stream_.doConvertBuffer[mode] = true;
  if ( stream_.nUserChannels[mode] < stream_.nDeviceChannels[mode] )
    stream_.doConvertBuffer[mode] = true;
  if ( streamCount == 1 ) {
    if ( stream_.nUserChannels[mode] > 1 &&
         stream_.userInterleaved != stream_.deviceInterleaved[mode] )
      stream_.doConvertBuffer[mode] = true;
  }
  else if ( monoMode && stream_.userInterleaved )
    stream_.doConvertBuffer[mode] = true;

  // Allocate our CoreHandle structure for the stream.
  CoreHandle *handle = 0;
  if ( stream_.apiHandle == 0 ) {
    try {
      handle = new CoreHandle;
    }
    catch ( std::bad_alloc& ) {
      errorText_ = "RtApiCore::probeDeviceOpen: error allocating CoreHandle memory.";
      goto error;
    }

    if ( pthread_cond_init( &handle->condition, NULL ) ) {
      errorText_ = "RtApiCore::probeDeviceOpen: error initializing pthread condition variable.";
      goto error;
    }
    stream_.apiHandle = (void *) handle;
  }
  else
    handle = (CoreHandle *) stream_.apiHandle;
  handle->iStream[mode] = firstStream;
  handle->nStreams[mode] = streamCount;
  handle->id[mode] = id;

  // Allocate necessary internal buffers.
  unsigned long bufferBytes;
  bufferBytes = stream_.nUserChannels[mode] * *bufferSize * formatBytes( stream_.userFormat );
  //  stream_.userBuffer[mode] = (char *) calloc( bufferBytes, 1 );
  stream_.userBuffer[mode] = (char *) malloc( bufferBytes * sizeof(char) );
  if ( stream_.userBuffer[mode] == NULL ) {
    errorText_ = "RtApiCore::probeDeviceOpen: error allocating user buffer memory.";
    goto error;
  }
  // Zero the buffer only after verifying the allocation succeeded.
  memset( stream_.userBuffer[mode], 0, bufferBytes * sizeof(char) );

  // If possible, we will make use of the CoreAudio stream buffers as
  // "device buffers". However, we can't do this if using multiple
  // streams.
  if ( stream_.doConvertBuffer[mode] && handle->nStreams[mode] > 1 ) {

    bool makeBuffer = true;
    bufferBytes = stream_.nDeviceChannels[mode] * formatBytes( stream_.deviceFormat[mode] );
    if ( mode == INPUT ) {
      if ( stream_.mode == OUTPUT && stream_.deviceBuffer ) {
        unsigned long bytesOut = stream_.nDeviceChannels[0] * formatBytes( stream_.deviceFormat[0] );
        if ( bufferBytes <= bytesOut ) makeBuffer = false;
      }
    }

    if ( makeBuffer ) {
      bufferBytes *= *bufferSize;
      if ( stream_.deviceBuffer ) free( stream_.deviceBuffer );
      stream_.deviceBuffer = (char *) calloc( bufferBytes, 1 );
      if ( stream_.deviceBuffer == NULL ) {
        errorText_ = "RtApiCore::probeDeviceOpen: error allocating device buffer memory.";
        goto error;
      }
    }
  }

  stream_.sampleRate = sampleRate;
  stream_.device[mode] = device;
  stream_.state = STREAM_STOPPED;
  stream_.callbackInfo.object = (void *) this;

  // Setup the buffer conversion information structure.
  if ( stream_.doConvertBuffer[mode] ) {
    if ( streamCount > 1 ) setConvertInfo( mode, 0 );
    else setConvertInfo( mode, channelOffset );
  }

  if ( mode == INPUT && stream_.mode == OUTPUT && stream_.device[0] == device )
    // Only one callback procedure per device.
    stream_.mode = DUPLEX;
  else {
#if defined( MAC_OS_X_VERSION_10_5 ) && ( MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_5 )
    result = AudioDeviceCreateIOProcID( id, callbackHandler, (void *) &stream_.callbackInfo, &handle->procId[mode] );
#else
    // deprecated in favor of AudioDeviceCreateIOProcID()
    result = AudioDeviceAddIOProc( id, callbackHandler, (void *) &stream_.callbackInfo );
#endif
    if ( result != noErr ) {
      errorStream_ << "RtApiCore::probeDeviceOpen: system error setting callback for device (" << device << ").";
      errorText_ = errorStream_.str();
      goto error;
    }
    if ( stream_.mode == OUTPUT && mode == INPUT )
      stream_.mode = DUPLEX;
    else
      stream_.mode = mode;
  }

  // Setup the device property listener for over/underload.
  property.mSelector = kAudioDeviceProcessorOverload;
  property.mScope = kAudioObjectPropertyScopeGlobal;
  result = AudioObjectAddPropertyListener( id, &property, xrunListener, (void *) handle );

  return SUCCESS;

 error:
  if ( handle ) {
    pthread_cond_destroy( &handle->condition );
    delete handle;
    stream_.apiHandle = 0;
  }

  for ( int i=0; i<2; i++ ) {
    if ( stream_.userBuffer[i] ) {
      free( stream_.userBuffer[i] );
      stream_.userBuffer[i] = 0;
    }
  }

  if ( stream_.deviceBuffer ) {
    free( stream_.deviceBuffer );
    stream_.deviceBuffer = 0;
  }

  stream_.state = STREAM_CLOSED;
  return FAILURE;
}

void RtApiCore :: closeStream( void )
{
  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiCore::closeStream(): no open stream to close!";
    error( RtAudioError::WARNING );
    return;
  }

  CoreHandle *handle = (CoreHandle *) stream_.apiHandle;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {
    if (handle) {
      AudioObjectPropertyAddress property = { kAudioHardwarePropertyDevices,
                                              kAudioObjectPropertyScopeGlobal,
                                              kAudioObjectPropertyElementMaster };

      property.mSelector = kAudioDeviceProcessorOverload;
      property.mScope = kAudioObjectPropertyScopeGlobal;
      if (AudioObjectRemovePropertyListener( handle->id[0], &property, xrunListener, (void *) handle ) != noErr) {
        errorText_ = "RtApiCore::closeStream(): error removing property listener!";
        error( RtAudioError::WARNING );
      }
    }
    if ( stream_.state == STREAM_RUNNING )
      AudioDeviceStop( handle->id[0], callbackHandler );
#if defined( MAC_OS_X_VERSION_10_5 ) && ( MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_5 )
    AudioDeviceDestroyIOProcID( handle->id[0], handle->procId[0] );
#else
    // deprecated in favor of AudioDeviceDestroyIOProcID()
    AudioDeviceRemoveIOProc( handle->id[0], callbackHandler );
#endif
  }

  if ( stream_.mode == INPUT || ( stream_.mode == DUPLEX && stream_.device[0] != stream_.device[1] ) ) {
    if (handle) {
      AudioObjectPropertyAddress property = { kAudioHardwarePropertyDevices,
                                              kAudioObjectPropertyScopeGlobal,
                                              kAudioObjectPropertyElementMaster };

      property.mSelector = kAudioDeviceProcessorOverload;
      property.mScope = kAudioObjectPropertyScopeGlobal;
      if (AudioObjectRemovePropertyListener( handle->id[1], &property, xrunListener, (void *) handle ) != noErr) {
        errorText_ = "RtApiCore::closeStream(): error removing property listener!";
        error( RtAudioError::WARNING );
      }
    }
    if ( stream_.state == STREAM_RUNNING )
      AudioDeviceStop( handle->id[1], callbackHandler );
#if defined( MAC_OS_X_VERSION_10_5 ) && ( MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_5 )
    AudioDeviceDestroyIOProcID( handle->id[1], handle->procId[1] );
#else
    // deprecated in favor of AudioDeviceDestroyIOProcID()
    AudioDeviceRemoveIOProc( handle->id[1], callbackHandler );
#endif
  }

  for ( int i=0; i<2; i++ ) {
    if ( stream_.userBuffer[i] ) {
      free( stream_.userBuffer[i] );
      stream_.userBuffer[i] = 0;
    }
  }

  if ( stream_.deviceBuffer ) {
    free( stream_.deviceBuffer );
    stream_.deviceBuffer = 0;
  }

  // Destroy pthread condition variable.
  pthread_cond_destroy( &handle->condition );
  delete handle;
  stream_.apiHandle = 0;

  stream_.mode = UNINITIALIZED;
  stream_.state = STREAM_CLOSED;
}

void RtApiCore :: startStream( void )
{
  verifyStream();
  if ( stream_.state == STREAM_RUNNING ) {
    errorText_ = "RtApiCore::startStream(): the stream is already running!";
    error( RtAudioError::WARNING );
    return;
  }

#if defined( HAVE_GETTIMEOFDAY )
  gettimeofday( &stream_.lastTickTimestamp, NULL );
#endif

  OSStatus result = noErr;
  CoreHandle *handle = (CoreHandle *) stream_.apiHandle;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {

    result = AudioDeviceStart( handle->id[0], callbackHandler );
    if ( result != noErr ) {
      errorStream_ << "RtApiCore::startStream: system error (" << getErrorCode( result ) << ") starting callback procedure on device (" << stream_.device[0] << ").";
      errorText_ = errorStream_.str();
      goto unlock;
    }
  }

  if ( stream_.mode == INPUT ||
       ( stream_.mode == DUPLEX && stream_.device[0] != stream_.device[1] ) ) {

    result = AudioDeviceStart( handle->id[1], callbackHandler );
    if ( result != noErr ) {
      errorStream_ << "RtApiCore::startStream: system error starting input callback procedure on device (" << stream_.device[1] << ").";
      errorText_ = errorStream_.str();
      goto unlock;
    }
  }

  handle->drainCounter = 0;
  handle->internalDrain = false;
  stream_.state = STREAM_RUNNING;

 unlock:
  if ( result == noErr ) return;
  error( RtAudioError::SYSTEM_ERROR );
}

void RtApiCore :: stopStream( void )
{
  verifyStream();
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiCore::stopStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  OSStatus result = noErr;
  CoreHandle *handle = (CoreHandle *) stream_.apiHandle;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {

    if ( handle->drainCounter == 0 ) {
      handle->drainCounter = 2;
      pthread_cond_wait( &handle->condition, &stream_.mutex ); // block until signaled
    }

    result = AudioDeviceStop( handle->id[0], callbackHandler );
    if ( result != noErr ) {
      errorStream_ << "RtApiCore::stopStream: system error (" << getErrorCode( result ) << ") stopping callback procedure on device (" << stream_.device[0] << ").";
      errorText_ = errorStream_.str();
      goto unlock;
    }
  }

  if ( stream_.mode == INPUT || ( stream_.mode == DUPLEX && stream_.device[0] != stream_.device[1] ) ) {

    result = AudioDeviceStop( handle->id[1], callbackHandler );
    if ( result != noErr ) {
      errorStream_ << "RtApiCore::stopStream: system error (" << getErrorCode( result ) << ") stopping input callback procedure on device (" << stream_.device[1] << ").";
      errorText_ = errorStream_.str();
      goto unlock;
    }
  }

  stream_.state = STREAM_STOPPED;

 unlock:
  if ( result == noErr ) return;
  error( RtAudioError::SYSTEM_ERROR );
}

void RtApiCore :: abortStream( void )
{
  verifyStream();
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiCore::abortStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  CoreHandle *handle = (CoreHandle *) stream_.apiHandle;
  handle->drainCounter = 2;

  stopStream();
}

// This function will be called by a spawned thread when the user
// callback function signals that the stream should be stopped or
// aborted. It is better to handle it this way because the
// callbackEvent() function probably should return before the
// AudioDeviceStop() function is called.
static void *coreStopStream( void *ptr )
{
  CallbackInfo *info = (CallbackInfo *) ptr;
  RtApiCore *object = (RtApiCore *) info->object;

  object->stopStream();
  pthread_exit( NULL );
}

bool RtApiCore :: callbackEvent( AudioDeviceID deviceId,
                                 const AudioBufferList *inBufferList,
                                 const AudioBufferList *outBufferList )
{
  if ( stream_.state == STREAM_STOPPED || stream_.state == STREAM_STOPPING ) return SUCCESS;
  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiCore::callbackEvent(): the stream is closed ... this shouldn't happen!";
    error( RtAudioError::WARNING );
    return FAILURE;
  }

  CallbackInfo *info = (CallbackInfo *) &stream_.callbackInfo;
  CoreHandle *handle = (CoreHandle *) stream_.apiHandle;

  // Check if we were draining the stream and signal is finished.
  if ( handle->drainCounter > 3 ) {
    ThreadHandle threadId;

    stream_.state = STREAM_STOPPING;
    if ( handle->internalDrain == true )
      pthread_create( &threadId, NULL, coreStopStream, info );
    else // external call to stopStream()
      pthread_cond_signal( &handle->condition );
    return SUCCESS;
  }

  AudioDeviceID outputDevice = handle->id[0];

  // Invoke user callback to get fresh output data UNLESS we are
  // draining stream or duplex mode AND the input/output devices are
  // different AND this function is called for the input device.
  if ( handle->drainCounter == 0 && ( stream_.mode != DUPLEX || deviceId == outputDevice ) ) {

    RtAudioCallback callback = (RtAudioCallback) info->callback;
    double streamTime = getStreamTime();
    RtAudioStreamStatus status = 0;
    if ( stream_.mode != INPUT && handle->xrun[0] == true ) {
      status |= RTAUDIO_OUTPUT_UNDERFLOW;
      handle->xrun[0] = false;
    }
    if ( stream_.mode != OUTPUT && handle->xrun[1] == true ) {
      status |= RTAUDIO_INPUT_OVERFLOW;
      handle->xrun[1] = false;
    }

    int cbReturnValue = callback( stream_.userBuffer[0], stream_.userBuffer[1],
                                  stream_.bufferSize, streamTime, status, info->userData );
    if ( cbReturnValue == 2 ) {
      stream_.state = STREAM_STOPPING;
      handle->drainCounter = 2;
      abortStream();
      return SUCCESS;
    }
    else if ( cbReturnValue == 1 ) {
      handle->drainCounter = 1;
      handle->internalDrain = true;
    }
  }

  if ( stream_.mode == OUTPUT || ( stream_.mode == DUPLEX && deviceId == outputDevice ) ) {

    if ( handle->drainCounter > 1 ) { // write zeros to the output stream

      if ( handle->nStreams[0] == 1 ) {
        memset( outBufferList->mBuffers[handle->iStream[0]].mData,
                0,
                outBufferList->mBuffers[handle->iStream[0]].mDataByteSize );
      }
      else { // fill multiple streams with zeros
        for ( unsigned int i=0; i<handle->nStreams[0]; i++ ) {
          memset( outBufferList->mBuffers[handle->iStream[0]+i].mData,
                  0,
                  outBufferList->mBuffers[handle->iStream[0]+i].mDataByteSize );
        }
      }
    }
    else if ( handle->nStreams[0] == 1 ) {
      if ( stream_.doConvertBuffer[0] ) { // convert directly to CoreAudio stream buffer
        convertBuffer( (char *) outBufferList->mBuffers[handle->iStream[0]].mData,
                       stream_.userBuffer[0], stream_.convertInfo[0] );
      }
      else { // copy from user buffer
        memcpy( outBufferList->mBuffers[handle->iStream[0]].mData,
                stream_.userBuffer[0],
                outBufferList->mBuffers[handle->iStream[0]].mDataByteSize );
      }
    }
    else { // fill multiple streams
      Float32 *inBuffer = (Float32 *) stream_.userBuffer[0];
      if ( stream_.doConvertBuffer[0] ) {
        convertBuffer( stream_.deviceBuffer, stream_.userBuffer[0], stream_.convertInfo[0] );
        inBuffer = (Float32 *) stream_.deviceBuffer;
      }

      if ( stream_.deviceInterleaved[0] == false ) { // mono mode
        UInt32 bufferBytes = outBufferList->mBuffers[handle->iStream[0]].mDataByteSize;
        for ( unsigned int i=0; i<stream_.nUserChannels[0]; i++ ) {
          memcpy( outBufferList->mBuffers[handle->iStream[0]+i].mData,
                  (void *)&inBuffer[i*stream_.bufferSize], bufferBytes );
        }
      }
      else { // fill multiple multi-channel streams with interleaved data
        UInt32 streamChannels, channelsLeft, inJump, outJump, inOffset;
        Float32 *out, *in;

        bool inInterleaved = ( stream_.userInterleaved ) ? true : false;
        UInt32 inChannels = stream_.nUserChannels[0];
        if ( stream_.doConvertBuffer[0] ) {
          inInterleaved = true; // device buffer will always be interleaved for nStreams > 1 and not mono mode
          inChannels = stream_.nDeviceChannels[0];
        }

        if ( inInterleaved ) inOffset = 1;
        else inOffset = stream_.bufferSize;

        channelsLeft = inChannels;
        for ( unsigned int i=0; i<handle->nStreams[0]; i++ ) {
          in = inBuffer;
          out = (Float32 *) outBufferList->mBuffers[handle->iStream[0]+i].mData;
          streamChannels = outBufferList->mBuffers[handle->iStream[0]+i].mNumberChannels;

          outJump = 0;
          // Account for possible channel offset in first stream
          if ( i == 0 && stream_.channelOffset[0] > 0 ) {
            streamChannels -= stream_.channelOffset[0];
            outJump = stream_.channelOffset[0];
            out += outJump;
          }

          // Account for possible unfilled channels at end of the last stream
          if ( streamChannels > channelsLeft ) {
            outJump = streamChannels - channelsLeft;
            streamChannels = channelsLeft;
          }

          // Determine input buffer offsets and skips
          if ( inInterleaved ) {
            inJump = inChannels;
            in += inChannels - channelsLeft;
          }
          else {
            inJump = 1;
            in += (inChannels - channelsLeft) * inOffset;
          }

          for ( unsigned int i=0; i<stream_.bufferSize; i++ ) {
            for ( unsigned int j=0; j<streamChannels; j++ ) {
              *out++ = in[j*inOffset];
            }
            out += outJump;
            in += inJump;
          }
          channelsLeft -= streamChannels;
        }
      }
    }
  }

  // Don't bother draining input
  if ( handle->drainCounter ) {
    handle->drainCounter++;
    goto unlock;
  }

  AudioDeviceID inputDevice;
  inputDevice = handle->id[1];
  if ( stream_.mode == INPUT || ( stream_.mode == DUPLEX && deviceId == inputDevice ) ) {

    if ( handle->nStreams[1] == 1 ) {
      if ( stream_.doConvertBuffer[1] ) { // convert directly from CoreAudio stream buffer
        convertBuffer( stream_.userBuffer[1],
                       (char *) inBufferList->mBuffers[handle->iStream[1]].mData,
                       stream_.convertInfo[1] );
      }
      else { // copy to user buffer
        memcpy( stream_.userBuffer[1],
                inBufferList->mBuffers[handle->iStream[1]].mData,
                inBufferList->mBuffers[handle->iStream[1]].mDataByteSize );
      }
    }
    else { // read from multiple streams
      Float32 *outBuffer = (Float32 *) stream_.userBuffer[1];
      if ( stream_.doConvertBuffer[1] ) outBuffer = (Float32 *) stream_.deviceBuffer;

      if ( stream_.deviceInterleaved[1] == false ) { // mono mode
        UInt32 bufferBytes = inBufferList->mBuffers[handle->iStream[1]].mDataByteSize;
        for ( unsigned int i=0; i<stream_.nUserChannels[1]; i++ ) {
          memcpy( (void *)&outBuffer[i*stream_.bufferSize],
                  inBufferList->mBuffers[handle->iStream[1]+i].mData, bufferBytes );
        }
      }
      else { // read from multiple multi-channel streams
        UInt32 streamChannels, channelsLeft, inJump, outJump, outOffset;
        Float32 *out, *in;

        bool outInterleaved = ( stream_.userInterleaved ) ? true : false;
        UInt32 outChannels = stream_.nUserChannels[1];
        if ( stream_.doConvertBuffer[1] ) {
          outInterleaved = true; // device buffer will always be interleaved for nStreams > 1 and not mono mode
          outChannels = stream_.nDeviceChannels[1];
        }

        if ( outInterleaved ) outOffset = 1;
        else outOffset = stream_.bufferSize;

        channelsLeft = outChannels;
        for ( unsigned int i=0; i<handle->nStreams[1]; i++ ) {
          out = outBuffer;
          in = (Float32 *) inBufferList->mBuffers[handle->iStream[1]+i].mData;
          streamChannels = inBufferList->mBuffers[handle->iStream[1]+i].mNumberChannels;

          inJump = 0;
          // Account for possible channel offset in first stream
          if ( i == 0 && stream_.channelOffset[1] > 0 ) {
            streamChannels -= stream_.channelOffset[1];
            inJump = stream_.channelOffset[1];
            in += inJump;
          }

          // Account for possible unread channels at end of the last stream
          if ( streamChannels > channelsLeft ) {
            inJump = streamChannels - channelsLeft;
            streamChannels = channelsLeft;
          }

          // Determine output buffer offsets and skips
          if ( outInterleaved ) {
            outJump = outChannels;
            out += outChannels - channelsLeft;
          }
          else {
            outJump = 1;
            out += (outChannels - channelsLeft) * outOffset;
          }

          for ( unsigned int i=0; i<stream_.bufferSize; i++ ) {
            for ( unsigned int j=0; j<streamChannels; j++ ) {
              out[j*outOffset] = *in++;
            }
            out += outJump;
            in += inJump;
          }
          channelsLeft -= streamChannels;
        }
      }

      if ( stream_.doConvertBuffer[1] ) { // convert from our internal "device" buffer
        convertBuffer( stream_.userBuffer[1],
                       stream_.deviceBuffer,
                       stream_.convertInfo[1] );
      }
    }
  }

 unlock:

  // Make sure to only tick duplex stream time once if using two devices with the stream.
  if ( stream_.mode != DUPLEX || ( stream_.mode == DUPLEX && handle->id[0] != handle->id[1] && deviceId == handle->id[0] ) )
    RtApi::tickStreamTime();

  return SUCCESS;
}

const char* RtApiCore :: getErrorCode( OSStatus code )
{
  switch( code ) {

  case kAudioHardwareNotRunningError:
    return "kAudioHardwareNotRunningError";

  case kAudioHardwareUnspecifiedError:
    return "kAudioHardwareUnspecifiedError";

  case kAudioHardwareUnknownPropertyError:
    return "kAudioHardwareUnknownPropertyError";

  case kAudioHardwareBadPropertySizeError:
    return "kAudioHardwareBadPropertySizeError";

  case kAudioHardwareIllegalOperationError:
    return "kAudioHardwareIllegalOperationError";

  case kAudioHardwareBadObjectError:
    return "kAudioHardwareBadObjectError";

  case kAudioHardwareBadDeviceError:
    return "kAudioHardwareBadDeviceError";

  case kAudioHardwareBadStreamError:
    return "kAudioHardwareBadStreamError";

  case kAudioHardwareUnsupportedOperationError:
    return "kAudioHardwareUnsupportedOperationError";

  case kAudioDeviceUnsupportedFormatError:
    return "kAudioDeviceUnsupportedFormatError";

  case kAudioDevicePermissionsError:
    return "kAudioDevicePermissionsError";

  default:
    return "CoreAudio unknown error";
  }
}

  //******************** End of __MACOSX_CORE__ *********************//
#endif

#if defined(__UNIX_JACK__)

// JACK is a low-latency audio server, originally written for the
// GNU/Linux operating system and now also ported to OS-X. It can
// connect a number of different applications to an audio device, as
// well as allowing them to share audio between themselves.
//
// When using JACK with RtAudio, "devices" refer to JACK clients that
// have ports connected to the server. The JACK server is typically
// started in a terminal as follows:
//
// .jackd -d alsa -d hw:0
//
// or through an interface program such as qjackctl. Many of the
// parameters normally set for a stream are fixed by the JACK server
// and can be specified when the JACK server is started. In
// particular,
//
// .jackd -d alsa -d hw:0 -r 44100 -p 512 -n 4
//
// specifies a sample rate of 44100 Hz, a buffer size of 512 sample
// frames, and number of buffers = 4. Once the server is running, it
// is not possible to override these values. If the values are not
// specified in the command-line, the JACK server uses default values.
//
// The JACK server does not have to be running when an instance of
// RtApiJack is created, though the function getDeviceCount() will
// report 0 devices found until JACK has been started. When no
// devices are available (i.e., the JACK server is not running), a
// stream cannot be opened.

#include <jack/jack.h>
#include <unistd.h>
#include <cstdio>

// A structure to hold various information related to the Jack API
// implementation.
struct JackHandle {
  jack_client_t *client;
  jack_port_t **ports[2];
  std::string deviceName[2];
  bool xrun[2];
  pthread_cond_t condition;
  int drainCounter;   // Tracks callback counts when draining
  bool internalDrain; // Indicates if stop is initiated from callback or not.

  JackHandle()
    :client(0), drainCounter(0), internalDrain(false) { ports[0] = 0; ports[1] = 0; xrun[0] = false; xrun[1] = false; }
};

#if !defined(__RTAUDIO_DEBUG__)
static void jackSilentError( const char * ) {};
#endif

RtApiJack :: RtApiJack()
    :shouldAutoconnect_(true) {
  // Nothing to do here.
#if !defined(__RTAUDIO_DEBUG__)
  // Turn off Jack's internal error reporting.
  jack_set_error_function( &jackSilentError );
#endif
}

RtApiJack :: ~RtApiJack()
{
  if ( stream_.state != STREAM_CLOSED ) closeStream();
}

unsigned int RtApiJack :: getDeviceCount( void )
{
  // See if we can become a jack client.
jack_options_t options = (jack_options_t) ( JackNoStartServer ); //JackNullOption; jack_status_t *status = NULL; jack_client_t *client = jack_client_open( "RtApiJackCount", options, status ); if ( client == 0 ) return 0; const char **ports; std::string port, previousPort; unsigned int nChannels = 0, nDevices = 0; ports = jack_get_ports( client, NULL, JACK_DEFAULT_AUDIO_TYPE, 0 ); if ( ports ) { // Parse the port names up to the first colon (:). size_t iColon = 0; do { port = (char *) ports[ nChannels ]; iColon = port.find(":"); if ( iColon != std::string::npos ) { port = port.substr( 0, iColon + 1 ); if ( port != previousPort ) { nDevices++; previousPort = port; } } } while ( ports[++nChannels] ); free( ports ); } jack_client_close( client ); return nDevices; } RtAudio::DeviceInfo RtApiJack :: getDeviceInfo( unsigned int device ) { RtAudio::DeviceInfo info; info.probed = false; jack_options_t options = (jack_options_t) ( JackNoStartServer ); //JackNullOption jack_status_t *status = NULL; jack_client_t *client = jack_client_open( "RtApiJackInfo", options, status ); if ( client == 0 ) { errorText_ = "RtApiJack::getDeviceInfo: Jack server not found or connection error!"; error( RtAudioError::WARNING ); return info; } const char **ports; std::string port, previousPort; unsigned int nPorts = 0, nDevices = 0; ports = jack_get_ports( client, NULL, JACK_DEFAULT_AUDIO_TYPE, 0 ); if ( ports ) { // Parse the port names up to the first colon (:). size_t iColon = 0; do { port = (char *) ports[ nPorts ]; iColon = port.find(":"); if ( iColon != std::string::npos ) { port = port.substr( 0, iColon ); if ( port != previousPort ) { if ( nDevices == device ) info.name = port; nDevices++; previousPort = port; } } } while ( ports[++nPorts] ); free( ports ); } if ( device >= nDevices ) { jack_client_close( client ); errorText_ = "RtApiJack::getDeviceInfo: device ID is invalid!"; error( RtAudioError::INVALID_USE ); return info; } // Get the current jack server sample rate. 
info.sampleRates.clear(); info.preferredSampleRate = jack_get_sample_rate( client ); info.sampleRates.push_back( info.preferredSampleRate ); // Count the available ports containing the client name as device // channels. Jack "input ports" equal RtAudio output channels. unsigned int nChannels = 0; ports = jack_get_ports( client, info.name.c_str(), JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput ); if ( ports ) { while ( ports[ nChannels ] ) nChannels++; free( ports ); info.outputChannels = nChannels; } // Jack "output ports" equal RtAudio input channels. nChannels = 0; ports = jack_get_ports( client, info.name.c_str(), JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput ); if ( ports ) { while ( ports[ nChannels ] ) nChannels++; free( ports ); info.inputChannels = nChannels; } if ( info.outputChannels == 0 && info.inputChannels == 0 ) { jack_client_close(client); errorText_ = "RtApiJack::getDeviceInfo: error determining Jack input/output channels!"; error( RtAudioError::WARNING ); return info; } // If device opens for both playback and capture, we determine the channels. if ( info.outputChannels > 0 && info.inputChannels > 0 ) info.duplexChannels = (info.outputChannels > info.inputChannels) ? info.inputChannels : info.outputChannels; // Jack always uses 32-bit floats. info.nativeFormats = RTAUDIO_FLOAT32; // Jack doesn't provide default devices so we'll use the first available one. if ( device == 0 && info.outputChannels > 0 ) info.isDefaultOutput = true; if ( device == 0 && info.inputChannels > 0 ) info.isDefaultInput = true; jack_client_close(client); info.probed = true; return info; } static int jackCallbackHandler( jack_nframes_t nframes, void *infoPointer ) { CallbackInfo *info = (CallbackInfo *) infoPointer; RtApiJack *object = (RtApiJack *) info->object; if ( object->callbackEvent( (unsigned long) nframes ) == false ) return 1; return 0; } // This function will be called by a spawned thread when the Jack // server signals that it is shutting down. 
It is necessary to handle // it this way because the jackShutdown() function must return before // the jack_deactivate() function (in closeStream()) will return. static void *jackCloseStream( void *ptr ) { CallbackInfo *info = (CallbackInfo *) ptr; RtApiJack *object = (RtApiJack *) info->object; object->closeStream(); pthread_exit( NULL ); } static void jackShutdown( void *infoPointer ) { CallbackInfo *info = (CallbackInfo *) infoPointer; RtApiJack *object = (RtApiJack *) info->object; // Check current stream state. If stopped, then we'll assume this // was called as a result of a call to RtApiJack::stopStream (the // deactivation of a client handle causes this function to be called). // If not, we'll assume the Jack server is shutting down or some // other problem occurred and we should close the stream. if ( object->isStreamRunning() == false ) return; ThreadHandle threadId; pthread_create( &threadId, NULL, jackCloseStream, info ); std::cerr << "\nRtApiJack: the Jack server is shutting down this client ... stream stopped and closed!!\n" << std::endl; } static int jackXrun( void *infoPointer ) { JackHandle *handle = *((JackHandle **) infoPointer); if ( handle->ports[0] ) handle->xrun[0] = true; if ( handle->ports[1] ) handle->xrun[1] = true; return 0; } bool RtApiJack :: probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels, unsigned int firstChannel, unsigned int sampleRate, RtAudioFormat format, unsigned int *bufferSize, RtAudio::StreamOptions *options ) { JackHandle *handle = (JackHandle *) stream_.apiHandle; // Look for jack server and try to become a client (only do once per stream). 
jack_client_t *client = 0; if ( mode == OUTPUT || ( mode == INPUT && stream_.mode != OUTPUT ) ) { jack_options_t jackoptions = (jack_options_t) ( JackNoStartServer ); //JackNullOption; jack_status_t *status = NULL; if ( options && !options->streamName.empty() ) client = jack_client_open( options->streamName.c_str(), jackoptions, status ); else client = jack_client_open( "RtApiJack", jackoptions, status ); if ( client == 0 ) { errorText_ = "RtApiJack::probeDeviceOpen: Jack server not found or connection error!"; error( RtAudioError::WARNING ); return FAILURE; } } else { // The handle must have been created on an earlier pass. client = handle->client; } const char **ports; std::string port, previousPort, deviceName; unsigned int nPorts = 0, nDevices = 0; ports = jack_get_ports( client, NULL, JACK_DEFAULT_AUDIO_TYPE, 0 ); if ( ports ) { // Parse the port names up to the first colon (:). size_t iColon = 0; do { port = (char *) ports[ nPorts ]; iColon = port.find(":"); if ( iColon != std::string::npos ) { port = port.substr( 0, iColon ); if ( port != previousPort ) { if ( nDevices == device ) deviceName = port; nDevices++; previousPort = port; } } } while ( ports[++nPorts] ); free( ports ); } if ( device >= nDevices ) { errorText_ = "RtApiJack::probeDeviceOpen: device ID is invalid!"; return FAILURE; } unsigned long flag = JackPortIsInput; if ( mode == INPUT ) flag = JackPortIsOutput; if ( ! (options && (options->flags & RTAUDIO_JACK_DONT_CONNECT)) ) { // Count the available ports containing the client name as device // channels. Jack "input ports" equal RtAudio output channels. unsigned int nChannels = 0; ports = jack_get_ports( client, deviceName.c_str(), JACK_DEFAULT_AUDIO_TYPE, flag ); if ( ports ) { while ( ports[ nChannels ] ) nChannels++; free( ports ); } // Compare the jack ports for specified client to the requested number of channels. 
if ( nChannels < (channels + firstChannel) ) { errorStream_ << "RtApiJack::probeDeviceOpen: requested number of channels (" << channels << ") + offset (" << firstChannel << ") not found for specified device (" << device << ":" << deviceName << ")."; errorText_ = errorStream_.str(); return FAILURE; } } // Check the jack server sample rate. unsigned int jackRate = jack_get_sample_rate( client ); if ( sampleRate != jackRate ) { jack_client_close( client ); errorStream_ << "RtApiJack::probeDeviceOpen: the requested sample rate (" << sampleRate << ") is different than the JACK server rate (" << jackRate << ")."; errorText_ = errorStream_.str(); return FAILURE; } stream_.sampleRate = jackRate; // Get the latency of the JACK port. ports = jack_get_ports( client, deviceName.c_str(), JACK_DEFAULT_AUDIO_TYPE, flag ); if ( ports[ firstChannel ] ) { // Added by Ge Wang jack_latency_callback_mode_t cbmode = (mode == INPUT ? JackCaptureLatency : JackPlaybackLatency); // the range (usually the min and max are equal) jack_latency_range_t latrange; latrange.min = latrange.max = 0; // get the latency range jack_port_get_latency_range( jack_port_by_name( client, ports[firstChannel] ), cbmode, &latrange ); // be optimistic, use the min! stream_.latency[mode] = latrange.min; //stream_.latency[mode] = jack_port_get_latency( jack_port_by_name( client, ports[ firstChannel ] ) ); } free( ports ); // The jack server always uses 32-bit floating-point data. stream_.deviceFormat[mode] = RTAUDIO_FLOAT32; stream_.userFormat = format; if ( options && options->flags & RTAUDIO_NONINTERLEAVED ) stream_.userInterleaved = false; else stream_.userInterleaved = true; // Jack always uses non-interleaved buffers. stream_.deviceInterleaved[mode] = false; // Jack always provides host byte-ordered data. stream_.doByteSwap[mode] = false; // Get the buffer size. The buffer size and number of buffers // (periods) is set when the jack server is started. 
stream_.bufferSize = (int) jack_get_buffer_size( client ); *bufferSize = stream_.bufferSize; stream_.nDeviceChannels[mode] = channels; stream_.nUserChannels[mode] = channels; // Set flags for buffer conversion. stream_.doConvertBuffer[mode] = false; if ( stream_.userFormat != stream_.deviceFormat[mode] ) stream_.doConvertBuffer[mode] = true; if ( stream_.userInterleaved != stream_.deviceInterleaved[mode] && stream_.nUserChannels[mode] > 1 ) stream_.doConvertBuffer[mode] = true; // Allocate our JackHandle structure for the stream. if ( handle == 0 ) { try { handle = new JackHandle; } catch ( std::bad_alloc& ) { errorText_ = "RtApiJack::probeDeviceOpen: error allocating JackHandle memory."; goto error; } if ( pthread_cond_init(&handle->condition, NULL) ) { errorText_ = "RtApiJack::probeDeviceOpen: error initializing pthread condition variable."; goto error; } stream_.apiHandle = (void *) handle; handle->client = client; } handle->deviceName[mode] = deviceName; // Allocate necessary internal buffers. 
unsigned long bufferBytes; bufferBytes = stream_.nUserChannels[mode] * *bufferSize * formatBytes( stream_.userFormat ); stream_.userBuffer[mode] = (char *) calloc( bufferBytes, 1 ); if ( stream_.userBuffer[mode] == NULL ) { errorText_ = "RtApiJack::probeDeviceOpen: error allocating user buffer memory."; goto error; } if ( stream_.doConvertBuffer[mode] ) { bool makeBuffer = true; if ( mode == OUTPUT ) bufferBytes = stream_.nDeviceChannels[0] * formatBytes( stream_.deviceFormat[0] ); else { // mode == INPUT bufferBytes = stream_.nDeviceChannels[1] * formatBytes( stream_.deviceFormat[1] ); if ( stream_.mode == OUTPUT && stream_.deviceBuffer ) { unsigned long bytesOut = stream_.nDeviceChannels[0] * formatBytes(stream_.deviceFormat[0]); if ( bufferBytes < bytesOut ) makeBuffer = false; } } if ( makeBuffer ) { bufferBytes *= *bufferSize; if ( stream_.deviceBuffer ) free( stream_.deviceBuffer ); stream_.deviceBuffer = (char *) calloc( bufferBytes, 1 ); if ( stream_.deviceBuffer == NULL ) { errorText_ = "RtApiJack::probeDeviceOpen: error allocating device buffer memory."; goto error; } } } // Allocate memory for the Jack ports (channels) identifiers. handle->ports[mode] = (jack_port_t **) malloc ( sizeof (jack_port_t *) * channels ); if ( handle->ports[mode] == NULL ) { errorText_ = "RtApiJack::probeDeviceOpen: error allocating port memory."; goto error; } stream_.device[mode] = device; stream_.channelOffset[mode] = firstChannel; stream_.state = STREAM_STOPPED; stream_.callbackInfo.object = (void *) this; if ( stream_.mode == OUTPUT && mode == INPUT ) // We had already set up the stream for output. stream_.mode = DUPLEX; else { stream_.mode = mode; jack_set_process_callback( handle->client, jackCallbackHandler, (void *) &stream_.callbackInfo ); jack_set_xrun_callback( handle->client, jackXrun, (void *) &stream_.apiHandle ); jack_on_shutdown( handle->client, jackShutdown, (void *) &stream_.callbackInfo ); } // Register our ports. 
  char label[64];
  if ( mode == OUTPUT ) {
    for ( unsigned int i=0; i<stream_.nUserChannels[0]; i++ ) {
      snprintf( label, 64, "outport %d", i );
      handle->ports[0][i] = jack_port_register( handle->client, (const char *)label,
                                                JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0 );
    }
  }
  else {
    for ( unsigned int i=0; i<stream_.nUserChannels[1]; i++ ) {
      snprintf( label, 64, "inport %d", i );
      handle->ports[1][i] = jack_port_register( handle->client, (const char *)label,
                                                JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0 );
    }
  }

  // Setup the buffer conversion information structure.  We don't use
  // buffers to do channel offsets, so we override that parameter
  // here.
  if ( stream_.doConvertBuffer[mode] ) setConvertInfo( mode, 0 );

  if ( options && options->flags & RTAUDIO_JACK_DONT_CONNECT ) shouldAutoconnect_ = false;

  return SUCCESS;

 error:
  if ( handle ) {
    pthread_cond_destroy( &handle->condition );
    jack_client_close( handle->client );

    if ( handle->ports[0] ) free( handle->ports[0] );
    if ( handle->ports[1] ) free( handle->ports[1] );

    delete handle;
    stream_.apiHandle = 0;
  }

  for ( int i=0; i<2; i++ ) {
    if ( stream_.userBuffer[i] ) {
      free( stream_.userBuffer[i] );
      stream_.userBuffer[i] = 0;
    }
  }

  if ( stream_.deviceBuffer ) {
    free( stream_.deviceBuffer );
    stream_.deviceBuffer = 0;
  }

  return FAILURE;
}

void RtApiJack :: closeStream( void )
{
  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiJack::closeStream(): no open stream to close!";
    error( RtAudioError::WARNING );
    return;
  }

  JackHandle *handle = (JackHandle *) stream_.apiHandle;
  if ( handle ) {
    if ( stream_.state == STREAM_RUNNING )
      jack_deactivate( handle->client );

    jack_client_close( handle->client );
  }

  if ( handle ) {
    if ( handle->ports[0] ) free( handle->ports[0] );
    if ( handle->ports[1] ) free( handle->ports[1] );
    pthread_cond_destroy( &handle->condition );
    delete handle;
    stream_.apiHandle = 0;
  }

  for ( int i=0; i<2; i++ ) {
    if ( stream_.userBuffer[i] ) {
      free( stream_.userBuffer[i] );
      stream_.userBuffer[i] = 0;
    }
  }

  if ( stream_.deviceBuffer ) {
    free( stream_.deviceBuffer );
    stream_.deviceBuffer = 0;
  }

  stream_.mode = UNINITIALIZED;
  stream_.state = STREAM_CLOSED;
}

void RtApiJack :: startStream( void )
{
  verifyStream();
  if ( stream_.state == STREAM_RUNNING ) {
    errorText_ = "RtApiJack::startStream(): the stream is already running!";
    error( RtAudioError::WARNING );
    return;
  }

#if defined( HAVE_GETTIMEOFDAY )
  gettimeofday( &stream_.lastTickTimestamp, NULL );
#endif

  JackHandle *handle = (JackHandle *) stream_.apiHandle;
  int result = jack_activate( handle->client );
  if ( result ) {
    errorText_ = "RtApiJack::startStream(): unable to activate JACK client!";
    goto unlock;
  }

  const char **ports;

  // Get the list of available ports.
  if ( shouldAutoconnect_ && (stream_.mode == OUTPUT || stream_.mode == DUPLEX) ) {
    result = 1;
    ports = jack_get_ports( handle->client, handle->deviceName[0].c_str(), JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput);
    if ( ports == NULL) {
      errorText_ = "RtApiJack::startStream(): error determining available JACK input ports!";
      goto unlock;
    }

    // Now make the port connections.  Since RtAudio wasn't designed to
    // allow the user to select particular channels of a device, we'll
    // just open the first "nChannels" ports with offset.
    for ( unsigned int i=0; i<stream_.nUserChannels[0]; i++ ) {
      result = jack_connect( handle->client, jack_port_name( handle->ports[0][i] ), ports[ stream_.channelOffset[0] + i ] );
      if ( result ) {
        free( ports );
        errorText_ = "RtApiJack::startStream(): error connecting output ports!";
        goto unlock;
      }
    }
    free(ports);
  }

  if ( shouldAutoconnect_ && (stream_.mode == INPUT || stream_.mode == DUPLEX) ) {
    result = 1;
    ports = jack_get_ports( handle->client, handle->deviceName[1].c_str(), JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput );
    if ( ports == NULL) {
      errorText_ = "RtApiJack::startStream(): error determining available JACK output ports!";
      goto unlock;
    }

    // Now make the port connections.  See note above.
    for ( unsigned int i=0; i<stream_.nUserChannels[1]; i++ ) {
      result = jack_connect( handle->client, ports[ stream_.channelOffset[1] + i ], jack_port_name( handle->ports[1][i] ) );
      if ( result ) {
        free( ports );
        errorText_ = "RtApiJack::startStream(): error connecting input ports!";
        goto unlock;
      }
    }
    free(ports);
  }

  handle->drainCounter = 0;
  handle->internalDrain = false;
  stream_.state = STREAM_RUNNING;

 unlock:
  if ( result == 0 ) return;
  error( RtAudioError::SYSTEM_ERROR );
}

void RtApiJack :: stopStream( void )
{
  verifyStream();
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiJack::stopStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  JackHandle *handle = (JackHandle *) stream_.apiHandle;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {

    if ( handle->drainCounter == 0 ) {
      handle->drainCounter = 2;
      pthread_cond_wait( &handle->condition, &stream_.mutex ); // block until signaled
    }
  }

  jack_deactivate( handle->client );

  stream_.state = STREAM_STOPPED;
}

void RtApiJack :: abortStream( void )
{
  verifyStream();
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiJack::abortStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  JackHandle *handle = (JackHandle *) stream_.apiHandle;
  handle->drainCounter = 2;

  stopStream();
}

// This function will be called by a spawned thread when the user
// callback function signals that the stream should be stopped or
// aborted.  It is necessary to handle it this way because the
// callbackEvent() function must return before the jack_deactivate()
// function will return.
static void *jackStopStream( void *ptr )
{
  CallbackInfo *info = (CallbackInfo *) ptr;
  RtApiJack *object = (RtApiJack *) info->object;

  object->stopStream();
  pthread_exit( NULL );
}

bool RtApiJack :: callbackEvent( unsigned long nframes )
{
  if ( stream_.state == STREAM_STOPPED || stream_.state == STREAM_STOPPING ) return SUCCESS;
  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiJack::callbackEvent(): the stream is closed ... this shouldn't happen!";
    error( RtAudioError::WARNING );
    return FAILURE;
  }
  if ( stream_.bufferSize != nframes ) {
    errorText_ = "RtApiJack::callbackEvent(): the JACK buffer size has changed ... cannot process!";
    error( RtAudioError::WARNING );
    return FAILURE;
  }

  CallbackInfo *info = (CallbackInfo *) &stream_.callbackInfo;
  JackHandle *handle = (JackHandle *) stream_.apiHandle;

  // Check if we were draining the stream and signal is finished.
  if ( handle->drainCounter > 3 ) {
    ThreadHandle threadId;

    stream_.state = STREAM_STOPPING;
    if ( handle->internalDrain == true )
      pthread_create( &threadId, NULL, jackStopStream, info );
    else
      pthread_cond_signal( &handle->condition );
    return SUCCESS;
  }

  // Invoke user callback first, to get fresh output data.
  if ( handle->drainCounter == 0 ) {
    RtAudioCallback callback = (RtAudioCallback) info->callback;
    double streamTime = getStreamTime();
    RtAudioStreamStatus status = 0;
    if ( stream_.mode != INPUT && handle->xrun[0] == true ) {
      status |= RTAUDIO_OUTPUT_UNDERFLOW;
      handle->xrun[0] = false;
    }
    if ( stream_.mode != OUTPUT && handle->xrun[1] == true ) {
      status |= RTAUDIO_INPUT_OVERFLOW;
      handle->xrun[1] = false;
    }
    int cbReturnValue = callback( stream_.userBuffer[0], stream_.userBuffer[1],
                                  stream_.bufferSize, streamTime, status, info->userData );
    if ( cbReturnValue == 2 ) {
      stream_.state = STREAM_STOPPING;
      handle->drainCounter = 2;
      ThreadHandle id;
      pthread_create( &id, NULL, jackStopStream, info );
      return SUCCESS;
    }
    else if ( cbReturnValue == 1 ) {
      handle->drainCounter = 1;
      handle->internalDrain = true;
    }
  }

  jack_default_audio_sample_t *jackbuffer;
  unsigned long bufferBytes = nframes * sizeof( jack_default_audio_sample_t );
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {

    if ( handle->drainCounter > 1 ) { // write zeros to the output stream

      for ( unsigned int i=0; i<stream_.nDeviceChannels[0]; i++ ) {
        jackbuffer = (jack_default_audio_sample_t *) jack_port_get_buffer( handle->ports[0][i], (jack_nframes_t) nframes );
        memset( jackbuffer, 0, bufferBytes );
      }

    }
    else if ( stream_.doConvertBuffer[0] ) {

      convertBuffer( stream_.deviceBuffer, stream_.userBuffer[0], stream_.convertInfo[0] );

      for ( unsigned int i=0; i<stream_.nDeviceChannels[0]; i++ ) {
        jackbuffer = (jack_default_audio_sample_t *) jack_port_get_buffer( handle->ports[0][i], (jack_nframes_t) nframes );
        memcpy( jackbuffer, &stream_.deviceBuffer[i*bufferBytes], bufferBytes );
      }
    }
    else { // no buffer conversion
      for ( unsigned int i=0; i<stream_.nUserChannels[0]; i++ ) {
        jackbuffer = (jack_default_audio_sample_t *) jack_port_get_buffer( handle->ports[0][i], (jack_nframes_t) nframes );
        memcpy( jackbuffer, &stream_.userBuffer[0][i*bufferBytes], bufferBytes );
      }
    }
  }

  // Don't bother draining input
  if ( handle->drainCounter ) {
    handle->drainCounter++;
    goto unlock;
  }

  if ( stream_.mode == INPUT || stream_.mode == DUPLEX ) {

    if ( stream_.doConvertBuffer[1] ) {
      for ( unsigned int i=0; i<stream_.nDeviceChannels[1]; i++ ) {
        jackbuffer = (jack_default_audio_sample_t *) jack_port_get_buffer( handle->ports[1][i], (jack_nframes_t) nframes );
        memcpy( &stream_.deviceBuffer[i*bufferBytes], jackbuffer, bufferBytes );
      }
      convertBuffer( stream_.userBuffer[1], stream_.deviceBuffer, stream_.convertInfo[1] );
    }
    else { // no buffer conversion
      for ( unsigned int i=0; i<stream_.nUserChannels[1]; i++ ) {
        jackbuffer = (jack_default_audio_sample_t *) jack_port_get_buffer( handle->ports[1][i], (jack_nframes_t) nframes );
        memcpy( &stream_.userBuffer[1][i*bufferBytes], jackbuffer, bufferBytes );
      }
    }
  }

 unlock:
  RtApi::tickStreamTime();
  return SUCCESS;
}

//******************** End of __UNIX_JACK__ *********************//
#endif

#if defined(__WINDOWS_ASIO__) // ASIO API on Windows

// The ASIO API is designed around a callback scheme, so this
// implementation is similar to that used for OS-X CoreAudio and Linux
// Jack.  The primary constraint with ASIO is that it only allows
// access to a single driver at a time.  Thus, it is not possible to
// have more than one simultaneous RtAudio stream.
//
// This implementation also requires a number of external ASIO files
// and a few global variables.  The ASIO callback scheme does not
// allow for the passing of user data, so we must create a global
// pointer to our callbackInfo structure.
//
// On unix systems, we make use of a pthread condition variable.
// Since there is no equivalent in Windows, I hacked something based
// on information found in
// http://www.cs.wustl.edu/~schmidt/win32-cv-1.html.
#include "asiosys.h"
#include "asio.h"
#include "iasiothiscallresolver.h"
#include "asiodrivers.h"
#include <math.h>

static AsioDrivers drivers;
static ASIOCallbacks asioCallbacks;
static ASIODriverInfo driverInfo;
static CallbackInfo *asioCallbackInfo;
static bool asioXRun;

struct AsioHandle {
  int drainCounter;   // Tracks callback counts when draining
  bool internalDrain; // Indicates if stop is initiated from callback or not.
  ASIOBufferInfo *bufferInfos;
  HANDLE condition;

  AsioHandle()
    :drainCounter(0), internalDrain(false), bufferInfos(0) {}
};

// Function declarations (definitions at end of section)
static const char* getAsioErrorString( ASIOError result );
static void sampleRateChanged( ASIOSampleRate sRate );
static long asioMessages( long selector, long value, void* message, double* opt );

RtApiAsio :: RtApiAsio()
{
  // ASIO cannot run on a multi-threaded apartment. You can call
  // CoInitialize beforehand, but it must be for apartment threading
  // (in which case, CoInitialize will return S_FALSE here).
  coInitialized_ = false;
  HRESULT hr = CoInitialize( NULL );
  if ( FAILED(hr) ) {
    errorText_ = "RtApiAsio::ASIO requires a single-threaded apartment. Call CoInitializeEx(0,COINIT_APARTMENTTHREADED)";
    error( RtAudioError::WARNING );
  }
  coInitialized_ = true;

  drivers.removeCurrentDriver();
  driverInfo.asioVersion = 2;

  // See note in DirectSound implementation about GetDesktopWindow().
driverInfo.sysRef = GetForegroundWindow(); } RtApiAsio :: ~RtApiAsio() { if ( stream_.state != STREAM_CLOSED ) closeStream(); if ( coInitialized_ ) CoUninitialize(); } unsigned int RtApiAsio :: getDeviceCount( void ) { return (unsigned int) drivers.asioGetNumDev(); } RtAudio::DeviceInfo RtApiAsio :: getDeviceInfo( unsigned int device ) { RtAudio::DeviceInfo info; info.probed = false; // Get device ID unsigned int nDevices = getDeviceCount(); if ( nDevices == 0 ) { errorText_ = "RtApiAsio::getDeviceInfo: no devices found!"; error( RtAudioError::INVALID_USE ); return info; } if ( device >= nDevices ) { errorText_ = "RtApiAsio::getDeviceInfo: device ID is invalid!"; error( RtAudioError::INVALID_USE ); return info; } // If a stream is already open, we cannot probe other devices. Thus, use the saved results. if ( stream_.state != STREAM_CLOSED ) { if ( device >= devices_.size() ) { errorText_ = "RtApiAsio::getDeviceInfo: device ID was not present before stream was opened."; error( RtAudioError::WARNING ); return info; } return devices_[ device ]; } char driverName[32]; ASIOError result = drivers.asioGetDriverName( (int) device, driverName, 32 ); if ( result != ASE_OK ) { errorStream_ << "RtApiAsio::getDeviceInfo: unable to get driver name (" << getAsioErrorString( result ) << ")."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } info.name = driverName; if ( !drivers.loadDriver( driverName ) ) { errorStream_ << "RtApiAsio::getDeviceInfo: unable to load driver (" << driverName << ")."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } result = ASIOInit( &driverInfo ); if ( result != ASE_OK ) { errorStream_ << "RtApiAsio::getDeviceInfo: error (" << getAsioErrorString( result ) << ") initializing driver (" << driverName << ")."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } // Determine the device channel information. 
  long inputChannels, outputChannels;
  result = ASIOGetChannels( &inputChannels, &outputChannels );
  if ( result != ASE_OK ) {
    drivers.removeCurrentDriver();
    errorStream_ << "RtApiAsio::getDeviceInfo: error (" << getAsioErrorString( result ) << ") getting channel count (" << driverName << ").";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  info.outputChannels = outputChannels;
  info.inputChannels = inputChannels;
  if ( info.outputChannels > 0 && info.inputChannels > 0 )
    info.duplexChannels = (info.outputChannels > info.inputChannels) ? info.inputChannels : info.outputChannels;

  // Determine the supported sample rates.
  info.sampleRates.clear();
  for ( unsigned int i=0; i<MAX_SAMPLE_RATES; i++ ) {
    result = ASIOCanSampleRate( (ASIOSampleRate) SAMPLE_RATES[i] );
    if ( result == ASE_OK ) {
      info.sampleRates.push_back( SAMPLE_RATES[i] );

      if ( !info.preferredSampleRate || ( SAMPLE_RATES[i] <= 48000 && SAMPLE_RATES[i] > info.preferredSampleRate ) )
        info.preferredSampleRate = SAMPLE_RATES[i];
    }
  }

  // Determine supported data types ... just check first channel and assume rest are the same.
  ASIOChannelInfo channelInfo;
  channelInfo.channel = 0;
  channelInfo.isInput = true;
  if ( info.inputChannels <= 0 ) channelInfo.isInput = false;
  result = ASIOGetChannelInfo( &channelInfo );
  if ( result != ASE_OK ) {
    drivers.removeCurrentDriver();
    errorStream_ << "RtApiAsio::getDeviceInfo: error (" << getAsioErrorString( result ) << ") getting driver channel info (" << driverName << ").";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
    return info;
  }

  info.nativeFormats = 0;
  if ( channelInfo.type == ASIOSTInt16MSB || channelInfo.type == ASIOSTInt16LSB )
    info.nativeFormats |= RTAUDIO_SINT16;
  else if ( channelInfo.type == ASIOSTInt32MSB || channelInfo.type == ASIOSTInt32LSB )
    info.nativeFormats |= RTAUDIO_SINT32;
  else if ( channelInfo.type == ASIOSTFloat32MSB || channelInfo.type == ASIOSTFloat32LSB )
    info.nativeFormats |= RTAUDIO_FLOAT32;
  else if ( channelInfo.type == ASIOSTFloat64MSB || channelInfo.type == ASIOSTFloat64LSB )
    info.nativeFormats |= RTAUDIO_FLOAT64;
  else if ( channelInfo.type == ASIOSTInt24MSB || channelInfo.type == ASIOSTInt24LSB )
    info.nativeFormats |= RTAUDIO_SINT24;

  if ( info.outputChannels > 0 )
    if ( getDefaultOutputDevice() == device ) info.isDefaultOutput = true;
  if ( info.inputChannels > 0 )
    if ( getDefaultInputDevice() == device ) info.isDefaultInput = true;

  info.probed = true;
  drivers.removeCurrentDriver();
  return info;
}

static void bufferSwitch( long index, ASIOBool /*processNow*/ )
{
  RtApiAsio *object = (RtApiAsio *) asioCallbackInfo->object;
  object->callbackEvent( index );
}

void RtApiAsio :: saveDeviceInfo( void )
{
  devices_.clear();

  unsigned int nDevices = getDeviceCount();
  devices_.resize( nDevices );
  for ( unsigned int i=0; i<nDevices; i++ )
    devices_[i] = getDeviceInfo( i );
}

// Function head reconstructed from upstream RtAudio; this span was corrupted in this copy of the file.
bool RtApiAsio :: probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                                   unsigned int firstChannel, unsigned int sampleRate,
                                   RtAudioFormat format, unsigned int *bufferSize,
                                   RtAudio::StreamOptions *options )
{
  bool isDuplexInput = mode == INPUT && stream_.mode == OUTPUT;

  // For ASIO, a duplex stream MUST use the same driver.
  if ( isDuplexInput && stream_.device[0] != device ) {
    errorText_ = "RtApiAsio::probeDeviceOpen: an ASIO duplex stream must use the same device for input and output!";
    return FAILURE;
  }

  char driverName[32];
  ASIOError result = drivers.asioGetDriverName( (int) device, driverName, 32 );
  if ( result != ASE_OK ) {
    errorStream_ << "RtApiAsio::probeDeviceOpen: unable to get driver name (" << getAsioErrorString( result ) << ").";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // Only load the driver once for duplex stream.
  if ( !isDuplexInput ) {
    // The getDeviceInfo() function will not work when a stream is open
    // because ASIO does not allow multiple devices to run at the same
    // time.  Thus, we'll probe the system before opening a stream and
    // save the results for use by getDeviceInfo().
    this->saveDeviceInfo();

    if ( !drivers.loadDriver( driverName ) ) {
      errorStream_ << "RtApiAsio::probeDeviceOpen: unable to load driver (" << driverName << ").";
      errorText_ = errorStream_.str();
      return FAILURE;
    }

    result = ASIOInit( &driverInfo );
    if ( result != ASE_OK ) {
      errorStream_ << "RtApiAsio::probeDeviceOpen: error (" << getAsioErrorString( result ) << ") initializing driver (" << driverName << ").";
      errorText_ = errorStream_.str();
      return FAILURE;
    }
  }

  // keep them before any "goto error", they are used for error cleanup + goto device boundary checks
  bool buffersAllocated = false;
  AsioHandle *handle = (AsioHandle *) stream_.apiHandle;
  unsigned int nChannels;

  // Check the device channel count.
  long inputChannels, outputChannels;
  result = ASIOGetChannels( &inputChannels, &outputChannels );
  if ( result != ASE_OK ) {
    errorStream_ << "RtApiAsio::probeDeviceOpen: error (" << getAsioErrorString( result ) << ") getting channel count (" << driverName << ").";
    errorText_ = errorStream_.str();
    goto error;
  }

  if ( ( mode == OUTPUT && (channels+firstChannel) > (unsigned int) outputChannels) ||
       ( mode == INPUT && (channels+firstChannel) > (unsigned int) inputChannels) ) {
    errorStream_ << "RtApiAsio::probeDeviceOpen: driver (" << driverName << ") does not support requested channel count (" << channels << ") + offset (" << firstChannel << ").";
    errorText_ = errorStream_.str();
    goto error;
  }
  stream_.nDeviceChannels[mode] = channels;
  stream_.nUserChannels[mode] = channels;
  stream_.channelOffset[mode] = firstChannel;

  // Verify the sample rate is supported.
  result = ASIOCanSampleRate( (ASIOSampleRate) sampleRate );
  if ( result != ASE_OK ) {
    errorStream_ << "RtApiAsio::probeDeviceOpen: driver (" << driverName << ") does not support requested sample rate (" << sampleRate << ").";
    errorText_ = errorStream_.str();
    goto error;
  }

  // Get the current sample rate
  ASIOSampleRate currentRate;
  result = ASIOGetSampleRate( &currentRate );
  if ( result != ASE_OK ) {
    errorStream_ << "RtApiAsio::probeDeviceOpen: driver (" << driverName << ") error getting sample rate.";
    errorText_ = errorStream_.str();
    goto error;
  }

  // Set the sample rate only if necessary
  if ( currentRate != sampleRate ) {
    result = ASIOSetSampleRate( (ASIOSampleRate) sampleRate );
    if ( result != ASE_OK ) {
      errorStream_ << "RtApiAsio::probeDeviceOpen: driver (" << driverName << ") error setting sample rate (" << sampleRate << ").";
      errorText_ = errorStream_.str();
      goto error;
    }
  }

  // Determine the driver data type.
ASIOChannelInfo channelInfo; channelInfo.channel = 0; if ( mode == OUTPUT ) channelInfo.isInput = false; else channelInfo.isInput = true; result = ASIOGetChannelInfo( &channelInfo ); if ( result != ASE_OK ) { errorStream_ << "RtApiAsio::probeDeviceOpen: driver (" << driverName << ") error (" << getAsioErrorString( result ) << ") getting data format."; errorText_ = errorStream_.str(); goto error; } // Assuming WINDOWS host is always little-endian. stream_.doByteSwap[mode] = false; stream_.userFormat = format; stream_.deviceFormat[mode] = 0; if ( channelInfo.type == ASIOSTInt16MSB || channelInfo.type == ASIOSTInt16LSB ) { stream_.deviceFormat[mode] = RTAUDIO_SINT16; if ( channelInfo.type == ASIOSTInt16MSB ) stream_.doByteSwap[mode] = true; } else if ( channelInfo.type == ASIOSTInt32MSB || channelInfo.type == ASIOSTInt32LSB ) { stream_.deviceFormat[mode] = RTAUDIO_SINT32; if ( channelInfo.type == ASIOSTInt32MSB ) stream_.doByteSwap[mode] = true; } else if ( channelInfo.type == ASIOSTFloat32MSB || channelInfo.type == ASIOSTFloat32LSB ) { stream_.deviceFormat[mode] = RTAUDIO_FLOAT32; if ( channelInfo.type == ASIOSTFloat32MSB ) stream_.doByteSwap[mode] = true; } else if ( channelInfo.type == ASIOSTFloat64MSB || channelInfo.type == ASIOSTFloat64LSB ) { stream_.deviceFormat[mode] = RTAUDIO_FLOAT64; if ( channelInfo.type == ASIOSTFloat64MSB ) stream_.doByteSwap[mode] = true; } else if ( channelInfo.type == ASIOSTInt24MSB || channelInfo.type == ASIOSTInt24LSB ) { stream_.deviceFormat[mode] = RTAUDIO_SINT24; if ( channelInfo.type == ASIOSTInt24MSB ) stream_.doByteSwap[mode] = true; } if ( stream_.deviceFormat[mode] == 0 ) { errorStream_ << "RtApiAsio::probeDeviceOpen: driver (" << driverName << ") data format not supported by RtAudio."; errorText_ = errorStream_.str(); goto error; } // Set the buffer size. For a duplex stream, this will end up // setting the buffer size based on the input constraints, which // should be ok. 
  long minSize, maxSize, preferSize, granularity;
  result = ASIOGetBufferSize( &minSize, &maxSize, &preferSize, &granularity );
  if ( result != ASE_OK ) {
    errorStream_ << "RtApiAsio::probeDeviceOpen: driver (" << driverName << ") error (" << getAsioErrorString( result ) << ") getting buffer size.";
    errorText_ = errorStream_.str();
    goto error;
  }

  if ( isDuplexInput ) {
    // When this is the duplex input (the output was opened before), we have to
    // use the same buffer size as the output, because the output might use the
    // preferred buffer size, which most likely wasn't passed as input to this
    // call.  The buffer sizes have to be identical anyway, so instead of
    // throwing an error, make them equal.  The caller uses the reference to
    // the "bufferSize" parameter as usual to set up processing buffers.
    *bufferSize = stream_.bufferSize;
  }
  else {
    if ( *bufferSize == 0 ) *bufferSize = preferSize;
    else if ( *bufferSize < (unsigned int) minSize ) *bufferSize = (unsigned int) minSize;
    else if ( *bufferSize > (unsigned int) maxSize ) *bufferSize = (unsigned int) maxSize;
    else if ( granularity == -1 ) {
      // Make sure bufferSize is a power of two.
      int log2_of_min_size = 0;
      int log2_of_max_size = 0;

      for ( unsigned int i = 0; i < sizeof(long) * 8; i++ ) {
        if ( minSize & ((long)1 << i) ) log2_of_min_size = i;
        if ( maxSize & ((long)1 << i) ) log2_of_max_size = i;
      }

      long min_delta = std::abs( (long)*bufferSize - ((long)1 << log2_of_min_size) );
      int min_delta_num = log2_of_min_size;

      for (int i = log2_of_min_size + 1; i <= log2_of_max_size; i++) {
        long current_delta = std::abs( (long)*bufferSize - ((long)1 << i) );
        if (current_delta < min_delta) {
          min_delta = current_delta;
          min_delta_num = i;
        }
      }

      *bufferSize = ( (unsigned int)1 << min_delta_num );
      if ( *bufferSize < (unsigned int) minSize ) *bufferSize = (unsigned int) minSize;
      else if ( *bufferSize > (unsigned int) maxSize ) *bufferSize = (unsigned int) maxSize;
    }
    else if ( granularity != 0 ) {
      // Set to an even multiple of granularity, rounding up.
      *bufferSize = (*bufferSize + granularity-1) / granularity * granularity;
    }
  }

  /*
  // We don't use this check anymore, see above!
  // Left here for reference in case it is ever needed again.
  if ( isDuplexInput && stream_.bufferSize != *bufferSize ) {
    errorText_ = "RtApiAsio::probeDeviceOpen: input/output buffersize discrepancy!";
    goto error;
  }
  */

  stream_.bufferSize = *bufferSize;
  stream_.nBuffers = 2;

  if ( options && options->flags & RTAUDIO_NONINTERLEAVED ) stream_.userInterleaved = false;
  else stream_.userInterleaved = true;

  // ASIO always uses non-interleaved buffers.
  stream_.deviceInterleaved[mode] = false;

  // Allocate, if necessary, our AsioHandle structure for the stream.
  if ( handle == 0 ) {
    try {
      handle = new AsioHandle;
    }
    catch ( std::bad_alloc& ) {
      errorText_ = "RtApiAsio::probeDeviceOpen: error allocating AsioHandle memory.";
      goto error;
    }
    handle->bufferInfos = 0;

    // Create a manual-reset event.
    handle->condition = CreateEvent( NULL,   // no security
                                     TRUE,   // manual-reset
                                     FALSE,  // non-signaled initially
                                     NULL ); // unnamed
    stream_.apiHandle = (void *) handle;
  }

  // Create the ASIO internal buffers.  Since RtAudio sets up input
  // and output separately, we'll have to dispose of previously
  // created output buffers for a duplex stream.
  if ( mode == INPUT && stream_.mode == OUTPUT ) {
    ASIODisposeBuffers();
    if ( handle->bufferInfos ) free( handle->bufferInfos );
  }

  // Allocate, initialize, and save the bufferInfos in our stream callbackInfo structure.
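The buffer-size negotiation above (clamp to [minSize, maxSize], snap to the nearest power of two when granularity is -1, otherwise round up to a multiple of granularity) can be sketched in isolation. `clampBufferSize` is a hypothetical helper for illustration, not part of RtAudio, and it simplifies the power-of-two search:

```cpp
#include <cstdlib>

// Mirror of the ASIO buffer-size negotiation above.  A granularity of -1
// means only power-of-two sizes are accepted; a positive granularity means
// sizes must be a multiple of it.  Hypothetical helper, for illustration.
long clampBufferSize( long bufferSize, long minSize, long maxSize, long granularity )
{
  if ( bufferSize == 0 ) return minSize; // the real code uses preferSize here
  if ( bufferSize < minSize ) return minSize;
  if ( bufferSize > maxSize ) return maxSize;

  if ( granularity == -1 ) {
    // pick the power of two within [minSize, maxSize] closest to bufferSize
    long best = minSize;
    long bestDelta = std::labs( bufferSize - minSize );
    for ( long p = 1; p <= maxSize; p <<= 1 ) {
      if ( p < minSize ) continue;
      long delta = std::labs( bufferSize - p );
      if ( delta < bestDelta ) { bestDelta = delta; best = p; }
    }
    return best;
  }

  if ( granularity != 0 ) // round up to an even multiple of granularity
    return ( bufferSize + granularity - 1 ) / granularity * granularity;

  return bufferSize;
}
```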
  unsigned int i;
  nChannels = stream_.nDeviceChannels[0] + stream_.nDeviceChannels[1];
  handle->bufferInfos = (ASIOBufferInfo *) malloc( nChannels * sizeof(ASIOBufferInfo) );
  if ( handle->bufferInfos == NULL ) {
    errorStream_ << "RtApiAsio::probeDeviceOpen: error allocating bufferInfo memory for driver (" << driverName << ").";
    errorText_ = errorStream_.str();
    goto error;
  }

  ASIOBufferInfo *infos;
  infos = handle->bufferInfos;
  for ( i=0; i<stream_.nDeviceChannels[0]; i++, infos++ ) {
    infos->isInput = ASIOFalse;
    infos->channelNum = i + stream_.channelOffset[0];
    infos->buffers[0] = infos->buffers[1] = 0;
  }
  for ( i=0; i<stream_.nDeviceChannels[1]; i++, infos++ ) {
    infos->isInput = ASIOTrue;
    infos->channelNum = i + stream_.channelOffset[1];
    infos->buffers[0] = infos->buffers[1] = 0;
  }

  // prepare for callbacks
  stream_.sampleRate = sampleRate;
  stream_.device[mode] = device;
  stream_.mode = isDuplexInput ? DUPLEX : mode;

  // store this class instance before registering callbacks, which are going to use it
  asioCallbackInfo = &stream_.callbackInfo;
  stream_.callbackInfo.object = (void *) this;

  // Set up the ASIO callback structure and create the ASIO data buffers.
  asioCallbacks.bufferSwitch = &bufferSwitch;
  asioCallbacks.sampleRateDidChange = &sampleRateChanged;
  asioCallbacks.asioMessage = &asioMessages;
  asioCallbacks.bufferSwitchTimeInfo = NULL;
  result = ASIOCreateBuffers( handle->bufferInfos, nChannels, stream_.bufferSize, &asioCallbacks );
  if ( result != ASE_OK ) {
    // Standard method failed. This can happen with strict/misbehaving drivers that return valid buffer size ranges
    // but only accept the preferred buffer size as parameter for ASIOCreateBuffers (e.g. Creative's ASIO driver).
    // In that case, let's be naïve and try that instead.
    *bufferSize = preferSize;
    stream_.bufferSize = *bufferSize;
    result = ASIOCreateBuffers( handle->bufferInfos, nChannels, stream_.bufferSize, &asioCallbacks );
  }

  if ( result != ASE_OK ) {
    errorStream_ << "RtApiAsio::probeDeviceOpen: driver (" << driverName << ") error (" << getAsioErrorString( result ) << ") creating buffers.";
    errorText_ = errorStream_.str();
    goto error;
  }
  buffersAllocated = true;
  stream_.state = STREAM_STOPPED;

  // Set flags for buffer conversion.
  stream_.doConvertBuffer[mode] = false;
  if ( stream_.userFormat != stream_.deviceFormat[mode] )
    stream_.doConvertBuffer[mode] = true;
  if ( stream_.userInterleaved != stream_.deviceInterleaved[mode] &&
       stream_.nUserChannels[mode] > 1 )
    stream_.doConvertBuffer[mode] = true;

  // Allocate necessary internal buffers
  unsigned long bufferBytes;
  bufferBytes = stream_.nUserChannels[mode] * *bufferSize * formatBytes( stream_.userFormat );
  stream_.userBuffer[mode] = (char *) calloc( bufferBytes, 1 );
  if ( stream_.userBuffer[mode] == NULL ) {
    errorText_ = "RtApiAsio::probeDeviceOpen: error allocating user buffer memory.";
    goto error;
  }

  if ( stream_.doConvertBuffer[mode] ) {

    bool makeBuffer = true;
    bufferBytes = stream_.nDeviceChannels[mode] * formatBytes( stream_.deviceFormat[mode] );
    if ( isDuplexInput && stream_.deviceBuffer ) {
      unsigned long bytesOut = stream_.nDeviceChannels[0] * formatBytes( stream_.deviceFormat[0] );
      if ( bufferBytes <= bytesOut ) makeBuffer = false;
    }

    if ( makeBuffer ) {
      bufferBytes *= *bufferSize;
      if ( stream_.deviceBuffer ) free( stream_.deviceBuffer );
      stream_.deviceBuffer = (char *) calloc( bufferBytes, 1 );
      if ( stream_.deviceBuffer == NULL ) {
        errorText_ = "RtApiAsio::probeDeviceOpen: error allocating device buffer memory.";
        goto error;
      }
    }
  }

  // Determine device latencies
  long inputLatency, outputLatency;
  result = ASIOGetLatencies( &inputLatency, &outputLatency );
  if ( result != ASE_OK ) {
    errorStream_ << "RtApiAsio::probeDeviceOpen: driver (" << driverName << ") error (" <<
      getAsioErrorString( result ) << ") getting latency.";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING ); // warn but don't fail
  }
  else {
    stream_.latency[0] = outputLatency;
    stream_.latency[1] = inputLatency;
  }

  // Setup the buffer conversion information structure.  We don't use
  // buffers to do channel offsets, so we override that parameter
  // here.
  if ( stream_.doConvertBuffer[mode] ) setConvertInfo( mode, 0 );

  return SUCCESS;

 error:
  if ( !isDuplexInput ) {
    // The cleanup for an error on the duplex input is done by
    // RtApi::openStream, so we clean up the single-channel case only.
    if ( buffersAllocated )
      ASIODisposeBuffers();

    drivers.removeCurrentDriver();

    if ( handle ) {
      CloseHandle( handle->condition );
      if ( handle->bufferInfos ) free( handle->bufferInfos );
      delete handle;
      stream_.apiHandle = 0;
    }

    if ( stream_.userBuffer[mode] ) {
      free( stream_.userBuffer[mode] );
      stream_.userBuffer[mode] = 0;
    }

    if ( stream_.deviceBuffer ) {
      free( stream_.deviceBuffer );
      stream_.deviceBuffer = 0;
    }
  }

  return FAILURE;
}

//-----------------------------------------------------------------------------

void RtApiAsio :: closeStream()
{
  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiAsio::closeStream(): no open stream to close!";
    error( RtAudioError::WARNING );
    return;
  }

  if ( stream_.state == STREAM_RUNNING ) {
    stream_.state = STREAM_STOPPED;
    ASIOStop();
  }
  ASIODisposeBuffers();
  drivers.removeCurrentDriver();

  AsioHandle *handle = (AsioHandle *) stream_.apiHandle;
  if ( handle ) {
    CloseHandle( handle->condition );
    if ( handle->bufferInfos ) free( handle->bufferInfos );
    delete handle;
    stream_.apiHandle = 0;
  }

  for ( int i=0; i<2; i++ ) {
    if ( stream_.userBuffer[i] ) {
      free( stream_.userBuffer[i] );
      stream_.userBuffer[i] = 0;
    }
  }

  if ( stream_.deviceBuffer ) {
    free( stream_.deviceBuffer );
    stream_.deviceBuffer = 0;
  }

  stream_.mode = UNINITIALIZED;
  stream_.state = STREAM_CLOSED;
}
bool stopThreadCalled = false;

void RtApiAsio :: startStream()
{
  verifyStream();
  if ( stream_.state == STREAM_RUNNING ) {
    errorText_ = "RtApiAsio::startStream(): the stream is already running!";
    error( RtAudioError::WARNING );
    return;
  }

  #if defined( HAVE_GETTIMEOFDAY )
  gettimeofday( &stream_.lastTickTimestamp, NULL );
  #endif

  AsioHandle *handle = (AsioHandle *) stream_.apiHandle;
  ASIOError result = ASIOStart();
  if ( result != ASE_OK ) {
    errorStream_ << "RtApiAsio::startStream: error (" << getAsioErrorString( result ) << ") starting device.";
    errorText_ = errorStream_.str();
    goto unlock;
  }

  handle->drainCounter = 0;
  handle->internalDrain = false;
  ResetEvent( handle->condition );
  stream_.state = STREAM_RUNNING;
  asioXRun = false;

 unlock:
  stopThreadCalled = false;

  if ( result == ASE_OK ) return;
  error( RtAudioError::SYSTEM_ERROR );
}

void RtApiAsio :: stopStream()
{
  verifyStream();
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiAsio::stopStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  AsioHandle *handle = (AsioHandle *) stream_.apiHandle;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {
    if ( handle->drainCounter == 0 ) {
      handle->drainCounter = 2;
      WaitForSingleObject( handle->condition, INFINITE );  // block until signaled
    }
  }

  stream_.state = STREAM_STOPPED;

  ASIOError result = ASIOStop();
  if ( result != ASE_OK ) {
    errorStream_ << "RtApiAsio::stopStream: error (" << getAsioErrorString( result ) << ") stopping device.";
    errorText_ = errorStream_.str();
  }

  if ( result == ASE_OK ) return;
  error( RtAudioError::SYSTEM_ERROR );
}

void RtApiAsio :: abortStream()
{
  verifyStream();
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiAsio::abortStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  // The following lines were commented-out because some behavior was
  // noted where the device buffers need to be zeroed to avoid
  // continuing sound, even when the device buffers are completely
  // disposed.  So now, calling abort is the same as calling stop.
  // AsioHandle *handle = (AsioHandle *) stream_.apiHandle;
  // handle->drainCounter = 2;
  stopStream();
}

// This function will be called by a spawned thread when the user
// callback function signals that the stream should be stopped or
// aborted.  It is necessary to handle it this way because the
// callbackEvent() function must return before the ASIOStop()
// function will return.
static unsigned __stdcall asioStopStream( void *ptr )
{
  CallbackInfo *info = (CallbackInfo *) ptr;
  RtApiAsio *object = (RtApiAsio *) info->object;

  object->stopStream();
  _endthreadex( 0 );
  return 0;
}

bool RtApiAsio :: callbackEvent( long bufferIndex )
{
  if ( stream_.state == STREAM_STOPPED || stream_.state == STREAM_STOPPING ) return SUCCESS;
  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiAsio::callbackEvent(): the stream is closed ... this shouldn't happen!";
    error( RtAudioError::WARNING );
    return FAILURE;
  }

  CallbackInfo *info = (CallbackInfo *) &stream_.callbackInfo;
  AsioHandle *handle = (AsioHandle *) stream_.apiHandle;

  // Check if we were draining the stream and signal if finished.
  if ( handle->drainCounter > 3 ) {

    stream_.state = STREAM_STOPPING;
    if ( handle->internalDrain == false )
      SetEvent( handle->condition );
    else { // spawn a thread to stop the stream
      unsigned threadId;
      stream_.callbackInfo.thread = _beginthreadex( NULL, 0, &asioStopStream,
                                                    &stream_.callbackInfo, 0, &threadId );
    }
    return SUCCESS;
  }

  // Invoke user callback to get fresh output data UNLESS we are
  // draining stream.
  if ( handle->drainCounter == 0 ) {
    RtAudioCallback callback = (RtAudioCallback) info->callback;
    double streamTime = getStreamTime();
    RtAudioStreamStatus status = 0;
    if ( stream_.mode != INPUT && asioXRun == true ) {
      status |= RTAUDIO_OUTPUT_UNDERFLOW;
      asioXRun = false;
    }
    if ( stream_.mode != OUTPUT && asioXRun == true ) {
      status |= RTAUDIO_INPUT_OVERFLOW;
      asioXRun = false;
    }
    int cbReturnValue = callback( stream_.userBuffer[0], stream_.userBuffer[1],
                                  stream_.bufferSize, streamTime, status, info->userData );
    if ( cbReturnValue == 2 ) {
      stream_.state = STREAM_STOPPING;
      handle->drainCounter = 2;
      unsigned threadId;
      stream_.callbackInfo.thread = _beginthreadex( NULL, 0, &asioStopStream,
                                                    &stream_.callbackInfo, 0, &threadId );
      return SUCCESS;
    }
    else if ( cbReturnValue == 1 ) {
      handle->drainCounter = 1;
      handle->internalDrain = true;
    }
  }

  unsigned int nChannels, bufferBytes, i, j;
  nChannels = stream_.nDeviceChannels[0] + stream_.nDeviceChannels[1];
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {

    bufferBytes = stream_.bufferSize * formatBytes( stream_.deviceFormat[0] );

    if ( handle->drainCounter > 1 ) { // write zeros to the output stream

      for ( i=0, j=0; i<nChannels; i++ ) {
        if ( handle->bufferInfos[i].isInput != ASIOTrue )
          memset( handle->bufferInfos[i].buffers[bufferIndex], 0, bufferBytes );
      }

    }
    else if ( stream_.doConvertBuffer[0] ) {

      convertBuffer( stream_.deviceBuffer, stream_.userBuffer[0], stream_.convertInfo[0] );
      if ( stream_.doByteSwap[0] )
        byteSwapBuffer( stream_.deviceBuffer,
                        stream_.bufferSize * stream_.nDeviceChannels[0],
                        stream_.deviceFormat[0] );

      for ( i=0, j=0; i<nChannels; i++ ) {
        if ( handle->bufferInfos[i].isInput != ASIOTrue )
          memcpy( handle->bufferInfos[i].buffers[bufferIndex],
                  &stream_.deviceBuffer[j++*bufferBytes], bufferBytes );
      }

    }
    else {

      if ( stream_.doByteSwap[0] )
        byteSwapBuffer( stream_.userBuffer[0],
                        stream_.bufferSize * stream_.nUserChannels[0],
                        stream_.userFormat );

      for ( i=0, j=0; i<nChannels; i++ ) {
        if ( handle->bufferInfos[i].isInput != ASIOTrue )
          memcpy( handle->bufferInfos[i].buffers[bufferIndex],
                  &stream_.userBuffer[0][bufferBytes*j++], bufferBytes );
      }
    }
  }

  // Don't bother draining input
  if ( handle->drainCounter ) {
    handle->drainCounter++;
    goto unlock;
  }

  if ( stream_.mode == INPUT || stream_.mode == DUPLEX ) {

    bufferBytes = stream_.bufferSize * formatBytes(stream_.deviceFormat[1]);

    if (stream_.doConvertBuffer[1]) {

      // Always interleave ASIO input data.
      for ( i=0, j=0; i<nChannels; i++ ) {
        if ( handle->bufferInfos[i].isInput == ASIOTrue )
          memcpy( &stream_.deviceBuffer[j++*bufferBytes],
                  handle->bufferInfos[i].buffers[bufferIndex],
                  bufferBytes );
      }

      if ( stream_.doByteSwap[1] )
        byteSwapBuffer( stream_.deviceBuffer,
                        stream_.bufferSize * stream_.nDeviceChannels[1],
                        stream_.deviceFormat[1] );
      convertBuffer( stream_.userBuffer[1], stream_.deviceBuffer, stream_.convertInfo[1] );

    }
    else {
      for ( i=0, j=0; i<nChannels; i++ ) {
        if ( handle->bufferInfos[i].isInput == ASIOTrue ) {
          memcpy( &stream_.userBuffer[1][bufferBytes*j++],
                  handle->bufferInfos[i].buffers[bufferIndex],
                  bufferBytes );
        }
      }

      if ( stream_.doByteSwap[1] )
        byteSwapBuffer( stream_.userBuffer[1],
                        stream_.bufferSize * stream_.nUserChannels[1],
                        stream_.userFormat );
    }
  }

 unlock:
  // The following call was suggested by Malte Clasen.  While the API
  // documentation indicates it should not be required, some device
  // drivers apparently do not function correctly without it.
  ASIOOutputReady();

  RtApi::tickStreamTime();
  return SUCCESS;
}

static void sampleRateChanged( ASIOSampleRate sRate )
{
  // The ASIO documentation says that this usually only happens during
  // external sync.  Audio processing is not stopped by the driver, and the
  // actual sample rate might not even have changed; maybe only the sample
  // rate status of an AES/EBU or S/PDIF digital input at the audio device
  // changed.

  RtApi *object = (RtApi *) asioCallbackInfo->object;
  try {
    object->stopStream();
  }
  catch ( RtAudioError &exception ) {
    std::cerr << "\nRtApiAsio: sampleRateChanged() error (" << exception.getMessage() << ")!\n" << std::endl;
    return;
  }

  std::cerr << "\nRtApiAsio: driver reports sample rate changed to " << sRate << " ... stream stopped!!!\n" << std::endl;
}

static long asioMessages( long selector, long value, void* /*message*/, double* /*opt*/ )
{
  long ret = 0;

  switch( selector ) {
  case kAsioSelectorSupported:
    if ( value == kAsioResetRequest
         || value == kAsioEngineVersion
         || value == kAsioResyncRequest
         || value == kAsioLatenciesChanged
         // The following three were added for ASIO 2.0, you don't
         // necessarily have to support them.
         || value == kAsioSupportsTimeInfo
         || value == kAsioSupportsTimeCode
         || value == kAsioSupportsInputMonitor)
      ret = 1L;
    break;
  case kAsioResetRequest:
    // Defer the task and perform the reset of the driver during the
    // next "safe" situation.  You cannot reset the driver right now,
    // as this code is called from the driver.  Resetting the driver
    // means completely destructing it, i.e. ASIOStop(),
    // ASIODisposeBuffers() and destruction; afterwards you initialize
    // the driver again.
    std::cerr << "\nRtApiAsio: driver reset requested!!!" << std::endl;
    ret = 1L;
    break;
  case kAsioResyncRequest:
    // This informs the application that the driver encountered some
    // non-fatal data loss.  It is used for synchronization purposes
    // of different media.  Added mainly to work around the Win16Mutex
    // problems in Windows 95/98 with the Windows Multimedia system,
    // which could lose data because the Mutex was held too long by
    // another thread.  However a driver can issue it in other
    // situations, too.
    // std::cerr << "\nRtApiAsio: driver resync requested!!!" << std::endl;
    asioXRun = true;
    ret = 1L;
    break;
  case kAsioLatenciesChanged:
    // This will inform the host application that the driver's
    // latencies have changed.  Beware, this does not mean that the
    // buffer sizes have changed!  You might need to update internal
    // delay data.
    std::cerr << "\nRtApiAsio: driver latency may have changed!!!" << std::endl;
    ret = 1L;
    break;
  case kAsioEngineVersion:
    // Return the supported ASIO version of the host application.  If
    // a host application does not implement this selector, ASIO 1.0
    // is assumed by the driver.
    ret = 2L;
    break;
  case kAsioSupportsTimeInfo:
    // Informs the driver whether the
    // asioCallbacks.bufferSwitchTimeInfo() callback is supported.
    // For compatibility with ASIO 1.0 drivers the host application
    // should always support the "old" bufferSwitch method, too.
    ret = 0;
    break;
  case kAsioSupportsTimeCode:
    // Informs the driver whether the application is interested in time
    // code info.  If an application does not need to know about time
    // code, the driver has less work to do.
    ret = 0;
    break;
  }
  return ret;
}

static const char* getAsioErrorString( ASIOError result )
{
  struct Messages
  {
    ASIOError value;
    const char *message;
  };

  static const Messages m[] =
    {
      { ASE_NotPresent, "Hardware input or output is not present or available." },
      { ASE_HWMalfunction, "Hardware is malfunctioning." },
      { ASE_InvalidParameter, "Invalid input parameter." },
      { ASE_InvalidMode, "Invalid mode." },
      { ASE_SPNotAdvancing, "Sample position not advancing." },
      { ASE_NoClock, "Sample clock or rate cannot be determined or is not present." },
      { ASE_NoMemory, "Not enough memory to complete the request."
      }
    };

  for ( unsigned int i = 0; i < sizeof(m)/sizeof(m[0]); ++i )
    if ( m[i].value == result ) return m[i].message;

  return "Unknown error.";
}

//******************** End of __WINDOWS_ASIO__ *********************//
#endif

#if defined(__WINDOWS_WASAPI__) // Windows WASAPI API

// Authored by Marcus Tomlinson, April 2014
// - Introduces support for the Windows WASAPI API
// - Aims to deliver bit streams to and from hardware at the lowest possible latency, via the absolute minimum buffer sizes required
// - Provides flexible stream configuration to an otherwise strict and inflexible WASAPI interface
// - Includes automatic internal conversion of sample rate and buffer size between hardware and the user

#ifndef INITGUID
  #define INITGUID
#endif

#include <mfapi.h>
#include <mferror.h>
#include <mfplay.h>
#include <mftransform.h>
#include <wmcodecdsp.h>

#include <audioclient.h>
#include <avrt.h>
#include <mmdeviceapi.h>
#include <functiondiscoverykeys_devpkey.h>

#ifndef MF_E_TRANSFORM_NEED_MORE_INPUT
  #define MF_E_TRANSFORM_NEED_MORE_INPUT _HRESULT_TYPEDEF_(0xc00d6d72)
#endif

#ifndef MFSTARTUP_NOSOCKET
  #define MFSTARTUP_NOSOCKET 0x1
#endif

#ifdef _MSC_VER
  #pragma comment( lib, "ksuser" )
  #pragma comment( lib, "mfplat.lib" )
  #pragma comment( lib, "mfuuid.lib" )
  #pragma comment( lib, "wmcodecdspuuid" )
#endif

//=============================================================================

#define SAFE_RELEASE( objectPtr )\
if ( objectPtr )\
{\
  objectPtr->Release();\
  objectPtr = NULL;\
}

typedef HANDLE ( __stdcall *TAvSetMmThreadCharacteristicsPtr )( LPCWSTR TaskName, LPDWORD TaskIndex );

//-----------------------------------------------------------------------------

// WASAPI dictates stream sample rate, format, channel count, and in some cases, buffer size.
// Therefore we must perform all necessary conversions to user buffers in order to satisfy these
// requirements. WasapiBuffer ring buffers are used between HwIn->UserIn and UserOut->HwOut to
// provide intermediate storage for read / write synchronization.
class WasapiBuffer
{
public:
  WasapiBuffer()
    : buffer_( NULL ),
      bufferSize_( 0 ),
      inIndex_( 0 ),
      outIndex_( 0 ) {}

  ~WasapiBuffer() {
    free( buffer_ );
  }

  // sets the length of the internal ring buffer
  void setBufferSize( unsigned int bufferSize, unsigned int formatBytes ) {
    free( buffer_ );

    buffer_ = ( char* ) calloc( bufferSize, formatBytes );

    bufferSize_ = bufferSize;
    inIndex_ = 0;
    outIndex_ = 0;
  }

  // attempt to push a buffer into the ring buffer at the current "in" index
  bool pushBuffer( char* buffer, unsigned int bufferSize, RtAudioFormat format ) {
    if ( !buffer ||                 // incoming buffer is NULL
         bufferSize == 0 ||         // incoming buffer has no data
         bufferSize > bufferSize_ ) // incoming buffer too large
    {
      return false;
    }

    unsigned int relOutIndex = outIndex_;
    unsigned int inIndexEnd = inIndex_ + bufferSize;
    if ( relOutIndex < inIndex_ && inIndexEnd >= bufferSize_ ) {
      relOutIndex += bufferSize_;
    }

    // the "IN" index CAN BEGIN at the "OUT" index
    // the "IN" index CANNOT END at the "OUT" index
    if ( inIndex_ < relOutIndex && inIndexEnd >= relOutIndex ) {
      return false; // not enough space between "in" index and "out" index
    }

    // copy buffer from external to internal
    int fromZeroSize = inIndex_ + bufferSize - bufferSize_;
    fromZeroSize = fromZeroSize < 0 ? 0 : fromZeroSize;
    int fromInSize = bufferSize - fromZeroSize;

    switch( format )
    {
      case RTAUDIO_SINT8:
        memcpy( &( ( char* ) buffer_ )[inIndex_], buffer, fromInSize * sizeof( char ) );
        memcpy( buffer_, &( ( char* ) buffer )[fromInSize], fromZeroSize * sizeof( char ) );
        break;
      case RTAUDIO_SINT16:
        memcpy( &( ( short* ) buffer_ )[inIndex_], buffer, fromInSize * sizeof( short ) );
        memcpy( buffer_, &( ( short* ) buffer )[fromInSize], fromZeroSize * sizeof( short ) );
        break;
      case RTAUDIO_SINT24:
        memcpy( &( ( S24* ) buffer_ )[inIndex_], buffer, fromInSize * sizeof( S24 ) );
        memcpy( buffer_, &( ( S24* ) buffer )[fromInSize], fromZeroSize * sizeof( S24 ) );
        break;
      case RTAUDIO_SINT32:
        memcpy( &( ( int* ) buffer_ )[inIndex_], buffer, fromInSize * sizeof( int ) );
        memcpy( buffer_, &( ( int* ) buffer )[fromInSize], fromZeroSize * sizeof( int ) );
        break;
      case RTAUDIO_FLOAT32:
        memcpy( &( ( float* ) buffer_ )[inIndex_], buffer, fromInSize * sizeof( float ) );
        memcpy( buffer_, &( ( float* ) buffer )[fromInSize], fromZeroSize * sizeof( float ) );
        break;
      case RTAUDIO_FLOAT64:
        memcpy( &( ( double* ) buffer_ )[inIndex_], buffer, fromInSize * sizeof( double ) );
        memcpy( buffer_, &( ( double* ) buffer )[fromInSize], fromZeroSize * sizeof( double ) );
        break;
    }

    // update "in" index
    inIndex_ += bufferSize;
    inIndex_ %= bufferSize_;

    return true;
  }

  // attempt to pull a buffer from the ring buffer from the current "out" index
  bool pullBuffer( char* buffer, unsigned int bufferSize, RtAudioFormat format ) {
    if ( !buffer ||                 // incoming buffer is NULL
         bufferSize == 0 ||         // incoming buffer has no data
         bufferSize > bufferSize_ ) // incoming buffer too large
    {
      return false;
    }

    unsigned int relInIndex = inIndex_;
    unsigned int outIndexEnd = outIndex_ + bufferSize;
    if ( relInIndex < outIndex_ && outIndexEnd >= bufferSize_ ) {
      relInIndex += bufferSize_;
    }

    // the "OUT" index CANNOT BEGIN at the "IN" index
    // the "OUT" index CAN END at the "IN" index
    if ( outIndex_ <= relInIndex && outIndexEnd > relInIndex )
    {
      return false; // not enough space between "out" index and "in" index
    }

    // copy buffer from internal to external
    int fromZeroSize = outIndex_ + bufferSize - bufferSize_;
    fromZeroSize = fromZeroSize < 0 ? 0 : fromZeroSize;
    int fromOutSize = bufferSize - fromZeroSize;

    switch( format )
    {
      case RTAUDIO_SINT8:
        memcpy( buffer, &( ( char* ) buffer_ )[outIndex_], fromOutSize * sizeof( char ) );
        memcpy( &( ( char* ) buffer )[fromOutSize], buffer_, fromZeroSize * sizeof( char ) );
        break;
      case RTAUDIO_SINT16:
        memcpy( buffer, &( ( short* ) buffer_ )[outIndex_], fromOutSize * sizeof( short ) );
        memcpy( &( ( short* ) buffer )[fromOutSize], buffer_, fromZeroSize * sizeof( short ) );
        break;
      case RTAUDIO_SINT24:
        memcpy( buffer, &( ( S24* ) buffer_ )[outIndex_], fromOutSize * sizeof( S24 ) );
        memcpy( &( ( S24* ) buffer )[fromOutSize], buffer_, fromZeroSize * sizeof( S24 ) );
        break;
      case RTAUDIO_SINT32:
        memcpy( buffer, &( ( int* ) buffer_ )[outIndex_], fromOutSize * sizeof( int ) );
        memcpy( &( ( int* ) buffer )[fromOutSize], buffer_, fromZeroSize * sizeof( int ) );
        break;
      case RTAUDIO_FLOAT32:
        memcpy( buffer, &( ( float* ) buffer_ )[outIndex_], fromOutSize * sizeof( float ) );
        memcpy( &( ( float* ) buffer )[fromOutSize], buffer_, fromZeroSize * sizeof( float ) );
        break;
      case RTAUDIO_FLOAT64:
        memcpy( buffer, &( ( double* ) buffer_ )[outIndex_], fromOutSize * sizeof( double ) );
        memcpy( &( ( double* ) buffer )[fromOutSize], buffer_, fromZeroSize * sizeof( double ) );
        break;
    }

    // update "out" index
    outIndex_ += bufferSize;
    outIndex_ %= bufferSize_;

    return true;
  }

private:
  char* buffer_;
  unsigned int bufferSize_;
  unsigned int inIndex_;
  unsigned int outIndex_;
};

//-----------------------------------------------------------------------------

// In order to satisfy WASAPI's buffer requirements, we need a means of converting sample rate
// between HW and the user.
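The wrap-around bookkeeping in pushBuffer/pullBuffer above can be illustrated with a minimal byte ring buffer. RingSketch is a hypothetical reduction for illustration only: it keeps the same index arithmetic and occupancy tests, but drops the per-format memcpy dispatch:

```cpp
#include <vector>

// Minimal ring buffer mirroring WasapiBuffer's "in"/"out" index logic,
// reduced to bytes. Hypothetical sketch, not RtAudio code.
struct RingSketch {
  std::vector<char> buf;
  unsigned in = 0, out = 0;
  explicit RingSketch( unsigned n ) : buf( n ) {}

  bool push( const char* src, unsigned n ) {
    if ( n == 0 || n > buf.size() ) return false;
    unsigned relOut = out, inEnd = in + n;
    if ( relOut < in && inEnd >= buf.size() ) relOut += buf.size();
    // the "in" span may begin at, but must not reach, the "out" index
    if ( in < relOut && inEnd >= relOut ) return false;
    for ( unsigned i = 0; i < n; i++ ) buf[( in + i ) % buf.size()] = src[i];
    in = ( in + n ) % buf.size();
    return true;
  }

  bool pull( char* dst, unsigned n ) {
    if ( n == 0 || n > buf.size() ) return false;
    unsigned relIn = in, outEnd = out + n;
    if ( relIn < out && outEnd >= buf.size() ) relIn += buf.size();
    // the "out" span must not begin at the "in" index, but may end there
    if ( out <= relIn && outEnd > relIn ) return false;
    for ( unsigned i = 0; i < n; i++ ) dst[i] = buf[( out + i ) % buf.size()];
    out = ( out + n ) % buf.size();
    return true;
  }
};
```

Note the asymmetry: a push may start exactly where the reader left off, while a pull that would start at the writer's position is rejected as "empty"; that is how the real class distinguishes an empty buffer from a full one.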
// The WasapiResampler class is used to perform this conversion between
// HwIn->UserIn and UserOut->HwOut during the stream callback loop.
class WasapiResampler
{
public:
  WasapiResampler( bool isFloat, unsigned int bitsPerSample, unsigned int channelCount,
                   unsigned int inSampleRate, unsigned int outSampleRate )
    : _bytesPerSample( bitsPerSample / 8 )
    , _channelCount( channelCount )
    , _sampleRatio( ( float ) outSampleRate / inSampleRate )
    , _transformUnk( NULL )
    , _transform( NULL )
    , _mediaType( NULL )
    , _inputMediaType( NULL )
    , _outputMediaType( NULL )

    #ifdef __IWMResamplerProps_FWD_DEFINED__
      , _resamplerProps( NULL )
    #endif
  {
    // 1. Initialization
    MFStartup( MF_VERSION, MFSTARTUP_NOSOCKET );

    // 2. Create Resampler Transform Object
    CoCreateInstance( CLSID_CResamplerMediaObject, NULL, CLSCTX_INPROC_SERVER,
                      IID_IUnknown, ( void** ) &_transformUnk );

    _transformUnk->QueryInterface( IID_PPV_ARGS( &_transform ) );

    #ifdef __IWMResamplerProps_FWD_DEFINED__
      _transformUnk->QueryInterface( IID_PPV_ARGS( &_resamplerProps ) );
      _resamplerProps->SetHalfFilterLength( 60 ); // best conversion quality
    #endif

    // 3. Specify input / output format
    MFCreateMediaType( &_mediaType );
    _mediaType->SetGUID( MF_MT_MAJOR_TYPE, MFMediaType_Audio );
    _mediaType->SetGUID( MF_MT_SUBTYPE, isFloat ?
                         MFAudioFormat_Float : MFAudioFormat_PCM );
    _mediaType->SetUINT32( MF_MT_AUDIO_NUM_CHANNELS, channelCount );
    _mediaType->SetUINT32( MF_MT_AUDIO_SAMPLES_PER_SECOND, inSampleRate );
    _mediaType->SetUINT32( MF_MT_AUDIO_BLOCK_ALIGNMENT, _bytesPerSample * channelCount );
    _mediaType->SetUINT32( MF_MT_AUDIO_AVG_BYTES_PER_SECOND, _bytesPerSample * channelCount * inSampleRate );
    _mediaType->SetUINT32( MF_MT_AUDIO_BITS_PER_SAMPLE, bitsPerSample );
    _mediaType->SetUINT32( MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE );

    MFCreateMediaType( &_inputMediaType );
    _mediaType->CopyAllItems( _inputMediaType );

    _transform->SetInputType( 0, _inputMediaType, 0 );

    MFCreateMediaType( &_outputMediaType );
    _mediaType->CopyAllItems( _outputMediaType );

    _outputMediaType->SetUINT32( MF_MT_AUDIO_SAMPLES_PER_SECOND, outSampleRate );
    _outputMediaType->SetUINT32( MF_MT_AUDIO_AVG_BYTES_PER_SECOND, _bytesPerSample * channelCount * outSampleRate );

    _transform->SetOutputType( 0, _outputMediaType, 0 );

    // 4. Send stream start messages to Resampler
    _transform->ProcessMessage( MFT_MESSAGE_COMMAND_FLUSH, 0 );
    _transform->ProcessMessage( MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, 0 );
    _transform->ProcessMessage( MFT_MESSAGE_NOTIFY_START_OF_STREAM, 0 );
  }

  ~WasapiResampler()
  {
    // 8. Send stream stop messages to Resampler
    _transform->ProcessMessage( MFT_MESSAGE_NOTIFY_END_OF_STREAM, 0 );
    _transform->ProcessMessage( MFT_MESSAGE_NOTIFY_END_STREAMING, 0 );

    // 9. Cleanup
    MFShutdown();

    SAFE_RELEASE( _transformUnk );
    SAFE_RELEASE( _transform );
    SAFE_RELEASE( _mediaType );
    SAFE_RELEASE( _inputMediaType );
    SAFE_RELEASE( _outputMediaType );

    #ifdef __IWMResamplerProps_FWD_DEFINED__
      SAFE_RELEASE( _resamplerProps );
    #endif
  }

  void Convert( char* outBuffer, const char* inBuffer, unsigned int inSampleCount, unsigned int& outSampleCount )
  {
    unsigned int inputBufferSize = _bytesPerSample * _channelCount * inSampleCount;
    if ( _sampleRatio == 1 )
    {
      // no sample rate conversion required
      memcpy( outBuffer, inBuffer, inputBufferSize );
      outSampleCount = inSampleCount;
      return;
    }

    unsigned int outputBufferSize = ( unsigned int ) ceilf( inputBufferSize * _sampleRatio ) + ( _bytesPerSample * _channelCount );

    IMFMediaBuffer* rInBuffer;
    IMFSample* rInSample;
    BYTE* rInByteBuffer = NULL;

    // 5. Create Sample object from input data
    MFCreateMemoryBuffer( inputBufferSize, &rInBuffer );

    rInBuffer->Lock( &rInByteBuffer, NULL, NULL );
    memcpy( rInByteBuffer, inBuffer, inputBufferSize );
    rInBuffer->Unlock();
    rInByteBuffer = NULL;

    rInBuffer->SetCurrentLength( inputBufferSize );

    MFCreateSample( &rInSample );
    rInSample->AddBuffer( rInBuffer );

    // 6. Pass input data to Resampler
    _transform->ProcessInput( 0, rInSample, 0 );

    SAFE_RELEASE( rInBuffer );
    SAFE_RELEASE( rInSample );

    // 7. Perform sample rate conversion
    IMFMediaBuffer* rOutBuffer = NULL;
    BYTE* rOutByteBuffer = NULL;

    MFT_OUTPUT_DATA_BUFFER rOutDataBuffer;
    DWORD rStatus;
    DWORD rBytes = outputBufferSize; // maximum bytes accepted per ProcessOutput

    // 7.1 Create Sample object for output data
    memset( &rOutDataBuffer, 0, sizeof rOutDataBuffer );
    MFCreateSample( &( rOutDataBuffer.pSample ) );
    MFCreateMemoryBuffer( rBytes, &rOutBuffer );
    rOutDataBuffer.pSample->AddBuffer( rOutBuffer );
    rOutDataBuffer.dwStreamID = 0;
    rOutDataBuffer.dwStatus = 0;
    rOutDataBuffer.pEvents = NULL;

    // 7.2 Get output data from Resampler
    if ( _transform->ProcessOutput( 0, 1, &rOutDataBuffer, &rStatus ) == MF_E_TRANSFORM_NEED_MORE_INPUT )
    {
      outSampleCount = 0;
      SAFE_RELEASE( rOutBuffer );
      SAFE_RELEASE( rOutDataBuffer.pSample );
      return;
    }

    // 7.3 Write output data to outBuffer
    SAFE_RELEASE( rOutBuffer );
    rOutDataBuffer.pSample->ConvertToContiguousBuffer( &rOutBuffer );

    rOutBuffer->GetCurrentLength( &rBytes );
    rOutBuffer->Lock( &rOutByteBuffer, NULL, NULL );
    memcpy( outBuffer, rOutByteBuffer, rBytes );
    rOutBuffer->Unlock();
    rOutByteBuffer = NULL;

    outSampleCount = rBytes / _bytesPerSample / _channelCount;
    SAFE_RELEASE( rOutBuffer );
    SAFE_RELEASE( rOutDataBuffer.pSample );
  }

private:
  unsigned int _bytesPerSample;
  unsigned int _channelCount;
  float _sampleRatio;

  IUnknown* _transformUnk;
  IMFTransform* _transform;
  IMFMediaType* _mediaType;
  IMFMediaType* _inputMediaType;
  IMFMediaType* _outputMediaType;

  #ifdef __IWMResamplerProps_FWD_DEFINED__
    IWMResamplerProps* _resamplerProps;
  #endif
};

//-----------------------------------------------------------------------------

// A structure to hold various information related to the WASAPI implementation.
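The output-size bound used by Convert() above (scale the input byte count by the sample ratio, then pad with one extra frame so rounding never truncates) can be isolated as a small helper. `resampleOutputBytes` is a hypothetical function for illustration, not part of RtAudio; the real code also short-circuits entirely when the ratio is exactly 1:

```cpp
#include <cmath>

// Upper bound for the resampler's output buffer, mirroring the computation in
// WasapiResampler::Convert above: scale input bytes by outRate/inRate and add
// one frame (bytesPerSample * channels) of headroom. Hypothetical helper.
unsigned int resampleOutputBytes( unsigned int inputBytes,
                                  unsigned int bytesPerSample,
                                  unsigned int channels,
                                  unsigned int inRate, unsigned int outRate )
{
  float ratio = ( float ) outRate / inRate;
  return ( unsigned int ) ceilf( inputBytes * ratio ) + bytesPerSample * channels;
}
```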
struct WasapiHandle
{
  IAudioClient* captureAudioClient;
  IAudioClient* renderAudioClient;
  IAudioCaptureClient* captureClient;
  IAudioRenderClient* renderClient;
  HANDLE captureEvent;
  HANDLE renderEvent;

  WasapiHandle()
  : captureAudioClient( NULL ),
    renderAudioClient( NULL ),
    captureClient( NULL ),
    renderClient( NULL ),
    captureEvent( NULL ),
    renderEvent( NULL ) {}
};

//=============================================================================

RtApiWasapi::RtApiWasapi()
  : coInitialized_( false ), deviceEnumerator_( NULL )
{
  // WASAPI can run either apartment or multi-threaded
  HRESULT hr = CoInitialize( NULL );
  if ( !FAILED( hr ) )
    coInitialized_ = true;

  // Instantiate device enumerator
  hr = CoCreateInstance( __uuidof( MMDeviceEnumerator ), NULL,
                         CLSCTX_ALL, __uuidof( IMMDeviceEnumerator ),
                         ( void** ) &deviceEnumerator_ );

  // If this runs on an old Windows, it will fail. Ignore and proceed.
  if ( FAILED( hr ) )
    deviceEnumerator_ = NULL;
}

//-----------------------------------------------------------------------------

RtApiWasapi::~RtApiWasapi()
{
  if ( stream_.state != STREAM_CLOSED )
    closeStream();

  SAFE_RELEASE( deviceEnumerator_ );

  // If this object previously called CoInitialize()
  if ( coInitialized_ )
    CoUninitialize();
}

//=============================================================================

unsigned int RtApiWasapi::getDeviceCount( void )
{
  unsigned int captureDeviceCount = 0;
  unsigned int renderDeviceCount = 0;

  IMMDeviceCollection* captureDevices = NULL;
  IMMDeviceCollection* renderDevices = NULL;

  if ( !deviceEnumerator_ )
    return 0;

  // Count capture devices
  errorText_.clear();
  HRESULT hr = deviceEnumerator_->EnumAudioEndpoints( eCapture, DEVICE_STATE_ACTIVE, &captureDevices );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceCount: Unable to retrieve capture device collection.";
    goto Exit;
  }

  hr = captureDevices->GetCount( &captureDeviceCount );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceCount: Unable to retrieve capture device count.";
    goto Exit;
  }

  // Count render devices
  hr = deviceEnumerator_->EnumAudioEndpoints( eRender, DEVICE_STATE_ACTIVE, &renderDevices );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceCount: Unable to retrieve render device collection.";
    goto Exit;
  }

  hr = renderDevices->GetCount( &renderDeviceCount );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceCount: Unable to retrieve render device count.";
    goto Exit;
  }

Exit:
  // release all references
  SAFE_RELEASE( captureDevices );
  SAFE_RELEASE( renderDevices );

  if ( errorText_.empty() )
    return captureDeviceCount + renderDeviceCount;

  error( RtAudioError::DRIVER_ERROR );
  return 0;
}

//-----------------------------------------------------------------------------

RtAudio::DeviceInfo RtApiWasapi::getDeviceInfo( unsigned int device )
{
  RtAudio::DeviceInfo info;
  unsigned int captureDeviceCount = 0;
  unsigned int renderDeviceCount = 0;
  std::string defaultDeviceName;
  bool isCaptureDevice = false;

  PROPVARIANT deviceNameProp;
  PROPVARIANT defaultDeviceNameProp;

  IMMDeviceCollection* captureDevices = NULL;
  IMMDeviceCollection* renderDevices = NULL;
  IMMDevice* devicePtr = NULL;
  IMMDevice* defaultDevicePtr = NULL;
  IAudioClient* audioClient = NULL;
  IPropertyStore* devicePropStore = NULL;
  IPropertyStore* defaultDevicePropStore = NULL;

  WAVEFORMATEX* deviceFormat = NULL;
  WAVEFORMATEX* closestMatchFormat = NULL;

  // probed
  info.probed = false;

  // Count capture devices
  errorText_.clear();
  RtAudioError::Type errorType = RtAudioError::DRIVER_ERROR;
  HRESULT hr = deviceEnumerator_->EnumAudioEndpoints( eCapture, DEVICE_STATE_ACTIVE, &captureDevices );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve capture device collection.";
    goto Exit;
  }

  hr = captureDevices->GetCount( &captureDeviceCount );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve capture device count.";
    goto Exit;
  }

  // Count render devices
  hr = deviceEnumerator_->EnumAudioEndpoints( eRender, DEVICE_STATE_ACTIVE, &renderDevices );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve render device collection.";
    goto Exit;
  }

  hr = renderDevices->GetCount( &renderDeviceCount );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve render device count.";
    goto Exit;
  }

  // validate device index
  if ( device >= captureDeviceCount + renderDeviceCount ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Invalid device index.";
    errorType = RtAudioError::INVALID_USE;
    goto Exit;
  }

  // determine whether index falls within capture or render devices
  if ( device >= renderDeviceCount ) {
    hr = captureDevices->Item( device - renderDeviceCount, &devicePtr );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve capture device handle.";
      goto Exit;
    }
    isCaptureDevice = true;
  }
  else {
    hr = renderDevices->Item( device, &devicePtr );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve render device handle.";
      goto Exit;
    }
    isCaptureDevice = false;
  }

  // get default device name
  if ( isCaptureDevice ) {
    hr = deviceEnumerator_->GetDefaultAudioEndpoint( eCapture, eConsole, &defaultDevicePtr );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve default capture device handle.";
      goto Exit;
    }
  }
  else {
    hr = deviceEnumerator_->GetDefaultAudioEndpoint( eRender, eConsole, &defaultDevicePtr );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve default render device handle.";
      goto Exit;
    }
  }

  hr = defaultDevicePtr->OpenPropertyStore( STGM_READ, &defaultDevicePropStore );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to open default device property store.";
    goto Exit;
  }
  PropVariantInit( &defaultDeviceNameProp );

  hr = defaultDevicePropStore->GetValue( PKEY_Device_FriendlyName, &defaultDeviceNameProp );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve default device property: PKEY_Device_FriendlyName.";
    goto Exit;
  }

  defaultDeviceName = convertCharPointerToStdString( defaultDeviceNameProp.pwszVal );

  // name
  hr = devicePtr->OpenPropertyStore( STGM_READ, &devicePropStore );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to open device property store.";
    goto Exit;
  }
  PropVariantInit( &deviceNameProp );

  hr = devicePropStore->GetValue( PKEY_Device_FriendlyName, &deviceNameProp );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve device property: PKEY_Device_FriendlyName.";
    goto Exit;
  }

  info.name = convertCharPointerToStdString( deviceNameProp.pwszVal );

  // is default
  if ( isCaptureDevice ) {
    info.isDefaultInput = info.name == defaultDeviceName;
    info.isDefaultOutput = false;
  }
  else {
    info.isDefaultInput = false;
    info.isDefaultOutput = info.name == defaultDeviceName;
  }

  // channel count
  hr = devicePtr->Activate( __uuidof( IAudioClient ), CLSCTX_ALL, NULL, ( void** ) &audioClient );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve device audio client.";
    goto Exit;
  }

  hr = audioClient->GetMixFormat( &deviceFormat );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::getDeviceInfo: Unable to retrieve device mix format.";
    goto Exit;
  }

  if ( isCaptureDevice ) {
    info.inputChannels = deviceFormat->nChannels;
    info.outputChannels = 0;
    info.duplexChannels = 0;
  }
  else {
    info.inputChannels = 0;
    info.outputChannels = deviceFormat->nChannels;
    info.duplexChannels = 0;
  }

  // sample rates
  info.sampleRates.clear();

  // allow support for all sample rates as we have a built-in sample rate converter
  for ( unsigned int i = 0; i < MAX_SAMPLE_RATES; i++ ) {
    info.sampleRates.push_back( SAMPLE_RATES[i] );
  }
  info.preferredSampleRate = deviceFormat->nSamplesPerSec;

  // native format
  info.nativeFormats = 0;

  if ( deviceFormat->wFormatTag == WAVE_FORMAT_IEEE_FLOAT ||
       ( deviceFormat->wFormatTag == WAVE_FORMAT_EXTENSIBLE &&
         ( ( WAVEFORMATEXTENSIBLE* ) deviceFormat )->SubFormat == KSDATAFORMAT_SUBTYPE_IEEE_FLOAT ) )
  {
    if ( deviceFormat->wBitsPerSample == 32 ) {
      info.nativeFormats |= RTAUDIO_FLOAT32;
    }
    else if ( deviceFormat->wBitsPerSample == 64 ) {
      info.nativeFormats |= RTAUDIO_FLOAT64;
    }
  }
  else if ( deviceFormat->wFormatTag == WAVE_FORMAT_PCM ||
            ( deviceFormat->wFormatTag == WAVE_FORMAT_EXTENSIBLE &&
              ( ( WAVEFORMATEXTENSIBLE* ) deviceFormat )->SubFormat == KSDATAFORMAT_SUBTYPE_PCM ) )
  {
    if ( deviceFormat->wBitsPerSample == 8 ) {
      info.nativeFormats |= RTAUDIO_SINT8;
    }
    else if ( deviceFormat->wBitsPerSample == 16 ) {
      info.nativeFormats |= RTAUDIO_SINT16;
    }
    else if ( deviceFormat->wBitsPerSample == 24 ) {
      info.nativeFormats |= RTAUDIO_SINT24;
    }
    else if ( deviceFormat->wBitsPerSample == 32 ) {
      info.nativeFormats |= RTAUDIO_SINT32;
    }
  }

  // probed
  info.probed = true;

Exit:
  // release all references
  PropVariantClear( &deviceNameProp );
  PropVariantClear( &defaultDeviceNameProp );

  SAFE_RELEASE( captureDevices );
  SAFE_RELEASE( renderDevices );
  SAFE_RELEASE( devicePtr );
  SAFE_RELEASE( defaultDevicePtr );
  SAFE_RELEASE( audioClient );
  SAFE_RELEASE( devicePropStore );
  SAFE_RELEASE( defaultDevicePropStore );

  CoTaskMemFree( deviceFormat );
  CoTaskMemFree( closestMatchFormat );

  if ( !errorText_.empty() )
    error( errorType );
  return info;
}

//-----------------------------------------------------------------------------

unsigned int RtApiWasapi::getDefaultOutputDevice( void )
{
  for ( unsigned int i = 0; i < getDeviceCount(); i++ ) {
    if ( getDeviceInfo( i ).isDefaultOutput ) {
      return i;
    }
  }
  return 0;
}

//-----------------------------------------------------------------------------

unsigned int RtApiWasapi::getDefaultInputDevice( void )
{
  for ( unsigned int i = 0; i < getDeviceCount(); i++ ) {
    if ( getDeviceInfo( i ).isDefaultInput ) {
      return i;
    }
  }
  return 0;
}

//-----------------------------------------------------------------------------

void RtApiWasapi::closeStream( void )
{
  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiWasapi::closeStream: No open stream to close.";
    error( RtAudioError::WARNING );
    return;
  }

  if ( stream_.state != STREAM_STOPPED )
    stopStream();

  // clean up stream memory
  SAFE_RELEASE( ( ( WasapiHandle* ) stream_.apiHandle )->captureAudioClient )
  SAFE_RELEASE( ( ( WasapiHandle* ) stream_.apiHandle )->renderAudioClient )
  SAFE_RELEASE( ( ( WasapiHandle* ) stream_.apiHandle )->captureClient )
  SAFE_RELEASE( ( ( WasapiHandle* ) stream_.apiHandle )->renderClient )

  if ( ( ( WasapiHandle* ) stream_.apiHandle )->captureEvent )
    CloseHandle( ( ( WasapiHandle* ) stream_.apiHandle )->captureEvent );

  if ( ( ( WasapiHandle* ) stream_.apiHandle )->renderEvent )
    CloseHandle( ( ( WasapiHandle* ) stream_.apiHandle )->renderEvent );

  delete ( WasapiHandle* ) stream_.apiHandle;
  stream_.apiHandle = NULL;

  for ( int i = 0; i < 2; i++ ) {
    if ( stream_.userBuffer[i] ) {
      free( stream_.userBuffer[i] );
      stream_.userBuffer[i] = 0;
    }
  }

  if ( stream_.deviceBuffer ) {
    free( stream_.deviceBuffer );
    stream_.deviceBuffer = 0;
  }

  // update stream state
  stream_.state = STREAM_CLOSED;
}

//-----------------------------------------------------------------------------

void RtApiWasapi::startStream( void )
{
  verifyStream();

  if ( stream_.state == STREAM_RUNNING ) {
    errorText_ = "RtApiWasapi::startStream: The stream is already running.";
    error( RtAudioError::WARNING );
    return;
  }

#if defined( HAVE_GETTIMEOFDAY )
  gettimeofday( &stream_.lastTickTimestamp, NULL );
#endif

  // update stream state
  stream_.state = STREAM_RUNNING;

  // create WASAPI stream thread
  stream_.callbackInfo.thread = ( ThreadHandle ) CreateThread( NULL, 0, runWasapiThread, this, CREATE_SUSPENDED, NULL );

  if ( !stream_.callbackInfo.thread ) {
    errorText_ = "RtApiWasapi::startStream: Unable to instantiate callback thread.";
    error( RtAudioError::THREAD_ERROR );
  }
  else {
    SetThreadPriority( ( void* ) stream_.callbackInfo.thread, stream_.callbackInfo.priority );
    ResumeThread( ( void* ) stream_.callbackInfo.thread );
  }
}
//-----------------------------------------------------------------------------

void RtApiWasapi::stopStream( void )
{
  verifyStream();

  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiWasapi::stopStream: The stream is already stopped.";
    error( RtAudioError::WARNING );
    return;
  }

  // inform stream thread by setting stream state to STREAM_STOPPING
  stream_.state = STREAM_STOPPING;

  // wait until stream thread is stopped
  while( stream_.state != STREAM_STOPPED ) {
    Sleep( 1 );
  }

  // Wait for the last buffer to play before stopping.
  Sleep( 1000 * stream_.bufferSize / stream_.sampleRate );

  // close thread handle
  if ( stream_.callbackInfo.thread && !CloseHandle( ( void* ) stream_.callbackInfo.thread ) ) {
    errorText_ = "RtApiWasapi::stopStream: Unable to close callback thread.";
    error( RtAudioError::THREAD_ERROR );
    return;
  }

  stream_.callbackInfo.thread = (ThreadHandle) NULL;
}

//-----------------------------------------------------------------------------

void RtApiWasapi::abortStream( void )
{
  verifyStream();

  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiWasapi::abortStream: The stream is already stopped.";
    error( RtAudioError::WARNING );
    return;
  }

  // inform stream thread by setting stream state to STREAM_STOPPING
  stream_.state = STREAM_STOPPING;

  // wait until stream thread is stopped
  while ( stream_.state != STREAM_STOPPED ) {
    Sleep( 1 );
  }

  // close thread handle
  if ( stream_.callbackInfo.thread && !CloseHandle( ( void* ) stream_.callbackInfo.thread ) ) {
    errorText_ = "RtApiWasapi::abortStream: Unable to close callback thread.";
    error( RtAudioError::THREAD_ERROR );
    return;
  }

  stream_.callbackInfo.thread = (ThreadHandle) NULL;
}

//-----------------------------------------------------------------------------

bool RtApiWasapi::probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                                   unsigned int firstChannel, unsigned int sampleRate,
                                   RtAudioFormat format, unsigned int* bufferSize,
                                   RtAudio::StreamOptions* options )
{
  bool methodResult = FAILURE;
  unsigned int captureDeviceCount = 0;
  unsigned int renderDeviceCount = 0;

  IMMDeviceCollection* captureDevices = NULL;
  IMMDeviceCollection* renderDevices = NULL;
  IMMDevice* devicePtr = NULL;
  WAVEFORMATEX* deviceFormat = NULL;
  unsigned int bufferBytes;
  stream_.state = STREAM_STOPPED;

  // create API Handle if not already created
  if ( !stream_.apiHandle )
    stream_.apiHandle = ( void* ) new WasapiHandle();

  // Count capture devices
  errorText_.clear();
  RtAudioError::Type errorType = RtAudioError::DRIVER_ERROR;
  HRESULT hr = deviceEnumerator_->EnumAudioEndpoints( eCapture, DEVICE_STATE_ACTIVE, &captureDevices );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve capture device collection.";
    goto Exit;
  }

  hr = captureDevices->GetCount( &captureDeviceCount );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve capture device count.";
    goto Exit;
  }

  // Count render devices
  hr = deviceEnumerator_->EnumAudioEndpoints( eRender, DEVICE_STATE_ACTIVE, &renderDevices );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve render device collection.";
    goto Exit;
  }

  hr = renderDevices->GetCount( &renderDeviceCount );
  if ( FAILED( hr ) ) {
    errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve render device count.";
    goto Exit;
  }

  // validate device index
  if ( device >= captureDeviceCount + renderDeviceCount ) {
    errorType = RtAudioError::INVALID_USE;
    errorText_ = "RtApiWasapi::probeDeviceOpen: Invalid device index.";
    goto Exit;
  }

  // if device index falls within capture devices
  if ( device >= renderDeviceCount ) {
    if ( mode != INPUT ) {
      errorType = RtAudioError::INVALID_USE;
      errorText_ = "RtApiWasapi::probeDeviceOpen: Capture device selected as output device.";
      goto Exit;
    }

    // retrieve captureAudioClient from devicePtr
    IAudioClient*& captureAudioClient = ( ( WasapiHandle* ) stream_.apiHandle )->captureAudioClient;

    hr = captureDevices->Item( device - renderDeviceCount, &devicePtr );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve capture device handle.";
      goto Exit;
    }

    hr = devicePtr->Activate( __uuidof( IAudioClient ), CLSCTX_ALL,
                              NULL, ( void** ) &captureAudioClient );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve capture device audio client.";
      goto Exit;
    }

    hr = captureAudioClient->GetMixFormat( &deviceFormat );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve capture device mix format.";
      goto Exit;
    }

    stream_.nDeviceChannels[mode] = deviceFormat->nChannels;
    captureAudioClient->GetStreamLatency( ( long long* ) &stream_.latency[mode] );
  }

  // if device index falls within render devices and is configured for loopback
  if ( device < renderDeviceCount && mode == INPUT ) {
    // if renderAudioClient is not initialised, initialise it now
    IAudioClient*& renderAudioClient = ( ( WasapiHandle* ) stream_.apiHandle )->renderAudioClient;
    if ( !renderAudioClient ) {
      probeDeviceOpen( device, OUTPUT, channels, firstChannel, sampleRate, format, bufferSize, options );
    }

    // retrieve captureAudioClient from devicePtr
    IAudioClient*& captureAudioClient = ( ( WasapiHandle* ) stream_.apiHandle )->captureAudioClient;

    hr = renderDevices->Item( device, &devicePtr );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve render device handle.";
      goto Exit;
    }

    hr = devicePtr->Activate( __uuidof( IAudioClient ), CLSCTX_ALL,
                              NULL, ( void** ) &captureAudioClient );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve render device audio client.";
      goto Exit;
    }

    hr = captureAudioClient->GetMixFormat( &deviceFormat );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve render device mix format.";
      goto Exit;
    }

    stream_.nDeviceChannels[mode] = deviceFormat->nChannels;
    captureAudioClient->GetStreamLatency( ( long long* ) &stream_.latency[mode] );
  }

  // if device index falls within render devices and is configured for output
  if ( device < renderDeviceCount && mode == OUTPUT ) {
    // if renderAudioClient is already initialised, don't initialise it again
    IAudioClient*& renderAudioClient = ( ( WasapiHandle* ) stream_.apiHandle )->renderAudioClient;
    if ( renderAudioClient ) {
      methodResult = SUCCESS;
      goto Exit;
    }

    hr = renderDevices->Item( device, &devicePtr );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve render device handle.";
      goto Exit;
    }

    hr = devicePtr->Activate( __uuidof( IAudioClient ), CLSCTX_ALL,
                              NULL, ( void** ) &renderAudioClient );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve render device audio client.";
      goto Exit;
    }

    hr = renderAudioClient->GetMixFormat( &deviceFormat );
    if ( FAILED( hr ) ) {
      errorText_ = "RtApiWasapi::probeDeviceOpen: Unable to retrieve render device mix format.";
      goto Exit;
    }

    stream_.nDeviceChannels[mode] = deviceFormat->nChannels;
    renderAudioClient->GetStreamLatency( ( long long* ) &stream_.latency[mode] );
  }

  // fill stream data
  if ( ( stream_.mode == OUTPUT && mode == INPUT ) ||
       ( stream_.mode == INPUT && mode == OUTPUT ) ) {
    stream_.mode = DUPLEX;
  }
  else {
    stream_.mode = mode;
  }

  stream_.device[mode] = device;
  stream_.doByteSwap[mode] = false;
  stream_.sampleRate = sampleRate;
  stream_.bufferSize = *bufferSize;
  stream_.nBuffers = 1;
  stream_.nUserChannels[mode] = channels;
  stream_.channelOffset[mode] = firstChannel;
  stream_.userFormat = format;
  stream_.deviceFormat[mode] = getDeviceInfo( device ).nativeFormats;

  if ( options && options->flags & RTAUDIO_NONINTERLEAVED )
    stream_.userInterleaved = false;
  else
    stream_.userInterleaved = true;
  stream_.deviceInterleaved[mode] = true;

  // Set flags for buffer conversion.
  stream_.doConvertBuffer[mode] = false;
  if ( stream_.userFormat != stream_.deviceFormat[mode] ||
       stream_.nUserChannels[0] != stream_.nDeviceChannels[0] ||
       stream_.nUserChannels[1] != stream_.nDeviceChannels[1] )
    stream_.doConvertBuffer[mode] = true;
  else if ( stream_.userInterleaved != stream_.deviceInterleaved[mode] &&
            stream_.nUserChannels[mode] > 1 )
    stream_.doConvertBuffer[mode] = true;

  if ( stream_.doConvertBuffer[mode] )
    setConvertInfo( mode, 0 );

  // Allocate necessary internal buffers
  bufferBytes = stream_.nUserChannels[mode] * stream_.bufferSize * formatBytes( stream_.userFormat );

  stream_.userBuffer[mode] = ( char* ) calloc( bufferBytes, 1 );
  if ( !stream_.userBuffer[mode] ) {
    errorType = RtAudioError::MEMORY_ERROR;
    errorText_ = "RtApiWasapi::probeDeviceOpen: Error allocating user buffer memory.";
    goto Exit;
  }

  if ( options && options->flags & RTAUDIO_SCHEDULE_REALTIME )
    stream_.callbackInfo.priority = 15;
  else
    stream_.callbackInfo.priority = 0;

  ///! TODO: RTAUDIO_MINIMIZE_LATENCY // Provide stream buffers directly to callback
  ///! TODO: RTAUDIO_HOG_DEVICE       // Exclusive mode

  methodResult = SUCCESS;

Exit:
  // clean up
  SAFE_RELEASE( captureDevices );
  SAFE_RELEASE( renderDevices );
  SAFE_RELEASE( devicePtr );
  CoTaskMemFree( deviceFormat );

  // if method failed, close the stream
  if ( methodResult == FAILURE )
    closeStream();

  if ( !errorText_.empty() )
    error( errorType );
  return methodResult;
}

//=============================================================================

DWORD WINAPI RtApiWasapi::runWasapiThread( void* wasapiPtr )
{
  if ( wasapiPtr )
    ( ( RtApiWasapi* ) wasapiPtr )->wasapiThread();

  return 0;
}

DWORD WINAPI RtApiWasapi::stopWasapiThread( void* wasapiPtr )
{
  if ( wasapiPtr )
    ( ( RtApiWasapi* ) wasapiPtr )->stopStream();

  return 0;
}

DWORD WINAPI RtApiWasapi::abortWasapiThread( void* wasapiPtr )
{
  if ( wasapiPtr )
    ( ( RtApiWasapi* ) wasapiPtr )->abortStream();

  return 0;
}

//-----------------------------------------------------------------------------

void RtApiWasapi::wasapiThread()
{
  // as this is a new thread, we must CoInitialize it
  CoInitialize( NULL );

  HRESULT hr;

  IAudioClient* captureAudioClient = ( ( WasapiHandle* ) stream_.apiHandle )->captureAudioClient;
  IAudioClient* renderAudioClient = ( ( WasapiHandle* ) stream_.apiHandle )->renderAudioClient;
  IAudioCaptureClient* captureClient = ( ( WasapiHandle* ) stream_.apiHandle )->captureClient;
  IAudioRenderClient* renderClient = ( ( WasapiHandle* ) stream_.apiHandle )->renderClient;
  HANDLE captureEvent = ( ( WasapiHandle* ) stream_.apiHandle )->captureEvent;
  HANDLE renderEvent = ( ( WasapiHandle* ) stream_.apiHandle )->renderEvent;

  WAVEFORMATEX* captureFormat = NULL;
  WAVEFORMATEX* renderFormat = NULL;
  float captureSrRatio = 0.0f;
  float renderSrRatio = 0.0f;
  WasapiBuffer captureBuffer;
  WasapiBuffer renderBuffer;
  WasapiResampler* captureResampler = NULL;
  WasapiResampler* renderResampler = NULL;

  // declare local stream variables
  RtAudioCallback callback = ( RtAudioCallback ) stream_.callbackInfo.callback;
  BYTE* streamBuffer = NULL;
  unsigned long captureFlags = 0;
  unsigned int bufferFrameCount = 0;
  unsigned int numFramesPadding = 0;
  unsigned int convBufferSize = 0;
  bool loopbackEnabled = stream_.device[INPUT] == stream_.device[OUTPUT];
  bool callbackPushed = true;
  bool callbackPulled = false;
  bool callbackStopped = false;
  int callbackResult = 0;

  // convBuffer is used to store converted buffers between WASAPI and the user
  char* convBuffer = NULL;
  unsigned int convBuffSize = 0;
  unsigned int deviceBuffSize = 0;

  std::string errorText;
  RtAudioError::Type errorType = RtAudioError::DRIVER_ERROR;

  // Attempt to assign "Pro Audio" characteristic to thread
  HMODULE AvrtDll = LoadLibrary( (LPCTSTR) "AVRT.dll" );
  if ( AvrtDll ) {
    DWORD taskIndex = 0;
    TAvSetMmThreadCharacteristicsPtr AvSetMmThreadCharacteristicsPtr =
      ( TAvSetMmThreadCharacteristicsPtr ) (void(*)()) GetProcAddress( AvrtDll, "AvSetMmThreadCharacteristicsW" );
    AvSetMmThreadCharacteristicsPtr( L"Pro Audio", &taskIndex );

    FreeLibrary( AvrtDll );
  }

  // start capture stream if applicable
  if ( captureAudioClient ) {
    hr = captureAudioClient->GetMixFormat( &captureFormat );
    if ( FAILED( hr ) ) {
      errorText = "RtApiWasapi::wasapiThread: Unable to retrieve device mix format.";
      goto Exit;
    }

    // init captureResampler
    captureResampler = new WasapiResampler( stream_.deviceFormat[INPUT] == RTAUDIO_FLOAT32 || stream_.deviceFormat[INPUT] == RTAUDIO_FLOAT64,
                                            formatBytes( stream_.deviceFormat[INPUT] ) * 8, stream_.nDeviceChannels[INPUT],
                                            captureFormat->nSamplesPerSec, stream_.sampleRate );

    captureSrRatio = ( ( float ) captureFormat->nSamplesPerSec / stream_.sampleRate );

    if ( !captureClient ) {
      hr = captureAudioClient->Initialize( AUDCLNT_SHAREMODE_SHARED,
                                           loopbackEnabled ? AUDCLNT_STREAMFLAGS_LOOPBACK : AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                           0,
                                           0,
                                           captureFormat,
                                           NULL );
      if ( FAILED( hr ) ) {
        errorText = "RtApiWasapi::wasapiThread: Unable to initialize capture audio client.";
        goto Exit;
      }

      hr = captureAudioClient->GetService( __uuidof( IAudioCaptureClient ),
                                           ( void** ) &captureClient );
      if ( FAILED( hr ) ) {
        errorText = "RtApiWasapi::wasapiThread: Unable to retrieve capture client handle.";
        goto Exit;
      }

      // don't configure captureEvent if in loopback mode
      if ( !loopbackEnabled ) {
        // configure captureEvent to trigger on every available capture buffer
        captureEvent = CreateEvent( NULL, FALSE, FALSE, NULL );
        if ( !captureEvent ) {
          errorType = RtAudioError::SYSTEM_ERROR;
          errorText = "RtApiWasapi::wasapiThread: Unable to create capture event.";
          goto Exit;
        }

        hr = captureAudioClient->SetEventHandle( captureEvent );
        if ( FAILED( hr ) ) {
          errorText = "RtApiWasapi::wasapiThread: Unable to set capture event handle.";
          goto Exit;
        }

        ( ( WasapiHandle* ) stream_.apiHandle )->captureEvent = captureEvent;
      }

      ( ( WasapiHandle* ) stream_.apiHandle )->captureClient = captureClient;

      // reset the capture stream
      hr = captureAudioClient->Reset();
      if ( FAILED( hr ) ) {
        errorText = "RtApiWasapi::wasapiThread: Unable to reset capture stream.";
        goto Exit;
      }

      // start the capture stream
      hr = captureAudioClient->Start();
      if ( FAILED( hr ) ) {
        errorText = "RtApiWasapi::wasapiThread: Unable to start capture stream.";
        goto Exit;
      }
    }

    unsigned int inBufferSize = 0;
    hr = captureAudioClient->GetBufferSize( &inBufferSize );
    if ( FAILED( hr ) ) {
      errorText = "RtApiWasapi::wasapiThread: Unable to get capture buffer size.";
      goto Exit;
    }

    // scale outBufferSize according to stream->user sample rate ratio
    unsigned int outBufferSize = ( unsigned int ) ceilf( stream_.bufferSize * captureSrRatio ) * stream_.nDeviceChannels[INPUT];
    inBufferSize *= stream_.nDeviceChannels[INPUT];

    // set captureBuffer size
    captureBuffer.setBufferSize( inBufferSize + outBufferSize, formatBytes( stream_.deviceFormat[INPUT] ) );
  }

  // start render stream if applicable
  if ( renderAudioClient ) {
    hr = renderAudioClient->GetMixFormat( &renderFormat );
    if ( FAILED( hr ) ) {
      errorText = "RtApiWasapi::wasapiThread: Unable to retrieve device mix format.";
      goto Exit;
    }

    // init renderResampler
    renderResampler = new WasapiResampler( stream_.deviceFormat[OUTPUT] == RTAUDIO_FLOAT32 || stream_.deviceFormat[OUTPUT] == RTAUDIO_FLOAT64,
                                           formatBytes( stream_.deviceFormat[OUTPUT] ) * 8, stream_.nDeviceChannels[OUTPUT],
                                           stream_.sampleRate, renderFormat->nSamplesPerSec );

    renderSrRatio = ( ( float ) renderFormat->nSamplesPerSec / stream_.sampleRate );

    if ( !renderClient ) {
      hr = renderAudioClient->Initialize( AUDCLNT_SHAREMODE_SHARED,
                                          AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                          0,
                                          0,
                                          renderFormat,
                                          NULL );
      if ( FAILED( hr ) ) {
        errorText = "RtApiWasapi::wasapiThread: Unable to initialize render audio client.";
        goto Exit;
      }

      hr = renderAudioClient->GetService( __uuidof( IAudioRenderClient ),
                                          ( void** ) &renderClient );
      if ( FAILED( hr ) ) {
        errorText = "RtApiWasapi::wasapiThread: Unable to retrieve render client handle.";
        goto Exit;
      }

      // configure renderEvent to trigger on every available render buffer
      renderEvent = CreateEvent( NULL, FALSE, FALSE, NULL );
      if ( !renderEvent ) {
        errorType = RtAudioError::SYSTEM_ERROR;
        errorText = "RtApiWasapi::wasapiThread: Unable to create render event.";
        goto Exit;
      }

      hr = renderAudioClient->SetEventHandle( renderEvent );
      if ( FAILED( hr ) ) {
        errorText = "RtApiWasapi::wasapiThread: Unable to set render event handle.";
        goto Exit;
      }

      ( ( WasapiHandle* ) stream_.apiHandle )->renderClient = renderClient;
      ( ( WasapiHandle* ) stream_.apiHandle )->renderEvent = renderEvent;

      // reset the render stream
      hr = renderAudioClient->Reset();
      if ( FAILED( hr ) ) {
        errorText = "RtApiWasapi::wasapiThread: Unable to reset render stream.";
        goto Exit;
      }

      // start the render stream
      hr = renderAudioClient->Start();
      if ( FAILED( hr ) ) {
        errorText = "RtApiWasapi::wasapiThread: Unable to start render stream.";
        goto Exit;
      }
    }

    unsigned int outBufferSize = 0;
    hr = renderAudioClient->GetBufferSize( &outBufferSize );
    if ( FAILED( hr ) ) {
      errorText = "RtApiWasapi::wasapiThread: Unable to get render buffer size.";
      goto Exit;
    }

    // scale inBufferSize according to user->stream sample rate ratio
    unsigned int inBufferSize = ( unsigned int ) ceilf( stream_.bufferSize * renderSrRatio ) * stream_.nDeviceChannels[OUTPUT];
    outBufferSize *= stream_.nDeviceChannels[OUTPUT];

    // set renderBuffer size
    renderBuffer.setBufferSize( inBufferSize + outBufferSize, formatBytes( stream_.deviceFormat[OUTPUT] ) );
  }

  // malloc buffer memory
  if ( stream_.mode == INPUT ) {
    using namespace std; // for ceilf
    convBuffSize = ( size_t ) ( ceilf( stream_.bufferSize * captureSrRatio ) ) * stream_.nDeviceChannels[INPUT] * formatBytes( stream_.deviceFormat[INPUT] );
    deviceBuffSize = stream_.bufferSize * stream_.nDeviceChannels[INPUT] * formatBytes( stream_.deviceFormat[INPUT] );
  }
  else if ( stream_.mode == OUTPUT ) {
    convBuffSize = ( size_t ) ( ceilf( stream_.bufferSize * renderSrRatio ) ) * stream_.nDeviceChannels[OUTPUT] * formatBytes( stream_.deviceFormat[OUTPUT] );
    deviceBuffSize = stream_.bufferSize * stream_.nDeviceChannels[OUTPUT] * formatBytes( stream_.deviceFormat[OUTPUT] );
  }
  else if ( stream_.mode == DUPLEX ) {
    convBuffSize = std::max( ( size_t ) ( ceilf( stream_.bufferSize * captureSrRatio ) ) * stream_.nDeviceChannels[INPUT] * formatBytes( stream_.deviceFormat[INPUT] ),
                             ( size_t ) ( ceilf( stream_.bufferSize * renderSrRatio ) ) * stream_.nDeviceChannels[OUTPUT] * formatBytes( stream_.deviceFormat[OUTPUT] ) );
    deviceBuffSize = std::max( stream_.bufferSize * stream_.nDeviceChannels[INPUT] * formatBytes( stream_.deviceFormat[INPUT] ),
                               stream_.bufferSize * stream_.nDeviceChannels[OUTPUT] * formatBytes( stream_.deviceFormat[OUTPUT] ) );
  }

  convBuffSize *= 2; // allow overflow for *SrRatio remainders
  convBuffer = ( char* ) calloc( convBuffSize, 1 );
  stream_.deviceBuffer = ( char* ) calloc( deviceBuffSize, 1 );
  if ( !convBuffer || !stream_.deviceBuffer ) {
    errorType = RtAudioError::MEMORY_ERROR;
    errorText = "RtApiWasapi::wasapiThread: Error allocating device buffer memory.";
    goto Exit;
  }

  // stream process loop
  while ( stream_.state != STREAM_STOPPING ) {
    if ( !callbackPulled ) {
      // Callback Input
      // ==============
      // 1. Pull callback buffer from inputBuffer
      // 2. If 1. was successful: Convert callback buffer to user sample rate and channel count
      //                          Convert callback buffer to user format

      if ( captureAudioClient ) {
        int samplesToPull = ( unsigned int ) floorf( stream_.bufferSize * captureSrRatio );
        if ( captureSrRatio != 1 ) {
          // account for remainders
          samplesToPull--;
        }

        convBufferSize = 0;
        while ( convBufferSize < stream_.bufferSize ) {
          // Pull callback buffer from inputBuffer
          callbackPulled = captureBuffer.pullBuffer( convBuffer,
                                                     samplesToPull * stream_.nDeviceChannels[INPUT],
                                                     stream_.deviceFormat[INPUT] );
          if ( !callbackPulled ) {
            break;
          }

          // Convert callback buffer to user sample rate
          unsigned int deviceBufferOffset = convBufferSize * stream_.nDeviceChannels[INPUT] * formatBytes( stream_.deviceFormat[INPUT] );
          unsigned int convSamples = 0;

          captureResampler->Convert( stream_.deviceBuffer + deviceBufferOffset,
                                     convBuffer,
                                     samplesToPull,
                                     convSamples );

          convBufferSize += convSamples;
          samplesToPull = 1; // now pull one sample at a time until we have stream_.bufferSize samples
        }

        if ( callbackPulled ) {
          if ( stream_.doConvertBuffer[INPUT] ) {
            // Convert callback buffer to user format
            convertBuffer( stream_.userBuffer[INPUT],
                           stream_.deviceBuffer,
                           stream_.convertInfo[INPUT] );
          }
          else {
            // no further conversion, simple copy deviceBuffer to userBuffer
            memcpy( stream_.userBuffer[INPUT],
                    stream_.deviceBuffer,
                    stream_.bufferSize * stream_.nUserChannels[INPUT] * formatBytes( stream_.userFormat ) );
          }
        }
      }
      else {
        // if there is no capture stream, set callbackPulled flag
        callbackPulled = true;
      }

      // Execute Callback
      // ================
      // 1. Execute user callback method
      // 2. Handle return value from callback

      // if callback has not requested the stream to stop
      if ( callbackPulled && !callbackStopped ) {
        // Execute user callback method
        callbackResult = callback( stream_.userBuffer[OUTPUT],
                                   stream_.userBuffer[INPUT],
                                   stream_.bufferSize,
                                   getStreamTime(),
                                   captureFlags & AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY ? RTAUDIO_INPUT_OVERFLOW : 0,
                                   stream_.callbackInfo.userData );

        // tick stream time
        RtApi::tickStreamTime();

        // Handle return value from callback
        if ( callbackResult == 1 ) {
          // instantiate a thread to stop this thread
          HANDLE threadHandle = CreateThread( NULL, 0, stopWasapiThread, this, 0, NULL );
          if ( !threadHandle ) {
            errorType = RtAudioError::THREAD_ERROR;
            errorText = "RtApiWasapi::wasapiThread: Unable to instantiate stream stop thread.";
            goto Exit;
          }
          else if ( !CloseHandle( threadHandle ) ) {
            errorType = RtAudioError::THREAD_ERROR;
            errorText = "RtApiWasapi::wasapiThread: Unable to close stream stop thread handle.";
            goto Exit;
          }

          callbackStopped = true;
        }
        else if ( callbackResult == 2 ) {
          // instantiate a thread to stop this thread
          HANDLE threadHandle = CreateThread( NULL, 0, abortWasapiThread, this, 0, NULL );
          if ( !threadHandle ) {
            errorType = RtAudioError::THREAD_ERROR;
            errorText = "RtApiWasapi::wasapiThread: Unable to instantiate stream abort thread.";
            goto Exit;
          }
          else if ( !CloseHandle( threadHandle ) ) {
            errorType = RtAudioError::THREAD_ERROR;
            errorText = "RtApiWasapi::wasapiThread: Unable to close stream abort thread handle.";
            goto Exit;
          }

          callbackStopped = true;
        }
      }
    }

    // Callback Output
    // ===============
    // 1. Convert callback buffer to stream format
    // 2. Convert callback buffer to stream sample rate and channel count
    // 3. Push callback buffer into outputBuffer

    if ( renderAudioClient && callbackPulled ) {
      // if the last call to renderBuffer.PushBuffer() was successful
      if ( callbackPushed || convBufferSize == 0 ) {
        if ( stream_.doConvertBuffer[OUTPUT] ) {
          // Convert callback buffer to stream format
          convertBuffer( stream_.deviceBuffer,
                         stream_.userBuffer[OUTPUT],
                         stream_.convertInfo[OUTPUT] );
        }
        else {
          // no further conversion, simple copy userBuffer to deviceBuffer
          memcpy( stream_.deviceBuffer,
                  stream_.userBuffer[OUTPUT],
                  stream_.bufferSize * stream_.nUserChannels[OUTPUT] * formatBytes( stream_.userFormat ) );
        }

        // Convert callback buffer to stream sample rate
        renderResampler->Convert( convBuffer,
                                  stream_.deviceBuffer,
                                  stream_.bufferSize,
                                  convBufferSize );
      }

      // Push callback buffer into outputBuffer
      callbackPushed = renderBuffer.pushBuffer( convBuffer,
                                                convBufferSize * stream_.nDeviceChannels[OUTPUT],
                                                stream_.deviceFormat[OUTPUT] );
    }
    else {
      // if there is no render stream, set callbackPushed flag
      callbackPushed = true;
    }

    // Stream Capture
    // ==============
    // 1. Get capture buffer from stream
    // 2. Push capture buffer into inputBuffer
    // 3. If 2. was successful: Release capture buffer

    if ( captureAudioClient ) {
      // if the callback input buffer was not pulled from captureBuffer, wait for next capture event
      if ( !callbackPulled ) {
        WaitForSingleObject( loopbackEnabled ?
renderEvent : captureEvent, INFINITE ); } // Get capture buffer from stream hr = captureClient->GetBuffer( &streamBuffer, &bufferFrameCount, &captureFlags, NULL, NULL ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to retrieve capture buffer."; goto Exit; } if ( bufferFrameCount != 0 ) { // Push capture buffer into inputBuffer if ( captureBuffer.pushBuffer( ( char* ) streamBuffer, bufferFrameCount * stream_.nDeviceChannels[INPUT], stream_.deviceFormat[INPUT] ) ) { // Release capture buffer hr = captureClient->ReleaseBuffer( bufferFrameCount ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to release capture buffer."; goto Exit; } } else { // Inform WASAPI that capture was unsuccessful hr = captureClient->ReleaseBuffer( 0 ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to release capture buffer."; goto Exit; } } } else { // Inform WASAPI that capture was unsuccessful hr = captureClient->ReleaseBuffer( 0 ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to release capture buffer."; goto Exit; } } } // Stream Render // ============= // 1. Get render buffer from stream // 2. Pull next buffer from outputBuffer // 3. If 2. 
was successful: Fill render buffer with next buffer // Release render buffer if ( renderAudioClient ) { // if the callback output buffer was not pushed to renderBuffer, wait for next render event if ( callbackPulled && !callbackPushed ) { WaitForSingleObject( renderEvent, INFINITE ); } // Get render buffer from stream hr = renderAudioClient->GetBufferSize( &bufferFrameCount ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to retrieve render buffer size."; goto Exit; } hr = renderAudioClient->GetCurrentPadding( &numFramesPadding ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to retrieve render buffer padding."; goto Exit; } bufferFrameCount -= numFramesPadding; if ( bufferFrameCount != 0 ) { hr = renderClient->GetBuffer( bufferFrameCount, &streamBuffer ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to retrieve render buffer."; goto Exit; } // Pull next buffer from outputBuffer // Fill render buffer with next buffer if ( renderBuffer.pullBuffer( ( char* ) streamBuffer, bufferFrameCount * stream_.nDeviceChannels[OUTPUT], stream_.deviceFormat[OUTPUT] ) ) { // Release render buffer hr = renderClient->ReleaseBuffer( bufferFrameCount, 0 ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to release render buffer."; goto Exit; } } else { // Inform WASAPI that render was unsuccessful hr = renderClient->ReleaseBuffer( 0, 0 ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to release render buffer."; goto Exit; } } } else { // Inform WASAPI that render was unsuccessful hr = renderClient->ReleaseBuffer( 0, 0 ); if ( FAILED( hr ) ) { errorText = "RtApiWasapi::wasapiThread: Unable to release render buffer."; goto Exit; } } } // if the callback buffer was pushed renderBuffer reset callbackPulled flag if ( callbackPushed ) { // unsetting the callbackPulled flag lets the stream know that // the audio device is ready for another callback output buffer. 
callbackPulled = false; } } Exit: // clean up CoTaskMemFree( captureFormat ); CoTaskMemFree( renderFormat ); free ( convBuffer ); delete renderResampler; delete captureResampler; CoUninitialize(); // update stream state stream_.state = STREAM_STOPPED; if ( !errorText.empty() ) { errorText_ = errorText; error( errorType ); } } //******************** End of __WINDOWS_WASAPI__ *********************// #endif #if defined(__WINDOWS_DS__) // Windows DirectSound API // Modified by Robin Davies, October 2005 // - Improvements to DirectX pointer chasing. // - Bug fix for non-power-of-two Asio granularity used by Edirol PCR-A30. // - Auto-call CoInitialize for DSOUND and ASIO platforms. // Various revisions for RtAudio 4.0 by Gary Scavone, April 2007 // Changed device query structure for RtAudio 4.0.7, January 2010 #include <windows.h> #include <process.h> #include <mmsystem.h> #include <mmreg.h> #include <dsound.h> #include <assert.h> #include <algorithm> #if defined(__MINGW32__) // missing from latest mingw winapi #define WAVE_FORMAT_96M08 0x00010000 /* 96 kHz, Mono, 8-bit */ #define WAVE_FORMAT_96S08 0x00020000 /* 96 kHz, Stereo, 8-bit */ #define WAVE_FORMAT_96M16 0x00040000 /* 96 kHz, Mono, 16-bit */ #define WAVE_FORMAT_96S16 0x00080000 /* 96 kHz, Stereo, 16-bit */ #endif #define MINIMUM_DEVICE_BUFFER_SIZE 32768 #ifdef _MSC_VER // if Microsoft Visual C++ #pragma comment( lib, "winmm.lib" ) // then, auto-link winmm.lib. Otherwise, it has to be added manually. #endif static inline DWORD dsPointerBetween( DWORD pointer, DWORD laterPointer, DWORD earlierPointer, DWORD bufferSize ) { if ( pointer > bufferSize ) pointer -= bufferSize; if ( laterPointer < earlierPointer ) laterPointer += bufferSize; if ( pointer < earlierPointer ) pointer += bufferSize; return pointer >= earlierPointer && pointer < laterPointer; } // A structure to hold various information related to the DirectSound // API implementation.
struct DsHandle { unsigned int drainCounter; // Tracks callback counts when draining bool internalDrain; // Indicates if stop is initiated from callback or not. void *id[2]; void *buffer[2]; bool xrun[2]; UINT bufferPointer[2]; DWORD dsBufferSize[2]; DWORD dsPointerLeadTime[2]; // the number of bytes ahead of the safe pointer to lead by. HANDLE condition; DsHandle() :drainCounter(0), internalDrain(false) { id[0] = 0; id[1] = 0; buffer[0] = 0; buffer[1] = 0; xrun[0] = false; xrun[1] = false; bufferPointer[0] = 0; bufferPointer[1] = 0; } }; // Declarations for utility functions, callbacks, and structures // specific to the DirectSound implementation. static BOOL CALLBACK deviceQueryCallback( LPGUID lpguid, LPCTSTR description, LPCTSTR module, LPVOID lpContext ); static const char* getErrorString( int code ); static unsigned __stdcall callbackHandler( void *ptr ); struct DsDevice { LPGUID id[2]; bool validId[2]; bool found; std::string name; DsDevice() : found(false) { validId[0] = false; validId[1] = false; } }; struct DsProbeData { bool isInput; std::vector<DsDevice>* dsDevices; }; RtApiDs :: RtApiDs() { // Dsound will run both-threaded. If CoInitialize fails, then just // accept whatever the mainline chose for a threading model. coInitialized_ = false; HRESULT hr = CoInitialize( NULL ); if ( !FAILED( hr ) ) coInitialized_ = true; } RtApiDs :: ~RtApiDs() { if ( stream_.state != STREAM_CLOSED ) closeStream(); if ( coInitialized_ ) CoUninitialize(); // balanced call. } // The DirectSound default output is always the first device. unsigned int RtApiDs :: getDefaultOutputDevice( void ) { return 0; } // The DirectSound default input is always the first input device, // which is the first capture device enumerated. unsigned int RtApiDs :: getDefaultInputDevice( void ) { return 0; } unsigned int RtApiDs :: getDeviceCount( void ) { // Set query flag for previously found devices to false, so that we // can check for any devices that have disappeared.
for ( unsigned int i=0; i<dsDevices.size(); i++ ) dsDevices[i].found = false; // Query DirectSound devices. struct DsProbeData probeInfo; probeInfo.isInput = false; probeInfo.dsDevices = &dsDevices; HRESULT result = DirectSoundEnumerate( (LPDSENUMCALLBACK) deviceQueryCallback, &probeInfo ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::getDeviceCount: error (" << getErrorString( result ) << ") enumerating output devices!"; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); } // Query DirectSoundCapture devices. probeInfo.isInput = true; result = DirectSoundCaptureEnumerate( (LPDSENUMCALLBACK) deviceQueryCallback, &probeInfo ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::getDeviceCount: error (" << getErrorString( result ) << ") enumerating input devices!"; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); } // Clean out any devices that may have disappeared. for ( unsigned int i=0; i<dsDevices.size(); ) { if ( dsDevices[i].found == false ) dsDevices.erase( dsDevices.begin() + i ); else i++; } return static_cast<unsigned int>(dsDevices.size()); } RtAudio::DeviceInfo RtApiDs :: getDeviceInfo( unsigned int device ) { RtAudio::DeviceInfo info; info.probed = false; if ( dsDevices.size() == 0 ) { // Force a query of all devices getDeviceCount(); if ( dsDevices.size() == 0 ) { errorText_ = "RtApiDs::getDeviceInfo: no devices found!"; error( RtAudioError::INVALID_USE ); return info; } } if ( device >= dsDevices.size() ) { errorText_ = "RtApiDs::getDeviceInfo: device ID is invalid!"; error( RtAudioError::INVALID_USE ); return info; } HRESULT result; if ( dsDevices[ device ].validId[0] == false ) goto probeInput; LPDIRECTSOUND output; DSCAPS outCaps; result = DirectSoundCreate( dsDevices[ device ].id[0], &output, NULL ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::getDeviceInfo: error (" << getErrorString( result ) << ") opening output device (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); goto probeInput; } outCaps.dwSize = sizeof( outCaps ); result = output->GetCaps( &outCaps ); if ( FAILED( result ) ) { output->Release(); errorStream_ << "RtApiDs::getDeviceInfo: error (" << getErrorString( result ) << ") getting capabilities!"; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); goto probeInput; } // Get output channel information. info.outputChannels = ( outCaps.dwFlags & DSCAPS_PRIMARYSTEREO ) ? 2 : 1; // Get sample rate information. info.sampleRates.clear(); for ( unsigned int k=0; k<MAX_SAMPLE_RATES; k++ ) { if ( SAMPLE_RATES[k] >= (unsigned int) outCaps.dwMinSecondarySampleRate && SAMPLE_RATES[k] <= (unsigned int) outCaps.dwMaxSecondarySampleRate ) { info.sampleRates.push_back( SAMPLE_RATES[k] ); if ( !info.preferredSampleRate || ( SAMPLE_RATES[k] <= 48000 && SAMPLE_RATES[k] > info.preferredSampleRate ) ) info.preferredSampleRate = SAMPLE_RATES[k]; } } // Get format information.
if ( outCaps.dwFlags & DSCAPS_PRIMARY16BIT ) info.nativeFormats |= RTAUDIO_SINT16; if ( outCaps.dwFlags & DSCAPS_PRIMARY8BIT ) info.nativeFormats |= RTAUDIO_SINT8; output->Release(); if ( getDefaultOutputDevice() == device ) info.isDefaultOutput = true; if ( dsDevices[ device ].validId[1] == false ) { info.name = dsDevices[ device ].name; info.probed = true; return info; } probeInput: LPDIRECTSOUNDCAPTURE input; result = DirectSoundCaptureCreate( dsDevices[ device ].id[1], &input, NULL ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::getDeviceInfo: error (" << getErrorString( result ) << ") opening input device (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } DSCCAPS inCaps; inCaps.dwSize = sizeof( inCaps ); result = input->GetCaps( &inCaps ); if ( FAILED( result ) ) { input->Release(); errorStream_ << "RtApiDs::getDeviceInfo: error (" << getErrorString( result ) << ") getting object capabilities (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } // Get input channel information. info.inputChannels = inCaps.dwChannels; // Get sample rate and format information. 
std::vector<unsigned int> rates; if ( inCaps.dwChannels >= 2 ) { if ( inCaps.dwFormats & WAVE_FORMAT_1S16 ) info.nativeFormats |= RTAUDIO_SINT16; if ( inCaps.dwFormats & WAVE_FORMAT_2S16 ) info.nativeFormats |= RTAUDIO_SINT16; if ( inCaps.dwFormats & WAVE_FORMAT_4S16 ) info.nativeFormats |= RTAUDIO_SINT16; if ( inCaps.dwFormats & WAVE_FORMAT_96S16 ) info.nativeFormats |= RTAUDIO_SINT16; if ( inCaps.dwFormats & WAVE_FORMAT_1S08 ) info.nativeFormats |= RTAUDIO_SINT8; if ( inCaps.dwFormats & WAVE_FORMAT_2S08 ) info.nativeFormats |= RTAUDIO_SINT8; if ( inCaps.dwFormats & WAVE_FORMAT_4S08 ) info.nativeFormats |= RTAUDIO_SINT8; if ( inCaps.dwFormats & WAVE_FORMAT_96S08 ) info.nativeFormats |= RTAUDIO_SINT8; if ( info.nativeFormats & RTAUDIO_SINT16 ) { if ( inCaps.dwFormats & WAVE_FORMAT_1S16 ) rates.push_back( 11025 ); if ( inCaps.dwFormats & WAVE_FORMAT_2S16 ) rates.push_back( 22050 ); if ( inCaps.dwFormats & WAVE_FORMAT_4S16 ) rates.push_back( 44100 ); if ( inCaps.dwFormats & WAVE_FORMAT_96S16 ) rates.push_back( 96000 ); } else if ( info.nativeFormats & RTAUDIO_SINT8 ) { if ( inCaps.dwFormats & WAVE_FORMAT_1S08 ) rates.push_back( 11025 ); if ( inCaps.dwFormats & WAVE_FORMAT_2S08 ) rates.push_back( 22050 ); if ( inCaps.dwFormats & WAVE_FORMAT_4S08 ) rates.push_back( 44100 ); if ( inCaps.dwFormats & WAVE_FORMAT_96S08 ) rates.push_back( 96000 ); } } else if ( inCaps.dwChannels == 1 ) { if ( inCaps.dwFormats & WAVE_FORMAT_1M16 ) info.nativeFormats |= RTAUDIO_SINT16; if ( inCaps.dwFormats & WAVE_FORMAT_2M16 ) info.nativeFormats |= RTAUDIO_SINT16; if ( inCaps.dwFormats & WAVE_FORMAT_4M16 ) info.nativeFormats |= RTAUDIO_SINT16; if ( inCaps.dwFormats & WAVE_FORMAT_96M16 ) info.nativeFormats |= RTAUDIO_SINT16; if ( inCaps.dwFormats & WAVE_FORMAT_1M08 ) info.nativeFormats |= RTAUDIO_SINT8; if ( inCaps.dwFormats & WAVE_FORMAT_2M08 ) info.nativeFormats |= RTAUDIO_SINT8; if ( inCaps.dwFormats & WAVE_FORMAT_4M08 ) info.nativeFormats |= RTAUDIO_SINT8; if ( inCaps.dwFormats & WAVE_FORMAT_96M08 )
info.nativeFormats |= RTAUDIO_SINT8; if ( info.nativeFormats & RTAUDIO_SINT16 ) { if ( inCaps.dwFormats & WAVE_FORMAT_1M16 ) rates.push_back( 11025 ); if ( inCaps.dwFormats & WAVE_FORMAT_2M16 ) rates.push_back( 22050 ); if ( inCaps.dwFormats & WAVE_FORMAT_4M16 ) rates.push_back( 44100 ); if ( inCaps.dwFormats & WAVE_FORMAT_96M16 ) rates.push_back( 96000 ); } else if ( info.nativeFormats & RTAUDIO_SINT8 ) { if ( inCaps.dwFormats & WAVE_FORMAT_1M08 ) rates.push_back( 11025 ); if ( inCaps.dwFormats & WAVE_FORMAT_2M08 ) rates.push_back( 22050 ); if ( inCaps.dwFormats & WAVE_FORMAT_4M08 ) rates.push_back( 44100 ); if ( inCaps.dwFormats & WAVE_FORMAT_96M08 ) rates.push_back( 96000 ); } } else info.inputChannels = 0; // technically, this would be an error input->Release(); if ( info.inputChannels == 0 ) return info; // Copy the supported rates to the info structure but avoid duplication. bool found; for ( unsigned int i=0; i<rates.size(); i++ ) { found = false; for ( unsigned int j=0; j<info.sampleRates.size(); j++ ) { if ( rates[i] == info.sampleRates[j] ) { found = true; break; } } if ( found == false ) info.sampleRates.push_back( rates[i] ); } std::sort( info.sampleRates.begin(), info.sampleRates.end() ); // If device opens for both playback and capture, we determine the channels. if ( info.outputChannels > 0 && info.inputChannels > 0 ) info.duplexChannels = (info.outputChannels > info.inputChannels) ? info.inputChannels : info.outputChannels; if ( device == 0 ) info.isDefaultInput = true; // Copy name and return. info.name = dsDevices[ device ].name; info.probed = true; return info; } bool RtApiDs :: probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels, unsigned int firstChannel, unsigned int sampleRate, RtAudioFormat format, unsigned int *bufferSize, RtAudio::StreamOptions *options ) { if ( channels + firstChannel > 2 ) { errorText_ = "RtApiDs::probeDeviceOpen: DirectSound does not support more than 2 channels per device."; return FAILURE; } size_t nDevices = dsDevices.size(); if ( nDevices == 0 ) { // This should not happen because a check is made before this function is called. errorText_ = "RtApiDs::probeDeviceOpen: no devices found!"; return FAILURE; } if ( device >= nDevices ) { // This should not happen because a check is made before this function is called.
errorText_ = "RtApiDs::probeDeviceOpen: device ID is invalid!"; return FAILURE; } if ( mode == OUTPUT ) { if ( dsDevices[ device ].validId[0] == false ) { errorStream_ << "RtApiDs::probeDeviceOpen: device (" << device << ") does not support output!"; errorText_ = errorStream_.str(); return FAILURE; } } else { // mode == INPUT if ( dsDevices[ device ].validId[1] == false ) { errorStream_ << "RtApiDs::probeDeviceOpen: device (" << device << ") does not support input!"; errorText_ = errorStream_.str(); return FAILURE; } } // According to a note in PortAudio, using GetDesktopWindow() // instead of GetForegroundWindow() is supposed to avoid problems // that occur when the application's window is not the foreground // window. Also, if the application window closes before the // DirectSound buffer, DirectSound can crash. In the past, I had // problems when using GetDesktopWindow() but it seems fine now // (January 2010). I'll leave it commented here. // HWND hWnd = GetForegroundWindow(); HWND hWnd = GetDesktopWindow(); // Check the numberOfBuffers parameter and limit the lowest value to // two. This is a judgement call and a value of two is probably too // low for capture, but it should work for playback. int nBuffers = 0; if ( options ) nBuffers = options->numberOfBuffers; if ( options && options->flags & RTAUDIO_MINIMIZE_LATENCY ) nBuffers = 2; if ( nBuffers < 2 ) nBuffers = 3; // Check the lower range of the user-specified buffer size and set // (arbitrarily) to a lower bound of 32. if ( *bufferSize < 32 ) *bufferSize = 32; // Create the wave format structure. The data format setting will // be determined later. WAVEFORMATEX waveFormat; ZeroMemory( &waveFormat, sizeof(WAVEFORMATEX) ); waveFormat.wFormatTag = WAVE_FORMAT_PCM; waveFormat.nChannels = channels + firstChannel; waveFormat.nSamplesPerSec = (unsigned long) sampleRate; // Determine the device buffer size. 
By default, we'll use the value // defined above (32K), but we will grow it to make allowances for // very large software buffer sizes. DWORD dsBufferSize = MINIMUM_DEVICE_BUFFER_SIZE; DWORD dsPointerLeadTime = 0; void *ohandle = 0, *bhandle = 0; HRESULT result; if ( mode == OUTPUT ) { LPDIRECTSOUND output; result = DirectSoundCreate( dsDevices[ device ].id[0], &output, NULL ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") opening output device (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } DSCAPS outCaps; outCaps.dwSize = sizeof( outCaps ); result = output->GetCaps( &outCaps ); if ( FAILED( result ) ) { output->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") getting capabilities (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } // Check channel information. if ( channels + firstChannel == 2 && !( outCaps.dwFlags & DSCAPS_PRIMARYSTEREO ) ) { errorStream_ << "RtApiDs::getDeviceInfo: the output device (" << dsDevices[ device ].name << ") does not support stereo playback."; errorText_ = errorStream_.str(); return FAILURE; } // Check format information. Use 16-bit format unless not // supported or user requests 8-bit. if ( outCaps.dwFlags & DSCAPS_PRIMARY16BIT && !( format == RTAUDIO_SINT8 && outCaps.dwFlags & DSCAPS_PRIMARY8BIT ) ) { waveFormat.wBitsPerSample = 16; stream_.deviceFormat[mode] = RTAUDIO_SINT16; } else { waveFormat.wBitsPerSample = 8; stream_.deviceFormat[mode] = RTAUDIO_SINT8; } stream_.userFormat = format; // Update wave format structure and buffer information. 
waveFormat.nBlockAlign = waveFormat.nChannels * waveFormat.wBitsPerSample / 8; waveFormat.nAvgBytesPerSec = waveFormat.nSamplesPerSec * waveFormat.nBlockAlign; dsPointerLeadTime = nBuffers * (*bufferSize) * (waveFormat.wBitsPerSample / 8) * channels; // If the user wants an even bigger buffer, increase the device buffer size accordingly. while ( dsPointerLeadTime * 2U > dsBufferSize ) dsBufferSize *= 2; // Set cooperative level to DSSCL_EXCLUSIVE ... sound stops when window focus changes. // result = output->SetCooperativeLevel( hWnd, DSSCL_EXCLUSIVE ); // Set cooperative level to DSSCL_PRIORITY ... sound remains when window focus changes. result = output->SetCooperativeLevel( hWnd, DSSCL_PRIORITY ); if ( FAILED( result ) ) { output->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") setting cooperative level (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } // Even though we will write to the secondary buffer, we need to // access the primary buffer to set the correct output format // (since the default is 8-bit, 22 kHz!). Setup the DS primary // buffer description. DSBUFFERDESC bufferDescription; ZeroMemory( &bufferDescription, sizeof( DSBUFFERDESC ) ); bufferDescription.dwSize = sizeof( DSBUFFERDESC ); bufferDescription.dwFlags = DSBCAPS_PRIMARYBUFFER; // Obtain the primary buffer LPDIRECTSOUNDBUFFER buffer; result = output->CreateSoundBuffer( &bufferDescription, &buffer, NULL ); if ( FAILED( result ) ) { output->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") accessing primary buffer (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } // Set the primary DS buffer sound format. 
result = buffer->SetFormat( &waveFormat ); if ( FAILED( result ) ) { output->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") setting primary buffer format (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } // Setup the secondary DS buffer description. ZeroMemory( &bufferDescription, sizeof( DSBUFFERDESC ) ); bufferDescription.dwSize = sizeof( DSBUFFERDESC ); bufferDescription.dwFlags = ( DSBCAPS_STICKYFOCUS | DSBCAPS_GLOBALFOCUS | DSBCAPS_GETCURRENTPOSITION2 | DSBCAPS_LOCHARDWARE ); // Force hardware mixing bufferDescription.dwBufferBytes = dsBufferSize; bufferDescription.lpwfxFormat = &waveFormat; // Try to create the secondary DS buffer. If that doesn't work, // try to use software mixing. Otherwise, there's a problem. result = output->CreateSoundBuffer( &bufferDescription, &buffer, NULL ); if ( FAILED( result ) ) { bufferDescription.dwFlags = ( DSBCAPS_STICKYFOCUS | DSBCAPS_GLOBALFOCUS | DSBCAPS_GETCURRENTPOSITION2 | DSBCAPS_LOCSOFTWARE ); // Force software mixing result = output->CreateSoundBuffer( &bufferDescription, &buffer, NULL ); if ( FAILED( result ) ) { output->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") creating secondary buffer (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } } // Get the buffer size ... might be different from what we specified. 
DSBCAPS dsbcaps; dsbcaps.dwSize = sizeof( DSBCAPS ); result = buffer->GetCaps( &dsbcaps ); if ( FAILED( result ) ) { output->Release(); buffer->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") getting buffer settings (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } dsBufferSize = dsbcaps.dwBufferBytes; // Lock the DS buffer LPVOID audioPtr; DWORD dataLen; result = buffer->Lock( 0, dsBufferSize, &audioPtr, &dataLen, NULL, NULL, 0 ); if ( FAILED( result ) ) { output->Release(); buffer->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") locking buffer (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } // Zero the DS buffer ZeroMemory( audioPtr, dataLen ); // Unlock the DS buffer result = buffer->Unlock( audioPtr, dataLen, NULL, 0 ); if ( FAILED( result ) ) { output->Release(); buffer->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") unlocking buffer (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } ohandle = (void *) output; bhandle = (void *) buffer; } if ( mode == INPUT ) { LPDIRECTSOUNDCAPTURE input; result = DirectSoundCaptureCreate( dsDevices[ device ].id[1], &input, NULL ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") opening input device (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } DSCCAPS inCaps; inCaps.dwSize = sizeof( inCaps ); result = input->GetCaps( &inCaps ); if ( FAILED( result ) ) { input->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") getting input capabilities (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } // Check channel information. 
if ( inCaps.dwChannels < channels + firstChannel ) { errorText_ = "RtApiDs::getDeviceInfo: the input device does not support requested input channels."; return FAILURE; } // Check format information. Use 16-bit format unless user // requests 8-bit. DWORD deviceFormats; if ( channels + firstChannel == 2 ) { deviceFormats = WAVE_FORMAT_1S08 | WAVE_FORMAT_2S08 | WAVE_FORMAT_4S08 | WAVE_FORMAT_96S08; if ( format == RTAUDIO_SINT8 && inCaps.dwFormats & deviceFormats ) { waveFormat.wBitsPerSample = 8; stream_.deviceFormat[mode] = RTAUDIO_SINT8; } else { // assume 16-bit is supported waveFormat.wBitsPerSample = 16; stream_.deviceFormat[mode] = RTAUDIO_SINT16; } } else { // channel == 1 deviceFormats = WAVE_FORMAT_1M08 | WAVE_FORMAT_2M08 | WAVE_FORMAT_4M08 | WAVE_FORMAT_96M08; if ( format == RTAUDIO_SINT8 && inCaps.dwFormats & deviceFormats ) { waveFormat.wBitsPerSample = 8; stream_.deviceFormat[mode] = RTAUDIO_SINT8; } else { // assume 16-bit is supported waveFormat.wBitsPerSample = 16; stream_.deviceFormat[mode] = RTAUDIO_SINT16; } } stream_.userFormat = format; // Update wave format structure and buffer information. waveFormat.nBlockAlign = waveFormat.nChannels * waveFormat.wBitsPerSample / 8; waveFormat.nAvgBytesPerSec = waveFormat.nSamplesPerSec * waveFormat.nBlockAlign; dsPointerLeadTime = nBuffers * (*bufferSize) * (waveFormat.wBitsPerSample / 8) * channels; // If the user wants an even bigger buffer, increase the device buffer size accordingly. while ( dsPointerLeadTime * 2U > dsBufferSize ) dsBufferSize *= 2; // Setup the secondary DS buffer description. DSCBUFFERDESC bufferDescription; ZeroMemory( &bufferDescription, sizeof( DSCBUFFERDESC ) ); bufferDescription.dwSize = sizeof( DSCBUFFERDESC ); bufferDescription.dwFlags = 0; bufferDescription.dwReserved = 0; bufferDescription.dwBufferBytes = dsBufferSize; bufferDescription.lpwfxFormat = &waveFormat; // Create the capture buffer. 
LPDIRECTSOUNDCAPTUREBUFFER buffer; result = input->CreateCaptureBuffer( &bufferDescription, &buffer, NULL ); if ( FAILED( result ) ) { input->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") creating input buffer (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } // Get the buffer size ... might be different from what we specified. DSCBCAPS dscbcaps; dscbcaps.dwSize = sizeof( DSCBCAPS ); result = buffer->GetCaps( &dscbcaps ); if ( FAILED( result ) ) { input->Release(); buffer->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") getting buffer settings (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } dsBufferSize = dscbcaps.dwBufferBytes; // NOTE: We could have a problem here if this is a duplex stream // and the play and capture hardware buffer sizes are different // (I'm actually not sure if that is a problem or not). // Currently, we are not verifying that. 
// Lock the capture buffer LPVOID audioPtr; DWORD dataLen; result = buffer->Lock( 0, dsBufferSize, &audioPtr, &dataLen, NULL, NULL, 0 ); if ( FAILED( result ) ) { input->Release(); buffer->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") locking input buffer (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } // Zero the buffer ZeroMemory( audioPtr, dataLen ); // Unlock the buffer result = buffer->Unlock( audioPtr, dataLen, NULL, 0 ); if ( FAILED( result ) ) { input->Release(); buffer->Release(); errorStream_ << "RtApiDs::probeDeviceOpen: error (" << getErrorString( result ) << ") unlocking input buffer (" << dsDevices[ device ].name << ")!"; errorText_ = errorStream_.str(); return FAILURE; } ohandle = (void *) input; bhandle = (void *) buffer; } // Set various stream parameters DsHandle *handle = 0; stream_.nDeviceChannels[mode] = channels + firstChannel; stream_.nUserChannels[mode] = channels; stream_.bufferSize = *bufferSize; stream_.channelOffset[mode] = firstChannel; stream_.deviceInterleaved[mode] = true; if ( options && options->flags & RTAUDIO_NONINTERLEAVED ) stream_.userInterleaved = false; else stream_.userInterleaved = true; // Set flag for buffer conversion stream_.doConvertBuffer[mode] = false; if (stream_.nUserChannels[mode] != stream_.nDeviceChannels[mode]) stream_.doConvertBuffer[mode] = true; if (stream_.userFormat != stream_.deviceFormat[mode]) stream_.doConvertBuffer[mode] = true; if ( stream_.userInterleaved != stream_.deviceInterleaved[mode] && stream_.nUserChannels[mode] > 1 ) stream_.doConvertBuffer[mode] = true; // Allocate necessary internal buffers long bufferBytes = stream_.nUserChannels[mode] * *bufferSize * formatBytes( stream_.userFormat ); stream_.userBuffer[mode] = (char *) calloc( bufferBytes, 1 ); if ( stream_.userBuffer[mode] == NULL ) { errorText_ = "RtApiDs::probeDeviceOpen: error allocating user buffer memory."; goto error; } if ( 
stream_.doConvertBuffer[mode] ) { bool makeBuffer = true; bufferBytes = stream_.nDeviceChannels[mode] * formatBytes( stream_.deviceFormat[mode] ); if ( mode == INPUT ) { if ( stream_.mode == OUTPUT && stream_.deviceBuffer ) { unsigned long bytesOut = stream_.nDeviceChannels[0] * formatBytes( stream_.deviceFormat[0] ); if ( bufferBytes <= (long) bytesOut ) makeBuffer = false; } } if ( makeBuffer ) { bufferBytes *= *bufferSize; if ( stream_.deviceBuffer ) free( stream_.deviceBuffer ); stream_.deviceBuffer = (char *) calloc( bufferBytes, 1 ); if ( stream_.deviceBuffer == NULL ) { errorText_ = "RtApiDs::probeDeviceOpen: error allocating device buffer memory."; goto error; } } } // Allocate our DsHandle structures for the stream. if ( stream_.apiHandle == 0 ) { try { handle = new DsHandle; } catch ( std::bad_alloc& ) { errorText_ = "RtApiDs::probeDeviceOpen: error allocating DsHandle memory."; goto error; } // Create a manual-reset event. handle->condition = CreateEvent( NULL, // no security TRUE, // manual-reset FALSE, // non-signaled initially NULL ); // unnamed stream_.apiHandle = (void *) handle; } else handle = (DsHandle *) stream_.apiHandle; handle->id[mode] = ohandle; handle->buffer[mode] = bhandle; handle->dsBufferSize[mode] = dsBufferSize; handle->dsPointerLeadTime[mode] = dsPointerLeadTime; stream_.device[mode] = device; stream_.state = STREAM_STOPPED; if ( stream_.mode == OUTPUT && mode == INPUT ) // We had already set up an output stream. stream_.mode = DUPLEX; else stream_.mode = mode; stream_.nBuffers = nBuffers; stream_.sampleRate = sampleRate; // Setup the buffer conversion information structure. if ( stream_.doConvertBuffer[mode] ) setConvertInfo( mode, firstChannel ); // Setup the callback thread.
if ( stream_.callbackInfo.isRunning == false ) { unsigned threadId; stream_.callbackInfo.isRunning = true; stream_.callbackInfo.object = (void *) this; stream_.callbackInfo.thread = _beginthreadex( NULL, 0, &callbackHandler, &stream_.callbackInfo, 0, &threadId ); if ( stream_.callbackInfo.thread == 0 ) { errorText_ = "RtApiDs::probeDeviceOpen: error creating callback thread!"; goto error; } // Boost DS thread priority SetThreadPriority( (HANDLE) stream_.callbackInfo.thread, THREAD_PRIORITY_HIGHEST ); } return SUCCESS; error: if ( handle ) { if ( handle->buffer[0] ) { // the object pointer can be NULL and valid LPDIRECTSOUND object = (LPDIRECTSOUND) handle->id[0]; LPDIRECTSOUNDBUFFER buffer = (LPDIRECTSOUNDBUFFER) handle->buffer[0]; if ( buffer ) buffer->Release(); object->Release(); } if ( handle->buffer[1] ) { LPDIRECTSOUNDCAPTURE object = (LPDIRECTSOUNDCAPTURE) handle->id[1]; LPDIRECTSOUNDCAPTUREBUFFER buffer = (LPDIRECTSOUNDCAPTUREBUFFER) handle->buffer[1]; if ( buffer ) buffer->Release(); object->Release(); } CloseHandle( handle->condition ); delete handle; stream_.apiHandle = 0; } for ( int i=0; i<2; i++ ) { if ( stream_.userBuffer[i] ) { free( stream_.userBuffer[i] ); stream_.userBuffer[i] = 0; } } if ( stream_.deviceBuffer ) { free( stream_.deviceBuffer ); stream_.deviceBuffer = 0; } stream_.state = STREAM_CLOSED; return FAILURE; } void RtApiDs :: closeStream() { if ( stream_.state == STREAM_CLOSED ) { errorText_ = "RtApiDs::closeStream(): no open stream to close!"; error( RtAudioError::WARNING ); return; } // Stop the callback thread. 
stream_.callbackInfo.isRunning = false; WaitForSingleObject( (HANDLE) stream_.callbackInfo.thread, INFINITE ); CloseHandle( (HANDLE) stream_.callbackInfo.thread ); DsHandle *handle = (DsHandle *) stream_.apiHandle; if ( handle ) { if ( handle->buffer[0] ) { // the object pointer can be NULL and valid LPDIRECTSOUND object = (LPDIRECTSOUND) handle->id[0]; LPDIRECTSOUNDBUFFER buffer = (LPDIRECTSOUNDBUFFER) handle->buffer[0]; if ( buffer ) { buffer->Stop(); buffer->Release(); } object->Release(); } if ( handle->buffer[1] ) { LPDIRECTSOUNDCAPTURE object = (LPDIRECTSOUNDCAPTURE) handle->id[1]; LPDIRECTSOUNDCAPTUREBUFFER buffer = (LPDIRECTSOUNDCAPTUREBUFFER) handle->buffer[1]; if ( buffer ) { buffer->Stop(); buffer->Release(); } object->Release(); } CloseHandle( handle->condition ); delete handle; stream_.apiHandle = 0; } for ( int i=0; i<2; i++ ) { if ( stream_.userBuffer[i] ) { free( stream_.userBuffer[i] ); stream_.userBuffer[i] = 0; } } if ( stream_.deviceBuffer ) { free( stream_.deviceBuffer ); stream_.deviceBuffer = 0; } stream_.mode = UNINITIALIZED; stream_.state = STREAM_CLOSED; } void RtApiDs :: startStream() { verifyStream(); if ( stream_.state == STREAM_RUNNING ) { errorText_ = "RtApiDs::startStream(): the stream is already running!"; error( RtAudioError::WARNING ); return; } #if defined( HAVE_GETTIMEOFDAY ) gettimeofday( &stream_.lastTickTimestamp, NULL ); #endif DsHandle *handle = (DsHandle *) stream_.apiHandle; // Increase scheduler frequency on lesser windows (a side-effect of // increasing timer accuracy). On greater windows (Win2K or later), // this is already in effect. timeBeginPeriod( 1 ); buffersRolling = false; duplexPrerollBytes = 0; if ( stream_.mode == DUPLEX ) { // 0.5 seconds of silence in DUPLEX mode while the devices spin up and synchronize. 
duplexPrerollBytes = (int) ( 0.5 * stream_.sampleRate * formatBytes( stream_.deviceFormat[1] ) * stream_.nDeviceChannels[1] ); } HRESULT result = 0; if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) { LPDIRECTSOUNDBUFFER buffer = (LPDIRECTSOUNDBUFFER) handle->buffer[0]; result = buffer->Play( 0, 0, DSBPLAY_LOOPING ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::startStream: error (" << getErrorString( result ) << ") starting output buffer!"; errorText_ = errorStream_.str(); goto unlock; } } if ( stream_.mode == INPUT || stream_.mode == DUPLEX ) { LPDIRECTSOUNDCAPTUREBUFFER buffer = (LPDIRECTSOUNDCAPTUREBUFFER) handle->buffer[1]; result = buffer->Start( DSCBSTART_LOOPING ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::startStream: error (" << getErrorString( result ) << ") starting input buffer!"; errorText_ = errorStream_.str(); goto unlock; } } handle->drainCounter = 0; handle->internalDrain = false; ResetEvent( handle->condition ); stream_.state = STREAM_RUNNING; unlock: if ( FAILED( result ) ) error( RtAudioError::SYSTEM_ERROR ); } void RtApiDs :: stopStream() { verifyStream(); if ( stream_.state == STREAM_STOPPED ) { errorText_ = "RtApiDs::stopStream(): the stream is already stopped!"; error( RtAudioError::WARNING ); return; } HRESULT result = 0; LPVOID audioPtr; DWORD dataLen; DsHandle *handle = (DsHandle *) stream_.apiHandle; if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) { if ( handle->drainCounter == 0 ) { handle->drainCounter = 2; WaitForSingleObject( handle->condition, INFINITE ); // block until signaled } stream_.state = STREAM_STOPPED; MUTEX_LOCK( &stream_.mutex ); // Stop the buffer and clear memory LPDIRECTSOUNDBUFFER buffer = (LPDIRECTSOUNDBUFFER) handle->buffer[0]; result = buffer->Stop(); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::stopStream: error (" << getErrorString( result ) << ") stopping output buffer!"; errorText_ = errorStream_.str(); goto unlock; } // Lock the buffer and clear it so that if we start to 
play again, // we won't have old data playing. result = buffer->Lock( 0, handle->dsBufferSize[0], &audioPtr, &dataLen, NULL, NULL, 0 ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::stopStream: error (" << getErrorString( result ) << ") locking output buffer!"; errorText_ = errorStream_.str(); goto unlock; } // Zero the DS buffer ZeroMemory( audioPtr, dataLen ); // Unlock the DS buffer result = buffer->Unlock( audioPtr, dataLen, NULL, 0 ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::stopStream: error (" << getErrorString( result ) << ") unlocking output buffer!"; errorText_ = errorStream_.str(); goto unlock; } // If we start playing again, we must begin at beginning of buffer. handle->bufferPointer[0] = 0; } if ( stream_.mode == INPUT || stream_.mode == DUPLEX ) { LPDIRECTSOUNDCAPTUREBUFFER buffer = (LPDIRECTSOUNDCAPTUREBUFFER) handle->buffer[1]; audioPtr = NULL; dataLen = 0; stream_.state = STREAM_STOPPED; if ( stream_.mode != DUPLEX ) MUTEX_LOCK( &stream_.mutex ); result = buffer->Stop(); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::stopStream: error (" << getErrorString( result ) << ") stopping input buffer!"; errorText_ = errorStream_.str(); goto unlock; } // Lock the buffer and clear it so that if we start to play again, // we won't have old data playing. result = buffer->Lock( 0, handle->dsBufferSize[1], &audioPtr, &dataLen, NULL, NULL, 0 ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::stopStream: error (" << getErrorString( result ) << ") locking input buffer!"; errorText_ = errorStream_.str(); goto unlock; } // Zero the DS buffer ZeroMemory( audioPtr, dataLen ); // Unlock the DS buffer result = buffer->Unlock( audioPtr, dataLen, NULL, 0 ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::stopStream: error (" << getErrorString( result ) << ") unlocking input buffer!"; errorText_ = errorStream_.str(); goto unlock; } // If we start recording again, we must begin at beginning of buffer. 
handle->bufferPointer[1] = 0; } unlock: timeEndPeriod( 1 ); // revert to normal scheduler frequency on lesser windows. MUTEX_UNLOCK( &stream_.mutex ); if ( FAILED( result ) ) error( RtAudioError::SYSTEM_ERROR ); } void RtApiDs :: abortStream() { verifyStream(); if ( stream_.state == STREAM_STOPPED ) { errorText_ = "RtApiDs::abortStream(): the stream is already stopped!"; error( RtAudioError::WARNING ); return; } DsHandle *handle = (DsHandle *) stream_.apiHandle; handle->drainCounter = 2; stopStream(); } void RtApiDs :: callbackEvent() { if ( stream_.state == STREAM_STOPPED || stream_.state == STREAM_STOPPING ) { Sleep( 50 ); // sleep 50 milliseconds return; } if ( stream_.state == STREAM_CLOSED ) { errorText_ = "RtApiDs::callbackEvent(): the stream is closed ... this shouldn't happen!"; error( RtAudioError::WARNING ); return; } CallbackInfo *info = (CallbackInfo *) &stream_.callbackInfo; DsHandle *handle = (DsHandle *) stream_.apiHandle; // Check if we were draining the stream and signal is finished. if ( handle->drainCounter > stream_.nBuffers + 2 ) { stream_.state = STREAM_STOPPING; if ( handle->internalDrain == false ) SetEvent( handle->condition ); else stopStream(); return; } // Invoke user callback to get fresh output data UNLESS we are // draining stream. 
if ( handle->drainCounter == 0 ) { RtAudioCallback callback = (RtAudioCallback) info->callback; double streamTime = getStreamTime(); RtAudioStreamStatus status = 0; if ( stream_.mode != INPUT && handle->xrun[0] == true ) { status |= RTAUDIO_OUTPUT_UNDERFLOW; handle->xrun[0] = false; } if ( stream_.mode != OUTPUT && handle->xrun[1] == true ) { status |= RTAUDIO_INPUT_OVERFLOW; handle->xrun[1] = false; } int cbReturnValue = callback( stream_.userBuffer[0], stream_.userBuffer[1], stream_.bufferSize, streamTime, status, info->userData ); if ( cbReturnValue == 2 ) { stream_.state = STREAM_STOPPING; handle->drainCounter = 2; abortStream(); return; } else if ( cbReturnValue == 1 ) { handle->drainCounter = 1; handle->internalDrain = true; } } HRESULT result; DWORD currentWritePointer, safeWritePointer; DWORD currentReadPointer, safeReadPointer; UINT nextWritePointer; LPVOID buffer1 = NULL; LPVOID buffer2 = NULL; DWORD bufferSize1 = 0; DWORD bufferSize2 = 0; char *buffer; long bufferBytes; MUTEX_LOCK( &stream_.mutex ); if ( stream_.state == STREAM_STOPPED ) { MUTEX_UNLOCK( &stream_.mutex ); return; } if ( buffersRolling == false ) { if ( stream_.mode == DUPLEX ) { //assert( handle->dsBufferSize[0] == handle->dsBufferSize[1] ); // It takes a while for the devices to get rolling. As a result, // there's no guarantee that the capture and write device pointers // will move in lockstep. Wait here for both devices to start // rolling, and then set our buffer pointers accordingly. // e.g. Crystal Drivers: the capture buffer starts up 5700 to 9600 // bytes later than the write buffer. // Stub: a serious risk of having a pre-emptive scheduling round // take place between the two GetCurrentPosition calls... but I'm // really not sure how to solve the problem. Temporarily boost to // Realtime priority, maybe; but I'm not sure what priority the // DirectSound service threads run at. We *should* be roughly // within a ms or so of correct. 
LPDIRECTSOUNDBUFFER dsWriteBuffer = (LPDIRECTSOUNDBUFFER) handle->buffer[0]; LPDIRECTSOUNDCAPTUREBUFFER dsCaptureBuffer = (LPDIRECTSOUNDCAPTUREBUFFER) handle->buffer[1]; DWORD startSafeWritePointer, startSafeReadPointer; result = dsWriteBuffer->GetCurrentPosition( NULL, &startSafeWritePointer ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") getting current write position!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } result = dsCaptureBuffer->GetCurrentPosition( NULL, &startSafeReadPointer ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") getting current read position!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } while ( true ) { result = dsWriteBuffer->GetCurrentPosition( NULL, &safeWritePointer ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") getting current write position!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } result = dsCaptureBuffer->GetCurrentPosition( NULL, &safeReadPointer ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") getting current read position!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } if ( safeWritePointer != startSafeWritePointer && safeReadPointer != startSafeReadPointer ) break; Sleep( 1 ); } //assert( handle->dsBufferSize[0] == handle->dsBufferSize[1] ); handle->bufferPointer[0] = safeWritePointer + handle->dsPointerLeadTime[0]; if ( handle->bufferPointer[0] >= handle->dsBufferSize[0] ) handle->bufferPointer[0] -= handle->dsBufferSize[0]; handle->bufferPointer[1] = safeReadPointer; } else if ( stream_.mode == OUTPUT 
) { // Set the proper nextWritePosition after initial startup. LPDIRECTSOUNDBUFFER dsWriteBuffer = (LPDIRECTSOUNDBUFFER) handle->buffer[0]; result = dsWriteBuffer->GetCurrentPosition( &currentWritePointer, &safeWritePointer ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") getting current write position!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } handle->bufferPointer[0] = safeWritePointer + handle->dsPointerLeadTime[0]; if ( handle->bufferPointer[0] >= handle->dsBufferSize[0] ) handle->bufferPointer[0] -= handle->dsBufferSize[0]; } buffersRolling = true; } if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) { LPDIRECTSOUNDBUFFER dsBuffer = (LPDIRECTSOUNDBUFFER) handle->buffer[0]; if ( handle->drainCounter > 1 ) { // write zeros to the output stream bufferBytes = stream_.bufferSize * stream_.nUserChannels[0]; bufferBytes *= formatBytes( stream_.userFormat ); memset( stream_.userBuffer[0], 0, bufferBytes ); } // Setup parameters and do buffer conversion if necessary. if ( stream_.doConvertBuffer[0] ) { buffer = stream_.deviceBuffer; convertBuffer( buffer, stream_.userBuffer[0], stream_.convertInfo[0] ); bufferBytes = stream_.bufferSize * stream_.nDeviceChannels[0]; bufferBytes *= formatBytes( stream_.deviceFormat[0] ); } else { buffer = stream_.userBuffer[0]; bufferBytes = stream_.bufferSize * stream_.nUserChannels[0]; bufferBytes *= formatBytes( stream_.userFormat ); } // No byte swapping necessary in DirectSound implementation. // Ahhh ... windoze. 16-bit data is signed but 8-bit data is // unsigned. So, we need to convert our signed 8-bit data here to // unsigned. if ( stream_.deviceFormat[0] == RTAUDIO_SINT8 ) for ( int i=0; i<bufferBytes; i++ ) buffer[i] = (unsigned char) ( buffer[i] + 128 ); DWORD dsBufferSize = handle->dsBufferSize[0]; nextWritePointer = handle->bufferPointer[0]; DWORD endWrite, leadPointer; while ( true ) { // Find out where the read and "safe write" pointers are. 
result = dsBuffer->GetCurrentPosition( &currentWritePointer, &safeWritePointer ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") getting current write position!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } // We will copy our output buffer into the region between // safeWritePointer and leadPointer. If leadPointer is not // beyond the next endWrite position, wait until it is. leadPointer = safeWritePointer + handle->dsPointerLeadTime[0]; //std::cout << "safeWritePointer = " << safeWritePointer << ", leadPointer = " << leadPointer << ", nextWritePointer = " << nextWritePointer << std::endl; if ( leadPointer > dsBufferSize ) leadPointer -= dsBufferSize; if ( leadPointer < nextWritePointer ) leadPointer += dsBufferSize; // unwrap offset endWrite = nextWritePointer + bufferBytes; // Check whether the entire write region is behind the play pointer. if ( leadPointer >= endWrite ) break; // If we are here, then we must wait until the leadPointer advances // beyond the end of our next write region. We use the // Sleep() function to suspend operation until that happens. double millis = ( endWrite - leadPointer ) * 1000.0; millis /= ( formatBytes( stream_.deviceFormat[0]) * stream_.nDeviceChannels[0] * stream_.sampleRate); if ( millis < 1.0 ) millis = 1.0; Sleep( (DWORD) millis ); } if ( dsPointerBetween( nextWritePointer, safeWritePointer, currentWritePointer, dsBufferSize ) || dsPointerBetween( endWrite, safeWritePointer, currentWritePointer, dsBufferSize ) ) { // We've strayed into the forbidden zone ... resync the read pointer. 
handle->xrun[0] = true; nextWritePointer = safeWritePointer + handle->dsPointerLeadTime[0] - bufferBytes; if ( nextWritePointer >= dsBufferSize ) nextWritePointer -= dsBufferSize; handle->bufferPointer[0] = nextWritePointer; endWrite = nextWritePointer + bufferBytes; } // Lock free space in the buffer result = dsBuffer->Lock( nextWritePointer, bufferBytes, &buffer1, &bufferSize1, &buffer2, &bufferSize2, 0 ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") locking buffer during playback!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } // Copy our buffer into the DS buffer CopyMemory( buffer1, buffer, bufferSize1 ); if ( buffer2 != NULL ) CopyMemory( buffer2, buffer+bufferSize1, bufferSize2 ); // Update our buffer offset and unlock sound buffer dsBuffer->Unlock( buffer1, bufferSize1, buffer2, bufferSize2 ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") unlocking buffer during playback!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } nextWritePointer = ( nextWritePointer + bufferSize1 + bufferSize2 ) % dsBufferSize; handle->bufferPointer[0] = nextWritePointer; } // Don't bother draining input if ( handle->drainCounter ) { handle->drainCounter++; goto unlock; } if ( stream_.mode == INPUT || stream_.mode == DUPLEX ) { // Setup parameters. 
if ( stream_.doConvertBuffer[1] ) { buffer = stream_.deviceBuffer; bufferBytes = stream_.bufferSize * stream_.nDeviceChannels[1]; bufferBytes *= formatBytes( stream_.deviceFormat[1] ); } else { buffer = stream_.userBuffer[1]; bufferBytes = stream_.bufferSize * stream_.nUserChannels[1]; bufferBytes *= formatBytes( stream_.userFormat ); } LPDIRECTSOUNDCAPTUREBUFFER dsBuffer = (LPDIRECTSOUNDCAPTUREBUFFER) handle->buffer[1]; long nextReadPointer = handle->bufferPointer[1]; DWORD dsBufferSize = handle->dsBufferSize[1]; // Find out where the write and "safe read" pointers are. result = dsBuffer->GetCurrentPosition( &currentReadPointer, &safeReadPointer ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") getting current read position!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } if ( safeReadPointer < (DWORD)nextReadPointer ) safeReadPointer += dsBufferSize; // unwrap offset DWORD endRead = nextReadPointer + bufferBytes; // Handling depends on whether we are INPUT or DUPLEX. // If we're in INPUT mode then waiting is a good thing. If we're in DUPLEX mode, // then a wait here will drag the write pointers into the forbidden zone. // // In DUPLEX mode, rather than wait, we will back off the read pointer until // it's in a safe position. This causes dropouts, but it seems to be the only // practical way to sync up the read and write pointers reliably, given the // very complex relationship between phase and increment of the read and write // pointers. // // In order to minimize audible dropouts in DUPLEX mode, we will // provide a pre-roll period of 0.5 seconds in which we return // zeros from the read buffer while the pointers sync up. if ( stream_.mode == DUPLEX ) { if ( safeReadPointer < endRead ) { if ( duplexPrerollBytes <= 0 ) { // Pre-roll time over. Be more aggressive. 
int adjustment = endRead-safeReadPointer; handle->xrun[1] = true; // Two cases: // - large adjustments: we've probably run out of CPU cycles, so just resync exactly, // and perform fine adjustments later. // - small adjustments: back off by twice as much. if ( adjustment >= 2*bufferBytes ) nextReadPointer = safeReadPointer-2*bufferBytes; else nextReadPointer = safeReadPointer-bufferBytes-adjustment; if ( nextReadPointer < 0 ) nextReadPointer += dsBufferSize; } else { // In pre-roll time. Just do it. nextReadPointer = safeReadPointer - bufferBytes; while ( nextReadPointer < 0 ) nextReadPointer += dsBufferSize; } endRead = nextReadPointer + bufferBytes; } } else { // mode == INPUT while ( safeReadPointer < endRead && stream_.callbackInfo.isRunning ) { // See comments for playback. double millis = (endRead - safeReadPointer) * 1000.0; millis /= ( formatBytes(stream_.deviceFormat[1]) * stream_.nDeviceChannels[1] * stream_.sampleRate); if ( millis < 1.0 ) millis = 1.0; Sleep( (DWORD) millis ); // Wake up and find out where we are now. 
result = dsBuffer->GetCurrentPosition( &currentReadPointer, &safeReadPointer ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") getting current read position!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } if ( safeReadPointer < (DWORD)nextReadPointer ) safeReadPointer += dsBufferSize; // unwrap offset } } // Lock free space in the buffer result = dsBuffer->Lock( nextReadPointer, bufferBytes, &buffer1, &bufferSize1, &buffer2, &bufferSize2, 0 ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") locking capture buffer!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } if ( duplexPrerollBytes <= 0 ) { // Copy the DS capture buffer into our user/device buffer CopyMemory( buffer, buffer1, bufferSize1 ); if ( buffer2 != NULL ) CopyMemory( buffer+bufferSize1, buffer2, bufferSize2 ); } else { memset( buffer, 0, bufferSize1 ); if ( buffer2 != NULL ) memset( buffer + bufferSize1, 0, bufferSize2 ); duplexPrerollBytes -= bufferSize1 + bufferSize2; } // Update our buffer offset and unlock sound buffer nextReadPointer = ( nextReadPointer + bufferSize1 + bufferSize2 ) % dsBufferSize; dsBuffer->Unlock( buffer1, bufferSize1, buffer2, bufferSize2 ); if ( FAILED( result ) ) { errorStream_ << "RtApiDs::callbackEvent: error (" << getErrorString( result ) << ") unlocking capture buffer!"; errorText_ = errorStream_.str(); MUTEX_UNLOCK( &stream_.mutex ); error( RtAudioError::SYSTEM_ERROR ); return; } handle->bufferPointer[1] = nextReadPointer; // No byte swapping necessary in DirectSound implementation. // If necessary, convert 8-bit data from unsigned to signed. 
if ( stream_.deviceFormat[1] == RTAUDIO_SINT8 ) for ( int j=0; j<bufferBytes; j++ ) buffer[j] = (signed char) ( buffer[j] - 128 ); // Do buffer conversion if necessary. if ( stream_.doConvertBuffer[1] ) convertBuffer( stream_.userBuffer[1], stream_.deviceBuffer, stream_.convertInfo[1] ); } unlock: MUTEX_UNLOCK( &stream_.mutex ); RtApi::tickStreamTime(); } // Definitions for utility functions and callbacks // specific to the DirectSound implementation. static unsigned __stdcall callbackHandler( void *ptr ) { CallbackInfo *info = (CallbackInfo *) ptr; RtApiDs *object = (RtApiDs *) info->object; bool* isRunning = &info->isRunning; while ( *isRunning == true ) { object->callbackEvent(); } _endthreadex( 0 ); return 0; } static BOOL CALLBACK deviceQueryCallback( LPGUID lpguid, LPCTSTR description, LPCTSTR /*module*/, LPVOID lpContext ) { struct DsProbeData& probeInfo = *(struct DsProbeData*) lpContext; std::vector<DsDevice>& dsDevices = *probeInfo.dsDevices; HRESULT hr; bool validDevice = false; if ( probeInfo.isInput == true ) { DSCCAPS caps; LPDIRECTSOUNDCAPTURE object; hr = DirectSoundCaptureCreate( lpguid, &object, NULL ); if ( hr != DS_OK ) return TRUE; caps.dwSize = sizeof(caps); hr = object->GetCaps( &caps ); if ( hr == DS_OK ) { if ( caps.dwChannels > 0 && caps.dwFormats > 0 ) validDevice = true; } object->Release(); } else { DSCAPS caps; LPDIRECTSOUND object; hr = DirectSoundCreate( lpguid, &object, NULL ); if ( hr != DS_OK ) return TRUE; caps.dwSize = sizeof(caps); hr = object->GetCaps( &caps ); if ( hr == DS_OK ) { if ( caps.dwFlags & DSCAPS_PRIMARYMONO || caps.dwFlags & DSCAPS_PRIMARYSTEREO ) validDevice = true; } object->Release(); } // If good device, then save its name and guid. std::string name = convertCharPointerToStdString( description ); //if ( name == "Primary Sound Driver" || name == "Primary Sound Capture Driver" ) if ( lpguid == NULL ) name = "Default Device"; if ( validDevice ) { for ( unsigned int i=0; i<dsDevices.size(); i++ ) { if ( dsDevices[i].name == name ) { dsDevices[i].found = true; if ( probeInfo.isInput ) { dsDevices[i].id[1] = lpguid; dsDevices[i].validId[1] = true; } else { dsDevices[i].id[0] = lpguid; dsDevices[i].validId[0] = true; } return TRUE; } } DsDevice device; device.name = name; device.found = true; if ( probeInfo.isInput ) { device.id[1] = lpguid; device.validId[1] = true; } else { device.id[0] = lpguid; device.validId[0] = true; } dsDevices.push_back( device ); } return TRUE; } static const char* getErrorString( int code ) { switch ( code ) { case DSERR_ALLOCATED: return "Already allocated"; case DSERR_CONTROLUNAVAIL: return "Control unavailable"; case DSERR_INVALIDPARAM: return "Invalid parameter"; case DSERR_INVALIDCALL: return "Invalid call"; case DSERR_GENERIC: return "Generic error"; case DSERR_PRIOLEVELNEEDED: return "Priority level needed"; case DSERR_OUTOFMEMORY: return "Out of memory"; case DSERR_BADFORMAT: return "The sample rate or the channel format is not supported"; case DSERR_UNSUPPORTED: return "Not supported"; case DSERR_NODRIVER: return "No driver"; case DSERR_ALREADYINITIALIZED: return "Already initialized"; case DSERR_NOAGGREGATION: return "No aggregation"; case DSERR_BUFFERLOST: return "Buffer lost"; case DSERR_OTHERAPPHASPRIO: return "Another application already has priority"; case DSERR_UNINITIALIZED: return "Uninitialized"; default: return "DirectSound unknown error"; } } //******************** End of __WINDOWS_DS__ *********************// #endif #if defined(__LINUX_ALSA__) #include <alsa/asoundlib.h> #include <unistd.h> // A structure to hold various information related to the ALSA API // implementation. struct AlsaHandle { snd_pcm_t *handles[2]; bool synchronized; bool xrun[2]; pthread_cond_t runnable_cv; bool runnable; AlsaHandle() :synchronized(false), runnable(false) { xrun[0] = false; xrun[1] = false; } }; static void *alsaCallbackHandler( void * ptr ); RtApiAlsa :: RtApiAlsa() { // Nothing to do here. 
} RtApiAlsa :: ~RtApiAlsa() { if ( stream_.state != STREAM_CLOSED ) closeStream(); } unsigned int RtApiAlsa :: getDeviceCount( void ) { unsigned nDevices = 0; int result, subdevice, card; char name[64]; snd_ctl_t *handle = 0; // Count cards and devices card = -1; snd_card_next( &card ); while ( card >= 0 ) { sprintf( name, "hw:%d", card ); result = snd_ctl_open( &handle, name, 0 ); if ( result < 0 ) { handle = 0; errorStream_ << "RtApiAlsa::getDeviceCount: control open, card = " << card << ", " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); goto nextcard; } subdevice = -1; while( 1 ) { result = snd_ctl_pcm_next_device( handle, &subdevice ); if ( result < 0 ) { errorStream_ << "RtApiAlsa::getDeviceCount: control next device, card = " << card << ", " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); break; } if ( subdevice < 0 ) break; nDevices++; } nextcard: if ( handle ) snd_ctl_close( handle ); snd_card_next( &card ); } result = snd_ctl_open( &handle, "default", 0 ); if (result == 0) { nDevices++; snd_ctl_close( handle ); } return nDevices; } RtAudio::DeviceInfo RtApiAlsa :: getDeviceInfo( unsigned int device ) { RtAudio::DeviceInfo info; info.probed = false; unsigned nDevices = 0; int result, subdevice, card; char name[64]; snd_ctl_t *chandle = 0; // Count cards and devices card = -1; subdevice = -1; snd_card_next( &card ); while ( card >= 0 ) { sprintf( name, "hw:%d", card ); result = snd_ctl_open( &chandle, name, SND_CTL_NONBLOCK ); if ( result < 0 ) { chandle = 0; errorStream_ << "RtApiAlsa::getDeviceInfo: control open, card = " << card << ", " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); goto nextcard; } subdevice = -1; while( 1 ) { result = snd_ctl_pcm_next_device( chandle, &subdevice ); if ( result < 0 ) { errorStream_ << "RtApiAlsa::getDeviceInfo: control next device, card = " << card << ", " << 
snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); break; } if ( subdevice < 0 ) break; if ( nDevices == device ) { sprintf( name, "hw:%d,%d", card, subdevice ); goto foundDevice; } nDevices++; } nextcard: if ( chandle ) snd_ctl_close( chandle ); snd_card_next( &card ); } result = snd_ctl_open( &chandle, "default", SND_CTL_NONBLOCK ); if ( result == 0 ) { if ( nDevices == device ) { strcpy( name, "default" ); goto foundDevice; } nDevices++; } if ( nDevices == 0 ) { errorText_ = "RtApiAlsa::getDeviceInfo: no devices found!"; error( RtAudioError::INVALID_USE ); return info; } if ( device >= nDevices ) { errorText_ = "RtApiAlsa::getDeviceInfo: device ID is invalid!"; error( RtAudioError::INVALID_USE ); return info; } foundDevice: // If a stream is already open, we cannot probe the stream devices. // Thus, use the saved results. if ( stream_.state != STREAM_CLOSED && ( stream_.device[0] == device || stream_.device[1] == device ) ) { snd_ctl_close( chandle ); if ( device >= devices_.size() ) { errorText_ = "RtApiAlsa::getDeviceInfo: device ID was not present before stream was opened."; error( RtAudioError::WARNING ); return info; } return devices_[ device ]; } int openMode = SND_PCM_ASYNC; snd_pcm_stream_t stream; snd_pcm_info_t *pcminfo; snd_pcm_info_alloca( &pcminfo ); snd_pcm_t *phandle; snd_pcm_hw_params_t *params; snd_pcm_hw_params_alloca( &params ); // First try for playback unless default device (which has subdev -1) stream = SND_PCM_STREAM_PLAYBACK; snd_pcm_info_set_stream( pcminfo, stream ); if ( subdevice != -1 ) { snd_pcm_info_set_device( pcminfo, subdevice ); snd_pcm_info_set_subdevice( pcminfo, 0 ); result = snd_ctl_pcm_info( chandle, pcminfo ); if ( result < 0 ) { // Device probably doesn't support playback. 
goto captureProbe; } } result = snd_pcm_open( &phandle, name, stream, openMode | SND_PCM_NONBLOCK ); if ( result < 0 ) { errorStream_ << "RtApiAlsa::getDeviceInfo: snd_pcm_open error for device (" << name << "), " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); goto captureProbe; } // The device is open ... fill the parameter structure. result = snd_pcm_hw_params_any( phandle, params ); if ( result < 0 ) { snd_pcm_close( phandle ); errorStream_ << "RtApiAlsa::getDeviceInfo: snd_pcm_hw_params error for device (" << name << "), " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); goto captureProbe; } // Get output channel information. unsigned int value; result = snd_pcm_hw_params_get_channels_max( params, &value ); if ( result < 0 ) { snd_pcm_close( phandle ); errorStream_ << "RtApiAlsa::getDeviceInfo: error getting device (" << name << ") output channels, " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); goto captureProbe; } info.outputChannels = value; snd_pcm_close( phandle ); captureProbe: stream = SND_PCM_STREAM_CAPTURE; snd_pcm_info_set_stream( pcminfo, stream ); // Now try for capture unless default device (with subdev = -1) if ( subdevice != -1 ) { result = snd_ctl_pcm_info( chandle, pcminfo ); snd_ctl_close( chandle ); if ( result < 0 ) { // Device probably doesn't support capture. if ( info.outputChannels == 0 ) return info; goto probeParameters; } } else snd_ctl_close( chandle ); result = snd_pcm_open( &phandle, name, stream, openMode | SND_PCM_NONBLOCK); if ( result < 0 ) { errorStream_ << "RtApiAlsa::getDeviceInfo: snd_pcm_open error for device (" << name << "), " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); if ( info.outputChannels == 0 ) return info; goto probeParameters; } // The device is open ... fill the parameter structure. 
result = snd_pcm_hw_params_any( phandle, params ); if ( result < 0 ) { snd_pcm_close( phandle ); errorStream_ << "RtApiAlsa::getDeviceInfo: snd_pcm_hw_params error for device (" << name << "), " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); if ( info.outputChannels == 0 ) return info; goto probeParameters; } result = snd_pcm_hw_params_get_channels_max( params, &value ); if ( result < 0 ) { snd_pcm_close( phandle ); errorStream_ << "RtApiAlsa::getDeviceInfo: error getting device (" << name << ") input channels, " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); if ( info.outputChannels == 0 ) return info; goto probeParameters; } info.inputChannels = value; snd_pcm_close( phandle ); // If device opens for both playback and capture, we determine the channels. if ( info.outputChannels > 0 && info.inputChannels > 0 ) info.duplexChannels = (info.outputChannels > info.inputChannels) ? info.inputChannels : info.outputChannels; // ALSA doesn't provide default devices so we'll use the first available one. if ( device == 0 && info.outputChannels > 0 ) info.isDefaultOutput = true; if ( device == 0 && info.inputChannels > 0 ) info.isDefaultInput = true; probeParameters: // At this point, we just need to figure out the supported data // formats and sample rates. We'll proceed by opening the device in // the direction with the maximum number of channels, or playback if // they are equal. This might limit our sample rate options, but so // be it. 
if ( info.outputChannels >= info.inputChannels ) stream = SND_PCM_STREAM_PLAYBACK; else stream = SND_PCM_STREAM_CAPTURE; snd_pcm_info_set_stream( pcminfo, stream ); result = snd_pcm_open( &phandle, name, stream, openMode | SND_PCM_NONBLOCK); if ( result < 0 ) { errorStream_ << "RtApiAlsa::getDeviceInfo: snd_pcm_open error for device (" << name << "), " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } // The device is open ... fill the parameter structure. result = snd_pcm_hw_params_any( phandle, params ); if ( result < 0 ) { snd_pcm_close( phandle ); errorStream_ << "RtApiAlsa::getDeviceInfo: snd_pcm_hw_params error for device (" << name << "), " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } // Test our discrete set of sample rate values. info.sampleRates.clear(); for ( unsigned int i=0; i<MAX_SAMPLE_RATES; i++ ) { if ( snd_pcm_hw_params_test_rate( phandle, params, SAMPLE_RATES[i], 0 ) == 0 ) { info.sampleRates.push_back( SAMPLE_RATES[i] ); if ( !info.preferredSampleRate || ( SAMPLE_RATES[i] <= 48000 && SAMPLE_RATES[i] > info.preferredSampleRate ) ) info.preferredSampleRate = SAMPLE_RATES[i]; } } if ( info.sampleRates.size() == 0 ) { snd_pcm_close( phandle ); errorStream_ << "RtApiAlsa::getDeviceInfo: no supported sample rates found for device (" << name << ")."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } // Probe the supported data formats ... 
we don't care about endian-ness just yet snd_pcm_format_t format; info.nativeFormats = 0; format = SND_PCM_FORMAT_S8; if ( snd_pcm_hw_params_test_format( phandle, params, format ) == 0 ) info.nativeFormats |= RTAUDIO_SINT8; format = SND_PCM_FORMAT_S16; if ( snd_pcm_hw_params_test_format( phandle, params, format ) == 0 ) info.nativeFormats |= RTAUDIO_SINT16; format = SND_PCM_FORMAT_S24; if ( snd_pcm_hw_params_test_format( phandle, params, format ) == 0 ) info.nativeFormats |= RTAUDIO_SINT24; format = SND_PCM_FORMAT_S32; if ( snd_pcm_hw_params_test_format( phandle, params, format ) == 0 ) info.nativeFormats |= RTAUDIO_SINT32; format = SND_PCM_FORMAT_FLOAT; if ( snd_pcm_hw_params_test_format( phandle, params, format ) == 0 ) info.nativeFormats |= RTAUDIO_FLOAT32; format = SND_PCM_FORMAT_FLOAT64; if ( snd_pcm_hw_params_test_format( phandle, params, format ) == 0 ) info.nativeFormats |= RTAUDIO_FLOAT64; // Check that we have at least one supported format if ( info.nativeFormats == 0 ) { snd_pcm_close( phandle ); errorStream_ << "RtApiAlsa::getDeviceInfo: pcm device (" << name << ") data format not supported by RtAudio."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } // Get the device name char *cardname; result = snd_card_get_name( card, &cardname ); if ( result >= 0 ) { sprintf( name, "hw:%s,%d", cardname, subdevice ); free( cardname ); } info.name = name; // That's all ... 
close the device and return snd_pcm_close( phandle ); info.probed = true; return info; } void RtApiAlsa :: saveDeviceInfo( void ) { devices_.clear(); unsigned int nDevices = getDeviceCount(); devices_.resize( nDevices ); for ( unsigned int i=0; iflags & RTAUDIO_ALSA_USE_DEFAULT ) snprintf(name, sizeof(name), "%s", "default"); else { // Count cards and devices card = -1; snd_card_next( &card ); while ( card >= 0 ) { sprintf( name, "hw:%d", card ); result = snd_ctl_open( &chandle, name, SND_CTL_NONBLOCK ); if ( result < 0 ) { errorStream_ << "RtApiAlsa::probeDeviceOpen: control open, card = " << card << ", " << snd_strerror( result ) << "."; errorText_ = errorStream_.str(); return FAILURE; } subdevice = -1; while( 1 ) { result = snd_ctl_pcm_next_device( chandle, &subdevice ); if ( result < 0 ) break; if ( subdevice < 0 ) break; if ( nDevices == device ) { sprintf( name, "hw:%d,%d", card, subdevice ); snd_ctl_close( chandle ); goto foundDevice; } nDevices++; } snd_ctl_close( chandle ); snd_card_next( &card ); } result = snd_ctl_open( &chandle, "default", SND_CTL_NONBLOCK ); if ( result == 0 ) { if ( nDevices == device ) { strcpy( name, "default" ); snd_ctl_close( chandle ); goto foundDevice; } nDevices++; } snd_ctl_close( chandle ); if ( nDevices == 0 ) { // This should not happen because a check is made before this function is called. errorText_ = "RtApiAlsa::probeDeviceOpen: no devices found!"; return FAILURE; } if ( device >= nDevices ) { // This should not happen because a check is made before this function is called. errorText_ = "RtApiAlsa::probeDeviceOpen: device ID is invalid!"; return FAILURE; } } foundDevice: // The getDeviceInfo() function will not work for a device that is // already open. Thus, we'll probe the system before opening a // stream and save the results for use by getDeviceInfo(). 
  if ( mode == OUTPUT || ( mode == INPUT && stream_.mode != OUTPUT ) ) // only do once
    this->saveDeviceInfo();

  snd_pcm_stream_t stream;
  if ( mode == OUTPUT )
    stream = SND_PCM_STREAM_PLAYBACK;
  else
    stream = SND_PCM_STREAM_CAPTURE;

  snd_pcm_t *phandle;
  int openMode = SND_PCM_ASYNC;
  result = snd_pcm_open( &phandle, name, stream, openMode );
  if ( result < 0 ) {
    if ( mode == OUTPUT )
      errorStream_ << "RtApiAlsa::probeDeviceOpen: pcm device (" << name << ") won't open for output.";
    else
      errorStream_ << "RtApiAlsa::probeDeviceOpen: pcm device (" << name << ") won't open for input.";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // Fill the parameter structure.
  snd_pcm_hw_params_t *hw_params;
  snd_pcm_hw_params_alloca( &hw_params );
  result = snd_pcm_hw_params_any( phandle, hw_params );
  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error getting pcm device (" << name << ") parameters, " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

#if defined(__RTAUDIO_DEBUG__)
  fprintf( stderr, "\nRtApiAlsa: dump hardware params just after device open:\n\n" );
  snd_pcm_hw_params_dump( hw_params, out );
#endif

  // Set access ... check user preference.
  if ( options && options->flags & RTAUDIO_NONINTERLEAVED ) {
    stream_.userInterleaved = false;
    result = snd_pcm_hw_params_set_access( phandle, hw_params, SND_PCM_ACCESS_RW_NONINTERLEAVED );
    if ( result < 0 ) {
      result = snd_pcm_hw_params_set_access( phandle, hw_params, SND_PCM_ACCESS_RW_INTERLEAVED );
      stream_.deviceInterleaved[mode] = true;
    }
    else
      stream_.deviceInterleaved[mode] = false;
  }
  else {
    stream_.userInterleaved = true;
    result = snd_pcm_hw_params_set_access( phandle, hw_params, SND_PCM_ACCESS_RW_INTERLEAVED );
    if ( result < 0 ) {
      result = snd_pcm_hw_params_set_access( phandle, hw_params, SND_PCM_ACCESS_RW_NONINTERLEAVED );
      stream_.deviceInterleaved[mode] = false;
    }
    else
      stream_.deviceInterleaved[mode] = true;
  }

  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error setting pcm device (" << name << ") access, " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // Determine how to set the device format.
  stream_.userFormat = format;
  snd_pcm_format_t deviceFormat = SND_PCM_FORMAT_UNKNOWN;

  if ( format == RTAUDIO_SINT8 )
    deviceFormat = SND_PCM_FORMAT_S8;
  else if ( format == RTAUDIO_SINT16 )
    deviceFormat = SND_PCM_FORMAT_S16;
  else if ( format == RTAUDIO_SINT24 )
    deviceFormat = SND_PCM_FORMAT_S24;
  else if ( format == RTAUDIO_SINT32 )
    deviceFormat = SND_PCM_FORMAT_S32;
  else if ( format == RTAUDIO_FLOAT32 )
    deviceFormat = SND_PCM_FORMAT_FLOAT;
  else if ( format == RTAUDIO_FLOAT64 )
    deviceFormat = SND_PCM_FORMAT_FLOAT64;

  if ( snd_pcm_hw_params_test_format(phandle, hw_params, deviceFormat) == 0) {
    stream_.deviceFormat[mode] = format;
    goto setFormat;
  }

  // The user requested format is not natively supported by the device.
  deviceFormat = SND_PCM_FORMAT_FLOAT64;
  if ( snd_pcm_hw_params_test_format( phandle, hw_params, deviceFormat ) == 0 ) {
    stream_.deviceFormat[mode] = RTAUDIO_FLOAT64;
    goto setFormat;
  }

  deviceFormat = SND_PCM_FORMAT_FLOAT;
  if ( snd_pcm_hw_params_test_format( phandle, hw_params, deviceFormat ) == 0 ) {
    stream_.deviceFormat[mode] = RTAUDIO_FLOAT32;
    goto setFormat;
  }

  deviceFormat = SND_PCM_FORMAT_S32;
  if ( snd_pcm_hw_params_test_format( phandle, hw_params, deviceFormat ) == 0 ) {
    stream_.deviceFormat[mode] = RTAUDIO_SINT32;
    goto setFormat;
  }

  deviceFormat = SND_PCM_FORMAT_S24;
  if ( snd_pcm_hw_params_test_format( phandle, hw_params, deviceFormat ) == 0 ) {
    stream_.deviceFormat[mode] = RTAUDIO_SINT24;
    goto setFormat;
  }

  deviceFormat = SND_PCM_FORMAT_S16;
  if ( snd_pcm_hw_params_test_format( phandle, hw_params, deviceFormat ) == 0 ) {
    stream_.deviceFormat[mode] = RTAUDIO_SINT16;
    goto setFormat;
  }

  deviceFormat = SND_PCM_FORMAT_S8;
  if ( snd_pcm_hw_params_test_format( phandle, hw_params, deviceFormat ) == 0 ) {
    stream_.deviceFormat[mode] = RTAUDIO_SINT8;
    goto setFormat;
  }

  // If we get here, no supported format was found.
  snd_pcm_close( phandle );
  errorStream_ << "RtApiAlsa::probeDeviceOpen: pcm device " << device << " data format not supported by RtAudio.";
  errorText_ = errorStream_.str();
  return FAILURE;

 setFormat:
  result = snd_pcm_hw_params_set_format( phandle, hw_params, deviceFormat );
  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error setting pcm device (" << name << ") data format, " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // Determine whether byte-swapping is necessary.
  stream_.doByteSwap[mode] = false;
  if ( deviceFormat != SND_PCM_FORMAT_S8 ) {
    result = snd_pcm_format_cpu_endian( deviceFormat );
    if ( result == 0 )
      stream_.doByteSwap[mode] = true;
    else if (result < 0) {
      snd_pcm_close( phandle );
      errorStream_ << "RtApiAlsa::probeDeviceOpen: error getting pcm device (" << name << ") endian-ness, " << snd_strerror( result ) << ".";
      errorText_ = errorStream_.str();
      return FAILURE;
    }
  }

  // Set the sample rate.
  result = snd_pcm_hw_params_set_rate_near( phandle, hw_params, (unsigned int*) &sampleRate, 0 );
  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error setting sample rate on device (" << name << "), " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // Determine the number of channels for this device. We support a possible
  // minimum device channel number > than the value requested by the user.
  stream_.nUserChannels[mode] = channels;
  unsigned int value;
  result = snd_pcm_hw_params_get_channels_max( hw_params, &value );
  unsigned int deviceChannels = value;
  if ( result < 0 || deviceChannels < channels + firstChannel ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: requested channel parameters not supported by device (" << name << "), " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  result = snd_pcm_hw_params_get_channels_min( hw_params, &value );
  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error getting minimum channels for device (" << name << "), " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }
  deviceChannels = value;
  if ( deviceChannels < channels + firstChannel ) deviceChannels = channels + firstChannel;
  stream_.nDeviceChannels[mode] = deviceChannels;

  // Set the device channels.
  result = snd_pcm_hw_params_set_channels( phandle, hw_params, deviceChannels );
  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error setting channels for device (" << name << "), " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // Set the buffer (or period) size.
  int dir = 0;
  snd_pcm_uframes_t periodSize = *bufferSize;
  result = snd_pcm_hw_params_set_period_size_near( phandle, hw_params, &periodSize, &dir );
  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error setting period size for device (" << name << "), " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }
  *bufferSize = periodSize;

  // Set the buffer number, which in ALSA is referred to as the "period".
  unsigned int periods = 0;
  if ( options && options->flags & RTAUDIO_MINIMIZE_LATENCY ) periods = 2;
  if ( options && options->numberOfBuffers > 0 ) periods = options->numberOfBuffers;
  if ( periods < 2 ) periods = 4; // a fairly safe default value
  result = snd_pcm_hw_params_set_periods_near( phandle, hw_params, &periods, &dir );
  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error setting periods for device (" << name << "), " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  // If attempting to setup a duplex stream, the bufferSize parameter
  // MUST be the same in both directions!
  if ( stream_.mode == OUTPUT && mode == INPUT && *bufferSize != stream_.bufferSize ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: system error setting buffer size for duplex stream on device (" << name << ").";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

  stream_.bufferSize = *bufferSize;

  // Install the hardware configuration
  result = snd_pcm_hw_params( phandle, hw_params );
  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error installing hardware configuration on device (" << name << "), " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

#if defined(__RTAUDIO_DEBUG__)
  fprintf(stderr, "\nRtApiAlsa: dump hardware params after installation:\n\n");
  snd_pcm_hw_params_dump( hw_params, out );
#endif

  // Set the software configuration to fill buffers with zeros and prevent device stopping on xruns.
  snd_pcm_sw_params_t *sw_params = NULL;
  snd_pcm_sw_params_alloca( &sw_params );
  snd_pcm_sw_params_current( phandle, sw_params );
  snd_pcm_sw_params_set_start_threshold( phandle, sw_params, *bufferSize );
  snd_pcm_sw_params_set_stop_threshold( phandle, sw_params, ULONG_MAX );
  snd_pcm_sw_params_set_silence_threshold( phandle, sw_params, 0 );

  // The following two settings were suggested by Theo Veenker
  //snd_pcm_sw_params_set_avail_min( phandle, sw_params, *bufferSize );
  //snd_pcm_sw_params_set_xfer_align( phandle, sw_params, 1 );

  // here are two options for a fix
  //snd_pcm_sw_params_set_silence_size( phandle, sw_params, ULONG_MAX );
  snd_pcm_uframes_t val;
  snd_pcm_sw_params_get_boundary( sw_params, &val );
  snd_pcm_sw_params_set_silence_size( phandle, sw_params, val );

  result = snd_pcm_sw_params( phandle, sw_params );
  if ( result < 0 ) {
    snd_pcm_close( phandle );
    errorStream_ << "RtApiAlsa::probeDeviceOpen: error installing software configuration on device (" << name << "), " << snd_strerror( result ) << ".";
    errorText_ = errorStream_.str();
    return FAILURE;
  }

#if defined(__RTAUDIO_DEBUG__)
  fprintf(stderr, "\nRtApiAlsa: dump software params after installation:\n\n");
  snd_pcm_sw_params_dump( sw_params, out );
#endif

  // Set flags for buffer conversion
  stream_.doConvertBuffer[mode] = false;
  if ( stream_.userFormat != stream_.deviceFormat[mode] )
    stream_.doConvertBuffer[mode] = true;
  if ( stream_.nUserChannels[mode] < stream_.nDeviceChannels[mode] )
    stream_.doConvertBuffer[mode] = true;
  if ( stream_.userInterleaved != stream_.deviceInterleaved[mode] &&
       stream_.nUserChannels[mode] > 1 )
    stream_.doConvertBuffer[mode] = true;

  // Allocate the ApiHandle if necessary and then save.
  AlsaHandle *apiInfo = 0;
  if ( stream_.apiHandle == 0 ) {
    try {
      apiInfo = (AlsaHandle *) new AlsaHandle;
    }
    catch ( std::bad_alloc& ) {
      errorText_ = "RtApiAlsa::probeDeviceOpen: error allocating AlsaHandle memory.";
      goto error;
    }

    if ( pthread_cond_init( &apiInfo->runnable_cv, NULL ) ) {
      errorText_ = "RtApiAlsa::probeDeviceOpen: error initializing pthread condition variable.";
      goto error;
    }

    stream_.apiHandle = (void *) apiInfo;
    apiInfo->handles[0] = 0;
    apiInfo->handles[1] = 0;
  }
  else {
    apiInfo = (AlsaHandle *) stream_.apiHandle;
  }
  apiInfo->handles[mode] = phandle;
  phandle = 0;

  // Allocate necessary internal buffers.
  unsigned long bufferBytes;
  bufferBytes = stream_.nUserChannels[mode] * *bufferSize * formatBytes( stream_.userFormat );
  stream_.userBuffer[mode] = (char *) calloc( bufferBytes, 1 );
  if ( stream_.userBuffer[mode] == NULL ) {
    errorText_ = "RtApiAlsa::probeDeviceOpen: error allocating user buffer memory.";
    goto error;
  }

  if ( stream_.doConvertBuffer[mode] ) {

    bool makeBuffer = true;
    bufferBytes = stream_.nDeviceChannels[mode] * formatBytes( stream_.deviceFormat[mode] );
    if ( mode == INPUT ) {
      if ( stream_.mode == OUTPUT && stream_.deviceBuffer ) {
        unsigned long bytesOut = stream_.nDeviceChannels[0] * formatBytes( stream_.deviceFormat[0] );
        if ( bufferBytes <= bytesOut ) makeBuffer = false;
      }
    }

    if ( makeBuffer ) {
      bufferBytes *= *bufferSize;
      if ( stream_.deviceBuffer ) free( stream_.deviceBuffer );
      stream_.deviceBuffer = (char *) calloc( bufferBytes, 1 );
      if ( stream_.deviceBuffer == NULL ) {
        errorText_ = "RtApiAlsa::probeDeviceOpen: error allocating device buffer memory.";
        goto error;
      }
    }
  }

  stream_.sampleRate = sampleRate;
  stream_.nBuffers = periods;
  stream_.device[mode] = device;
  stream_.state = STREAM_STOPPED;

  // Setup the buffer conversion information structure.
  if ( stream_.doConvertBuffer[mode] ) setConvertInfo( mode, firstChannel );

  // Setup thread if necessary.
  if ( stream_.mode == OUTPUT && mode == INPUT ) {
    // We had already set up an output stream.
    stream_.mode = DUPLEX;
    // Link the streams if possible.
    apiInfo->synchronized = false;
    if ( snd_pcm_link( apiInfo->handles[0], apiInfo->handles[1] ) == 0 )
      apiInfo->synchronized = true;
    else {
      errorText_ = "RtApiAlsa::probeDeviceOpen: unable to synchronize input and output devices.";
      error( RtAudioError::WARNING );
    }
  }
  else {
    stream_.mode = mode;

    // Setup callback thread.
    stream_.callbackInfo.object = (void *) this;

    // Set the thread attributes for joinable and realtime scheduling
    // priority (optional). The higher priority will only take effect
    // if the program is run as root or suid. Note, under Linux
    // processes with CAP_SYS_NICE privilege, a user can change
    // scheduling policy and priority (thus need not be root). See
    // POSIX "capabilities".
    pthread_attr_t attr;
    pthread_attr_init( &attr );
    pthread_attr_setdetachstate( &attr, PTHREAD_CREATE_JOINABLE );
#ifdef SCHED_RR // Undefined with some OSes (e.g. NetBSD 1.6.x with GNU Pthread)
    if ( options && options->flags & RTAUDIO_SCHEDULE_REALTIME ) {
      stream_.callbackInfo.doRealtime = true;
      struct sched_param param;
      int priority = options->priority;
      int min = sched_get_priority_min( SCHED_RR );
      int max = sched_get_priority_max( SCHED_RR );
      if ( priority < min ) priority = min;
      else if ( priority > max ) priority = max;
      param.sched_priority = priority;

      // Set the policy BEFORE the priority. Otherwise it fails.
      pthread_attr_setschedpolicy(&attr, SCHED_RR);
      pthread_attr_setscope (&attr, PTHREAD_SCOPE_SYSTEM);
      // This is definitely required. Otherwise it fails.
      pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
      pthread_attr_setschedparam(&attr, &param);
    }
    else
      pthread_attr_setschedpolicy( &attr, SCHED_OTHER );
#else
    pthread_attr_setschedpolicy( &attr, SCHED_OTHER );
#endif

    stream_.callbackInfo.isRunning = true;
    result = pthread_create( &stream_.callbackInfo.thread, &attr, alsaCallbackHandler, &stream_.callbackInfo );
    pthread_attr_destroy( &attr );
    if ( result ) {
      // Failed. Try instead with default attributes.
      result = pthread_create( &stream_.callbackInfo.thread, NULL, alsaCallbackHandler, &stream_.callbackInfo );
      if ( result ) {
        stream_.callbackInfo.isRunning = false;
        errorText_ = "RtApiAlsa::error creating callback thread!";
        goto error;
      }
    }
  }

  return SUCCESS;

 error:
  if ( apiInfo ) {
    pthread_cond_destroy( &apiInfo->runnable_cv );
    if ( apiInfo->handles[0] ) snd_pcm_close( apiInfo->handles[0] );
    if ( apiInfo->handles[1] ) snd_pcm_close( apiInfo->handles[1] );
    delete apiInfo;
    stream_.apiHandle = 0;
  }

  if ( phandle) snd_pcm_close( phandle );

  for ( int i=0; i<2; i++ ) {
    if ( stream_.userBuffer[i] ) {
      free( stream_.userBuffer[i] );
      stream_.userBuffer[i] = 0;
    }
  }

  if ( stream_.deviceBuffer ) {
    free( stream_.deviceBuffer );
    stream_.deviceBuffer = 0;
  }

  stream_.state = STREAM_CLOSED;
  return FAILURE;
}

void RtApiAlsa :: closeStream()
{
  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiAlsa::closeStream(): no open stream to close!";
    error( RtAudioError::WARNING );
    return;
  }

  AlsaHandle *apiInfo = (AlsaHandle *) stream_.apiHandle;
  stream_.callbackInfo.isRunning = false;
  MUTEX_LOCK( &stream_.mutex );
  if ( stream_.state == STREAM_STOPPED ) {
    apiInfo->runnable = true;
    pthread_cond_signal( &apiInfo->runnable_cv );
  }
  MUTEX_UNLOCK( &stream_.mutex );
  pthread_join( stream_.callbackInfo.thread, NULL );

  if ( stream_.state == STREAM_RUNNING ) {
    stream_.state = STREAM_STOPPED;
    if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX )
      snd_pcm_drop( apiInfo->handles[0] );
    if ( stream_.mode == INPUT || stream_.mode == DUPLEX )
      snd_pcm_drop( apiInfo->handles[1] );
  }

  if ( apiInfo ) {
    pthread_cond_destroy( &apiInfo->runnable_cv );
    if ( apiInfo->handles[0] ) snd_pcm_close( apiInfo->handles[0] );
    if ( apiInfo->handles[1] ) snd_pcm_close( apiInfo->handles[1] );
    delete apiInfo;
    stream_.apiHandle = 0;
  }

  for ( int i=0; i<2; i++ ) {
    if ( stream_.userBuffer[i] ) {
      free( stream_.userBuffer[i] );
      stream_.userBuffer[i] = 0;
    }
  }

  if ( stream_.deviceBuffer ) {
    free( stream_.deviceBuffer );
    stream_.deviceBuffer = 0;
  }

  stream_.mode = UNINITIALIZED;
  stream_.state = STREAM_CLOSED;
}

void RtApiAlsa :: startStream()
{
  // This method calls snd_pcm_prepare if the device isn't already in that state.

  verifyStream();
  if ( stream_.state == STREAM_RUNNING ) {
    errorText_ = "RtApiAlsa::startStream(): the stream is already running!";
    error( RtAudioError::WARNING );
    return;
  }

  MUTEX_LOCK( &stream_.mutex );

#if defined( HAVE_GETTIMEOFDAY )
  gettimeofday( &stream_.lastTickTimestamp, NULL );
#endif

  int result = 0;
  snd_pcm_state_t state;
  AlsaHandle *apiInfo = (AlsaHandle *) stream_.apiHandle;
  snd_pcm_t **handle = (snd_pcm_t **) apiInfo->handles;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {
    state = snd_pcm_state( handle[0] );
    if ( state != SND_PCM_STATE_PREPARED ) {
      result = snd_pcm_prepare( handle[0] );
      if ( result < 0 ) {
        errorStream_ << "RtApiAlsa::startStream: error preparing output pcm device, " << snd_strerror( result ) << ".";
        errorText_ = errorStream_.str();
        goto unlock;
      }
    }
  }

  if ( ( stream_.mode == INPUT || stream_.mode == DUPLEX ) && !apiInfo->synchronized ) {
    result = snd_pcm_drop(handle[1]); // fix to remove stale data received since device has been open
    state = snd_pcm_state( handle[1] );
    if ( state != SND_PCM_STATE_PREPARED ) {
      result = snd_pcm_prepare( handle[1] );
      if ( result < 0 ) {
        errorStream_ << "RtApiAlsa::startStream: error preparing input pcm device, " << snd_strerror( result ) << ".";
        errorText_ = errorStream_.str();
        goto unlock;
      }
    }
  }

  stream_.state = STREAM_RUNNING;

 unlock:
  apiInfo->runnable = true;
  pthread_cond_signal( &apiInfo->runnable_cv );

  MUTEX_UNLOCK( &stream_.mutex );

  if ( result >= 0 ) return;
  error( RtAudioError::SYSTEM_ERROR );
}

void RtApiAlsa :: stopStream()
{
  verifyStream();
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiAlsa::stopStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  stream_.state = STREAM_STOPPED;
  MUTEX_LOCK( &stream_.mutex );

  int result = 0;
  AlsaHandle *apiInfo = (AlsaHandle *) stream_.apiHandle;
  snd_pcm_t **handle = (snd_pcm_t **) apiInfo->handles;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {
    if ( apiInfo->synchronized )
      result = snd_pcm_drop( handle[0] );
    else
      result = snd_pcm_drain( handle[0] );
    if ( result < 0 ) {
      errorStream_ << "RtApiAlsa::stopStream: error draining output pcm device, " << snd_strerror( result ) << ".";
      errorText_ = errorStream_.str();
      goto unlock;
    }
  }

  if ( ( stream_.mode == INPUT || stream_.mode == DUPLEX ) && !apiInfo->synchronized ) {
    result = snd_pcm_drop( handle[1] );
    if ( result < 0 ) {
      errorStream_ << "RtApiAlsa::stopStream: error stopping input pcm device, " << snd_strerror( result ) << ".";
      errorText_ = errorStream_.str();
      goto unlock;
    }
  }

 unlock:
  apiInfo->runnable = false; // fixes high CPU usage when stopped
  MUTEX_UNLOCK( &stream_.mutex );

  if ( result >= 0 ) return;
  error( RtAudioError::SYSTEM_ERROR );
}

void RtApiAlsa :: abortStream()
{
  verifyStream();
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiAlsa::abortStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  stream_.state = STREAM_STOPPED;
  MUTEX_LOCK( &stream_.mutex );

  int result = 0;
  AlsaHandle *apiInfo = (AlsaHandle *) stream_.apiHandle;
  snd_pcm_t **handle = (snd_pcm_t **) apiInfo->handles;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {
    result = snd_pcm_drop( handle[0] );
    if ( result < 0 ) {
      errorStream_ << "RtApiAlsa::abortStream: error aborting output pcm device, " << snd_strerror( result ) << ".";
      errorText_ = errorStream_.str();
      goto unlock;
    }
  }

  if ( ( stream_.mode == INPUT || stream_.mode == DUPLEX ) && !apiInfo->synchronized ) {
    result = snd_pcm_drop( handle[1] );
    if ( result < 0 ) {
      errorStream_ << "RtApiAlsa::abortStream: error aborting input pcm device, " << snd_strerror( result ) << ".";
      errorText_ = errorStream_.str();
      goto unlock;
    }
  }

 unlock:
  apiInfo->runnable = false; // fixes high CPU usage when stopped
  MUTEX_UNLOCK( &stream_.mutex );

  if ( result >= 0 ) return;
  error( RtAudioError::SYSTEM_ERROR );
}

void RtApiAlsa :: callbackEvent()
{
  AlsaHandle *apiInfo = (AlsaHandle *) stream_.apiHandle;
  if ( stream_.state == STREAM_STOPPED ) {
    MUTEX_LOCK( &stream_.mutex );
    while ( !apiInfo->runnable )
      pthread_cond_wait( &apiInfo->runnable_cv, &stream_.mutex );

    if ( stream_.state != STREAM_RUNNING ) {
      MUTEX_UNLOCK( &stream_.mutex );
      return;
    }
    MUTEX_UNLOCK( &stream_.mutex );
  }

  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiAlsa::callbackEvent(): the stream is closed ... this shouldn't happen!";
    error( RtAudioError::WARNING );
    return;
  }

  int doStopStream = 0;
  RtAudioCallback callback = (RtAudioCallback) stream_.callbackInfo.callback;
  double streamTime = getStreamTime();
  RtAudioStreamStatus status = 0;
  if ( stream_.mode != INPUT && apiInfo->xrun[0] == true ) {
    status |= RTAUDIO_OUTPUT_UNDERFLOW;
    apiInfo->xrun[0] = false;
  }
  if ( stream_.mode != OUTPUT && apiInfo->xrun[1] == true ) {
    status |= RTAUDIO_INPUT_OVERFLOW;
    apiInfo->xrun[1] = false;
  }
  doStopStream = callback( stream_.userBuffer[0], stream_.userBuffer[1],
                           stream_.bufferSize, streamTime, status, stream_.callbackInfo.userData );

  if ( doStopStream == 2 ) {
    abortStream();
    return;
  }

  MUTEX_LOCK( &stream_.mutex );

  // The state might change while waiting on a mutex.
  if ( stream_.state == STREAM_STOPPED ) goto unlock;

  int result;
  char *buffer;
  int channels;
  snd_pcm_t **handle;
  snd_pcm_sframes_t frames;
  RtAudioFormat format;
  handle = (snd_pcm_t **) apiInfo->handles;

  if ( stream_.mode == INPUT || stream_.mode == DUPLEX ) {

    // Setup parameters.
    if ( stream_.doConvertBuffer[1] ) {
      buffer = stream_.deviceBuffer;
      channels = stream_.nDeviceChannels[1];
      format = stream_.deviceFormat[1];
    }
    else {
      buffer = stream_.userBuffer[1];
      channels = stream_.nUserChannels[1];
      format = stream_.userFormat;
    }

    // Read samples from device in interleaved/non-interleaved format.
    if ( stream_.deviceInterleaved[1] )
      result = snd_pcm_readi( handle[1], buffer, stream_.bufferSize );
    else {
      void *bufs[channels];
      size_t offset = stream_.bufferSize * formatBytes( format );
      for ( int i=0; i<channels; i++ )
        bufs[i] = (void *) (buffer + (i * offset));
      result = snd_pcm_readn( handle[1], bufs, stream_.bufferSize );
    }

    if ( result < (int) stream_.bufferSize ) {
      // Either an error or overrun occurred.
      if ( result == -EPIPE ) {
        snd_pcm_state_t state = snd_pcm_state( handle[1] );
        if ( state == SND_PCM_STATE_XRUN ) {
          apiInfo->xrun[1] = true;
          result = snd_pcm_prepare( handle[1] );
          if ( result < 0 ) {
            errorStream_ << "RtApiAlsa::callbackEvent: error preparing device after overrun, " << snd_strerror( result ) << ".";
            errorText_ = errorStream_.str();
          }
        }
        else {
          errorStream_ << "RtApiAlsa::callbackEvent: error, current state is " << snd_pcm_state_name( state ) << ", " << snd_strerror( result ) << ".";
          errorText_ = errorStream_.str();
        }
      }
      else {
        errorStream_ << "RtApiAlsa::callbackEvent: audio read error, " << snd_strerror( result ) << ".";
        errorText_ = errorStream_.str();
      }
      error( RtAudioError::WARNING );
      goto tryOutput;
    }

    // Do byte swapping if necessary.
    if ( stream_.doByteSwap[1] )
      byteSwapBuffer( buffer, stream_.bufferSize * channels, format );

    // Do buffer conversion if necessary.
    if ( stream_.doConvertBuffer[1] )
      convertBuffer( stream_.userBuffer[1], stream_.deviceBuffer, stream_.convertInfo[1] );

    // Check stream latency
    result = snd_pcm_delay( handle[1], &frames );
    if ( result == 0 && frames > 0 ) stream_.latency[1] = frames;
  }

 tryOutput:

  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {

    // Setup parameters and do buffer conversion if necessary.
    if ( stream_.doConvertBuffer[0] ) {
      buffer = stream_.deviceBuffer;
      convertBuffer( buffer, stream_.userBuffer[0], stream_.convertInfo[0] );
      channels = stream_.nDeviceChannels[0];
      format = stream_.deviceFormat[0];
    }
    else {
      buffer = stream_.userBuffer[0];
      channels = stream_.nUserChannels[0];
      format = stream_.userFormat;
    }

    // Do byte swapping if necessary.
    if ( stream_.doByteSwap[0] )
      byteSwapBuffer(buffer, stream_.bufferSize * channels, format);

    // Write samples to device in interleaved/non-interleaved format.
    if ( stream_.deviceInterleaved[0] )
      result = snd_pcm_writei( handle[0], buffer, stream_.bufferSize );
    else {
      void *bufs[channels];
      size_t offset = stream_.bufferSize * formatBytes( format );
      for ( int i=0; i<channels; i++ )
        bufs[i] = (void *) (buffer + (i * offset));
      result = snd_pcm_writen( handle[0], bufs, stream_.bufferSize );
    }

    if ( result < (int) stream_.bufferSize ) {
      // Either an error or underrun occurred.
      if ( result == -EPIPE ) {
        snd_pcm_state_t state = snd_pcm_state( handle[0] );
        if ( state == SND_PCM_STATE_XRUN ) {
          apiInfo->xrun[0] = true;
          result = snd_pcm_prepare( handle[0] );
          if ( result < 0 ) {
            errorStream_ << "RtApiAlsa::callbackEvent: error preparing device after underrun, " << snd_strerror( result ) << ".";
            errorText_ = errorStream_.str();
          }
          else
            errorText_ = "RtApiAlsa::callbackEvent: audio write error, underrun.";
        }
        else {
          errorStream_ << "RtApiAlsa::callbackEvent: error, current state is " << snd_pcm_state_name( state ) << ", " << snd_strerror( result ) << ".";
          errorText_ = errorStream_.str();
        }
      }
      else {
        errorStream_ << "RtApiAlsa::callbackEvent: audio write error, " << snd_strerror( result ) << ".";
        errorText_ = errorStream_.str();
      }
      error( RtAudioError::WARNING );
      goto unlock;
    }

    // Check stream latency
    result = snd_pcm_delay( handle[0], &frames );
    if ( result == 0 && frames > 0 ) stream_.latency[0] = frames;
  }

 unlock:
  MUTEX_UNLOCK( &stream_.mutex );

  RtApi::tickStreamTime();
  if ( doStopStream == 1 ) this->stopStream();
}

static void *alsaCallbackHandler( void *ptr )
{
  CallbackInfo *info = (CallbackInfo *) ptr;
  RtApiAlsa *object = (RtApiAlsa *) info->object;
  bool *isRunning = &info->isRunning;

#ifdef SCHED_RR // Undefined with some OSes (e.g. NetBSD 1.6.x with GNU Pthread)
  if ( info->doRealtime ) {
    std::cerr << "RtAudio alsa: " << (sched_getscheduler(0) == SCHED_RR ? "" : "_NOT_ ")
              << "running realtime scheduling" << std::endl;
  }
#endif

  while ( *isRunning == true ) {
    pthread_testcancel();
    object->callbackEvent();
  }

  pthread_exit( NULL );
}

//******************** End of __LINUX_ALSA__ *********************//
#endif

#if defined(__LINUX_PULSE__)

// Code written by Peter Meerwald, pmeerw@pmeerw.net
// and Tristan Matthews.
#include #include #include static const unsigned int SUPPORTED_SAMPLERATES[] = { 8000, 16000, 22050, 32000, 44100, 48000, 96000, 0}; struct rtaudio_pa_format_mapping_t { RtAudioFormat rtaudio_format; pa_sample_format_t pa_format; }; static const rtaudio_pa_format_mapping_t supported_sampleformats[] = { {RTAUDIO_SINT16, PA_SAMPLE_S16LE}, {RTAUDIO_SINT32, PA_SAMPLE_S32LE}, {RTAUDIO_FLOAT32, PA_SAMPLE_FLOAT32LE}, {0, PA_SAMPLE_INVALID}}; struct PulseAudioHandle { pa_simple *s_play; pa_simple *s_rec; pthread_t thread; pthread_cond_t runnable_cv; bool runnable; PulseAudioHandle() : s_play(0), s_rec(0), runnable(false) { } }; RtApiPulse::~RtApiPulse() { if ( stream_.state != STREAM_CLOSED ) closeStream(); } unsigned int RtApiPulse::getDeviceCount( void ) { return 1; } RtAudio::DeviceInfo RtApiPulse::getDeviceInfo( unsigned int /*device*/ ) { RtAudio::DeviceInfo info; info.probed = true; info.name = "PulseAudio"; info.outputChannels = 2; info.inputChannels = 2; info.duplexChannels = 2; info.isDefaultOutput = true; info.isDefaultInput = true; for ( const unsigned int *sr = SUPPORTED_SAMPLERATES; *sr; ++sr ) info.sampleRates.push_back( *sr ); info.preferredSampleRate = 48000; info.nativeFormats = RTAUDIO_SINT16 | RTAUDIO_SINT32 | RTAUDIO_FLOAT32; return info; } static void *pulseaudio_callback( void * user ) { CallbackInfo *cbi = static_cast( user ); RtApiPulse *context = static_cast( cbi->object ); volatile bool *isRunning = &cbi->isRunning; #ifdef SCHED_RR // Undefined with some OSes (e.g. NetBSD 1.6.x with GNU Pthread) if (cbi->doRealtime) { std::cerr << "RtAudio pulse: " << (sched_getscheduler(0) == SCHED_RR ? 
"" : "_NOT_ ") << "running realtime scheduling" << std::endl; } #endif while ( *isRunning ) { pthread_testcancel(); context->callbackEvent(); } pthread_exit( NULL ); } void RtApiPulse::closeStream( void ) { PulseAudioHandle *pah = static_cast( stream_.apiHandle ); stream_.callbackInfo.isRunning = false; if ( pah ) { MUTEX_LOCK( &stream_.mutex ); if ( stream_.state == STREAM_STOPPED ) { pah->runnable = true; pthread_cond_signal( &pah->runnable_cv ); } MUTEX_UNLOCK( &stream_.mutex ); pthread_join( pah->thread, 0 ); if ( pah->s_play ) { pa_simple_flush( pah->s_play, NULL ); pa_simple_free( pah->s_play ); } if ( pah->s_rec ) pa_simple_free( pah->s_rec ); pthread_cond_destroy( &pah->runnable_cv ); delete pah; stream_.apiHandle = 0; } if ( stream_.userBuffer[0] ) { free( stream_.userBuffer[0] ); stream_.userBuffer[0] = 0; } if ( stream_.userBuffer[1] ) { free( stream_.userBuffer[1] ); stream_.userBuffer[1] = 0; } stream_.state = STREAM_CLOSED; stream_.mode = UNINITIALIZED; } void RtApiPulse::callbackEvent( void ) { PulseAudioHandle *pah = static_cast( stream_.apiHandle ); if ( stream_.state == STREAM_STOPPED ) { MUTEX_LOCK( &stream_.mutex ); while ( !pah->runnable ) pthread_cond_wait( &pah->runnable_cv, &stream_.mutex ); if ( stream_.state != STREAM_RUNNING ) { MUTEX_UNLOCK( &stream_.mutex ); return; } MUTEX_UNLOCK( &stream_.mutex ); } if ( stream_.state == STREAM_CLOSED ) { errorText_ = "RtApiPulse::callbackEvent(): the stream is closed ... " "this shouldn't happen!"; error( RtAudioError::WARNING ); return; } RtAudioCallback callback = (RtAudioCallback) stream_.callbackInfo.callback; double streamTime = getStreamTime(); RtAudioStreamStatus status = 0; int doStopStream = callback( stream_.userBuffer[OUTPUT], stream_.userBuffer[INPUT], stream_.bufferSize, streamTime, status, stream_.callbackInfo.userData ); if ( doStopStream == 2 ) { abortStream(); return; } MUTEX_LOCK( &stream_.mutex ); void *pulse_in = stream_.doConvertBuffer[INPUT] ? 
    stream_.deviceBuffer : stream_.userBuffer[INPUT];
  void *pulse_out = stream_.doConvertBuffer[OUTPUT] ?
    stream_.deviceBuffer : stream_.userBuffer[OUTPUT];

  if ( stream_.state != STREAM_RUNNING )
    goto unlock;

  int pa_error;
  size_t bytes;
  if (stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {
    if ( stream_.doConvertBuffer[OUTPUT] ) {
      convertBuffer( stream_.deviceBuffer,
                     stream_.userBuffer[OUTPUT],
                     stream_.convertInfo[OUTPUT] );
      bytes = stream_.nDeviceChannels[OUTPUT] * stream_.bufferSize *
              formatBytes( stream_.deviceFormat[OUTPUT] );
    } else
      bytes = stream_.nUserChannels[OUTPUT] * stream_.bufferSize *
              formatBytes( stream_.userFormat );

    if ( pa_simple_write( pah->s_play, pulse_out, bytes, &pa_error ) < 0 ) {
      errorStream_ << "RtApiPulse::callbackEvent: audio write error, " <<
        pa_strerror( pa_error ) << ".";
      errorText_ = errorStream_.str();
      error( RtAudioError::WARNING );
    }
  }

  if ( stream_.mode == INPUT || stream_.mode == DUPLEX) {
    if ( stream_.doConvertBuffer[INPUT] )
      bytes = stream_.nDeviceChannels[INPUT] * stream_.bufferSize *
              formatBytes( stream_.deviceFormat[INPUT] );
    else
      bytes = stream_.nUserChannels[INPUT] * stream_.bufferSize *
              formatBytes( stream_.userFormat );

    if ( pa_simple_read( pah->s_rec, pulse_in, bytes, &pa_error ) < 0 ) {
      errorStream_ << "RtApiPulse::callbackEvent: audio read error, " <<
        pa_strerror( pa_error ) << ".";
      errorText_ = errorStream_.str();
      error( RtAudioError::WARNING );
    }
    if ( stream_.doConvertBuffer[INPUT] ) {
      convertBuffer( stream_.userBuffer[INPUT],
                     stream_.deviceBuffer,
                     stream_.convertInfo[INPUT] );
    }
  }

 unlock:
  MUTEX_UNLOCK( &stream_.mutex );
  RtApi::tickStreamTime();

  if ( doStopStream == 1 )
    stopStream();
}

void RtApiPulse::startStream( void )
{
  PulseAudioHandle *pah = static_cast<PulseAudioHandle *>( stream_.apiHandle );

  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiPulse::startStream(): the stream is not open!";
    error( RtAudioError::INVALID_USE );
    return;
  }
  if ( stream_.state == STREAM_RUNNING ) {
    errorText_ = "RtApiPulse::startStream(): the
stream is already running!";
    error( RtAudioError::WARNING );
    return;
  }

  MUTEX_LOCK( &stream_.mutex );
#if defined( HAVE_GETTIMEOFDAY )
  gettimeofday( &stream_.lastTickTimestamp, NULL );
#endif
  stream_.state = STREAM_RUNNING;
  pah->runnable = true;
  pthread_cond_signal( &pah->runnable_cv );
  MUTEX_UNLOCK( &stream_.mutex );
}

void RtApiPulse::stopStream( void )
{
  PulseAudioHandle *pah = static_cast<PulseAudioHandle *>( stream_.apiHandle );

  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiPulse::stopStream(): the stream is not open!";
    error( RtAudioError::INVALID_USE );
    return;
  }
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiPulse::stopStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  stream_.state = STREAM_STOPPED;
  MUTEX_LOCK( &stream_.mutex );

  if ( pah && pah->s_play ) {
    int pa_error;
    if ( pa_simple_drain( pah->s_play, &pa_error ) < 0 ) {
      errorStream_ << "RtApiPulse::stopStream: error draining output device, " <<
        pa_strerror( pa_error ) << ".";
      errorText_ = errorStream_.str();
      MUTEX_UNLOCK( &stream_.mutex );
      error( RtAudioError::SYSTEM_ERROR );
      return;
    }
  }

  stream_.state = STREAM_STOPPED;
  MUTEX_UNLOCK( &stream_.mutex );
}

void RtApiPulse::abortStream( void )
{
  PulseAudioHandle *pah = static_cast<PulseAudioHandle *>( stream_.apiHandle );

  if ( stream_.state == STREAM_CLOSED ) {
    errorText_ = "RtApiPulse::abortStream(): the stream is not open!";
    error( RtAudioError::INVALID_USE );
    return;
  }
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiPulse::abortStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  stream_.state = STREAM_STOPPED;
  MUTEX_LOCK( &stream_.mutex );

  if ( pah && pah->s_play ) {
    int pa_error;
    if ( pa_simple_flush( pah->s_play, &pa_error ) < 0 ) {
      errorStream_ << "RtApiPulse::abortStream: error flushing output device, " <<
        pa_strerror( pa_error ) << ".";
      errorText_ = errorStream_.str();
      MUTEX_UNLOCK( &stream_.mutex );
      error( RtAudioError::SYSTEM_ERROR );
      return;
    }
  }

  stream_.state = STREAM_STOPPED;
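The startStream/stopStream/abortStream guards above all enforce the same small stream-state machine. A minimal self-contained sketch of the transitions they permit (the enum and helper names here are illustrative only, not part of RtAudio's API):

```cpp
#include <cassert>

// Illustrative mirror of the stream states used above.
enum StreamStateSketch { SKETCH_CLOSED, SKETCH_STOPPED, SKETCH_RUNNING };

// True when the requested transition is one the guards above would permit
// without raising INVALID_USE or a WARNING.
inline bool canTransition( StreamStateSketch from, StreamStateSketch to ) {
  switch ( to ) {
  case SKETCH_RUNNING: return from == SKETCH_STOPPED; // startStream()
  case SKETCH_STOPPED: return from == SKETCH_RUNNING; // stopStream()/abortStream()
  case SKETCH_CLOSED:  return from != SKETCH_CLOSED;  // closeStream()
  }
  return false;
}
```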
MUTEX_UNLOCK( &stream_.mutex ); } bool RtApiPulse::probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels, unsigned int firstChannel, unsigned int sampleRate, RtAudioFormat format, unsigned int *bufferSize, RtAudio::StreamOptions *options ) { PulseAudioHandle *pah = 0; unsigned long bufferBytes = 0; pa_sample_spec ss; if ( device != 0 ) return false; if ( mode != INPUT && mode != OUTPUT ) return false; if ( channels != 1 && channels != 2 ) { errorText_ = "RtApiPulse::probeDeviceOpen: unsupported number of channels."; return false; } ss.channels = channels; if ( firstChannel != 0 ) return false; bool sr_found = false; for ( const unsigned int *sr = SUPPORTED_SAMPLERATES; *sr; ++sr ) { if ( sampleRate == *sr ) { sr_found = true; stream_.sampleRate = sampleRate; ss.rate = sampleRate; break; } } if ( !sr_found ) { errorText_ = "RtApiPulse::probeDeviceOpen: unsupported sample rate."; return false; } bool sf_found = 0; for ( const rtaudio_pa_format_mapping_t *sf = supported_sampleformats; sf->rtaudio_format && sf->pa_format != PA_SAMPLE_INVALID; ++sf ) { if ( format == sf->rtaudio_format ) { sf_found = true; stream_.userFormat = sf->rtaudio_format; stream_.deviceFormat[mode] = stream_.userFormat; ss.format = sf->pa_format; break; } } if ( !sf_found ) { // Use internal data format conversion. stream_.userFormat = format; stream_.deviceFormat[mode] = RTAUDIO_FLOAT32; ss.format = PA_SAMPLE_FLOAT32LE; } // Set other stream parameters. if ( options && options->flags & RTAUDIO_NONINTERLEAVED ) stream_.userInterleaved = false; else stream_.userInterleaved = true; stream_.deviceInterleaved[mode] = true; stream_.nBuffers = 1; stream_.doByteSwap[mode] = false; stream_.nUserChannels[mode] = channels; stream_.nDeviceChannels[mode] = channels + firstChannel; stream_.channelOffset[mode] = 0; std::string streamName = "RtAudio"; // Set flags for buffer conversion. 
stream_.doConvertBuffer[mode] = false; if ( stream_.userFormat != stream_.deviceFormat[mode] ) stream_.doConvertBuffer[mode] = true; if ( stream_.nUserChannels[mode] < stream_.nDeviceChannels[mode] ) stream_.doConvertBuffer[mode] = true; // Allocate necessary internal buffers. bufferBytes = stream_.nUserChannels[mode] * *bufferSize * formatBytes( stream_.userFormat ); stream_.userBuffer[mode] = (char *) calloc( bufferBytes, 1 ); if ( stream_.userBuffer[mode] == NULL ) { errorText_ = "RtApiPulse::probeDeviceOpen: error allocating user buffer memory."; goto error; } stream_.bufferSize = *bufferSize; if ( stream_.doConvertBuffer[mode] ) { bool makeBuffer = true; bufferBytes = stream_.nDeviceChannels[mode] * formatBytes( stream_.deviceFormat[mode] ); if ( mode == INPUT ) { if ( stream_.mode == OUTPUT && stream_.deviceBuffer ) { unsigned long bytesOut = stream_.nDeviceChannels[0] * formatBytes( stream_.deviceFormat[0] ); if ( bufferBytes <= bytesOut ) makeBuffer = false; } } if ( makeBuffer ) { bufferBytes *= *bufferSize; if ( stream_.deviceBuffer ) free( stream_.deviceBuffer ); stream_.deviceBuffer = (char *) calloc( bufferBytes, 1 ); if ( stream_.deviceBuffer == NULL ) { errorText_ = "RtApiPulse::probeDeviceOpen: error allocating device buffer memory."; goto error; } } } stream_.device[mode] = device; // Setup the buffer conversion information structure. 
  if ( stream_.doConvertBuffer[mode] )
    setConvertInfo( mode, firstChannel );

  if ( !stream_.apiHandle ) {
    PulseAudioHandle *pah = new PulseAudioHandle;
    if ( !pah ) {
      errorText_ = "RtApiPulse::probeDeviceOpen: error allocating memory for handle.";
      goto error;
    }

    stream_.apiHandle = pah;
    if ( pthread_cond_init( &pah->runnable_cv, NULL ) != 0 ) {
      errorText_ = "RtApiPulse::probeDeviceOpen: error creating condition variable.";
      goto error;
    }
  }
  pah = static_cast<PulseAudioHandle *>( stream_.apiHandle );

  int error;
  if ( options && !options->streamName.empty() ) streamName = options->streamName;
  switch ( mode ) {
  case INPUT:
    pa_buffer_attr buffer_attr;
    buffer_attr.fragsize = bufferBytes;
    buffer_attr.maxlength = -1;

    pah->s_rec = pa_simple_new( NULL, streamName.c_str(), PA_STREAM_RECORD, NULL, "Record", &ss, NULL, &buffer_attr, &error );
    if ( !pah->s_rec ) {
      errorText_ = "RtApiPulse::probeDeviceOpen: error connecting input to PulseAudio server.";
      goto error;
    }
    break;
  case OUTPUT:
    pah->s_play = pa_simple_new( NULL, streamName.c_str(), PA_STREAM_PLAYBACK, NULL, "Playback", &ss, NULL, NULL, &error );
    if ( !pah->s_play ) {
      errorText_ = "RtApiPulse::probeDeviceOpen: error connecting output to PulseAudio server.";
      goto error;
    }
    break;
  default:
    goto error;
  }

  if ( stream_.mode == UNINITIALIZED )
    stream_.mode = mode;
  else if ( stream_.mode == mode )
    goto error;
  else
    stream_.mode = DUPLEX;

  if ( !stream_.callbackInfo.isRunning ) {
    stream_.callbackInfo.object = this;

    stream_.state = STREAM_STOPPED;
    // Set the thread attributes for joinable and realtime scheduling
    // priority (optional).  The higher priority will only take affect
    // if the program is run as root or suid. Note, under Linux
    // processes with CAP_SYS_NICE privilege, a user can change
    // scheduling policy and priority (thus need not be root). See
    // POSIX "capabilities".
    pthread_attr_t attr;
    pthread_attr_init( &attr );
    pthread_attr_setdetachstate( &attr, PTHREAD_CREATE_JOINABLE );
#ifdef SCHED_RR // Undefined with some OSes (e.g.
NetBSD 1.6.x with GNU Pthread)
    if ( options && options->flags & RTAUDIO_SCHEDULE_REALTIME ) {
      stream_.callbackInfo.doRealtime = true;
      struct sched_param param;
      int priority = options->priority;
      int min = sched_get_priority_min( SCHED_RR );
      int max = sched_get_priority_max( SCHED_RR );
      if ( priority < min ) priority = min;
      else if ( priority > max ) priority = max;
      param.sched_priority = priority;

      // Set the policy BEFORE the priority. Otherwise it fails.
      pthread_attr_setschedpolicy(&attr, SCHED_RR);
      pthread_attr_setscope (&attr, PTHREAD_SCOPE_SYSTEM);
      // This is definitely required. Otherwise it fails.
      pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
      pthread_attr_setschedparam(&attr, &param);
    }
    else
      pthread_attr_setschedpolicy( &attr, SCHED_OTHER );
#else
    pthread_attr_setschedpolicy( &attr, SCHED_OTHER );
#endif

    stream_.callbackInfo.isRunning = true;
    int result = pthread_create( &pah->thread, &attr, pulseaudio_callback, (void *)&stream_.callbackInfo);
    pthread_attr_destroy(&attr);
    if(result != 0) {
      // Failed. Try instead with default attributes.
      result = pthread_create( &pah->thread, NULL, pulseaudio_callback, (void *)&stream_.callbackInfo);
      if(result != 0) {
        stream_.callbackInfo.isRunning = false;
        errorText_ = "RtApiPulse::probeDeviceOpen: error creating thread.";
        goto error;
      }
    }
  }

  return SUCCESS;

 error:
  if ( pah && stream_.callbackInfo.isRunning ) {
    pthread_cond_destroy( &pah->runnable_cv );
    delete pah;
    stream_.apiHandle = 0;
  }

  for ( int i=0; i<2; i++ ) {
    if ( stream_.userBuffer[i] ) {
      free( stream_.userBuffer[i] );
      stream_.userBuffer[i] = 0;
    }
  }

  if ( stream_.deviceBuffer ) {
    free( stream_.deviceBuffer );
    stream_.deviceBuffer = 0;
  }

  stream_.state = STREAM_CLOSED;
  return FAILURE;
}

//******************** End of __LINUX_PULSE__ *********************//
#endif

#if defined(__LINUX_OSS__)

#include <unistd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/soundcard.h>
#include <errno.h>
#include <math.h>

static void *ossCallbackHandler(void * ptr);

// A structure to hold various information related to the OSS API
// implementation.
struct OssHandle {
  int id[2];    // device ids
  bool xrun[2];
  bool triggered;
  pthread_cond_t runnable;

  OssHandle()
    :triggered(false) { id[0] = 0; id[1] = 0; xrun[0] = false; xrun[1] = false; }
};

RtApiOss :: RtApiOss()
{
  // Nothing to do here.
} RtApiOss :: ~RtApiOss() { if ( stream_.state != STREAM_CLOSED ) closeStream(); } unsigned int RtApiOss :: getDeviceCount( void ) { int mixerfd = open( "/dev/mixer", O_RDWR, 0 ); if ( mixerfd == -1 ) { errorText_ = "RtApiOss::getDeviceCount: error opening '/dev/mixer'."; error( RtAudioError::WARNING ); return 0; } oss_sysinfo sysinfo; if ( ioctl( mixerfd, SNDCTL_SYSINFO, &sysinfo ) == -1 ) { close( mixerfd ); errorText_ = "RtApiOss::getDeviceCount: error getting sysinfo, OSS version >= 4.0 is required."; error( RtAudioError::WARNING ); return 0; } close( mixerfd ); return sysinfo.numaudios; } RtAudio::DeviceInfo RtApiOss :: getDeviceInfo( unsigned int device ) { RtAudio::DeviceInfo info; info.probed = false; int mixerfd = open( "/dev/mixer", O_RDWR, 0 ); if ( mixerfd == -1 ) { errorText_ = "RtApiOss::getDeviceInfo: error opening '/dev/mixer'."; error( RtAudioError::WARNING ); return info; } oss_sysinfo sysinfo; int result = ioctl( mixerfd, SNDCTL_SYSINFO, &sysinfo ); if ( result == -1 ) { close( mixerfd ); errorText_ = "RtApiOss::getDeviceInfo: error getting sysinfo, OSS version >= 4.0 is required."; error( RtAudioError::WARNING ); return info; } unsigned nDevices = sysinfo.numaudios; if ( nDevices == 0 ) { close( mixerfd ); errorText_ = "RtApiOss::getDeviceInfo: no devices found!"; error( RtAudioError::INVALID_USE ); return info; } if ( device >= nDevices ) { close( mixerfd ); errorText_ = "RtApiOss::getDeviceInfo: device ID is invalid!"; error( RtAudioError::INVALID_USE ); return info; } oss_audioinfo ainfo; ainfo.dev = device; result = ioctl( mixerfd, SNDCTL_AUDIOINFO, &ainfo ); close( mixerfd ); if ( result == -1 ) { errorStream_ << "RtApiOss::getDeviceInfo: error getting device (" << ainfo.name << ") info."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } // Probe channels if ( ainfo.caps & PCM_CAP_OUTPUT ) info.outputChannels = ainfo.max_channels; if ( ainfo.caps & PCM_CAP_INPUT ) info.inputChannels = ainfo.max_channels; if ( 
ainfo.caps & PCM_CAP_DUPLEX ) { if ( info.outputChannels > 0 && info.inputChannels > 0 && ainfo.caps & PCM_CAP_DUPLEX ) info.duplexChannels = (info.outputChannels > info.inputChannels) ? info.inputChannels : info.outputChannels; } // Probe data formats ... do for input unsigned long mask = ainfo.iformats; if ( mask & AFMT_S16_LE || mask & AFMT_S16_BE ) info.nativeFormats |= RTAUDIO_SINT16; if ( mask & AFMT_S8 ) info.nativeFormats |= RTAUDIO_SINT8; if ( mask & AFMT_S32_LE || mask & AFMT_S32_BE ) info.nativeFormats |= RTAUDIO_SINT32; #ifdef AFMT_FLOAT if ( mask & AFMT_FLOAT ) info.nativeFormats |= RTAUDIO_FLOAT32; #endif if ( mask & AFMT_S24_LE || mask & AFMT_S24_BE ) info.nativeFormats |= RTAUDIO_SINT24; // Check that we have at least one supported format if ( info.nativeFormats == 0 ) { errorStream_ << "RtApiOss::getDeviceInfo: device (" << ainfo.name << ") data format not supported by RtAudio."; errorText_ = errorStream_.str(); error( RtAudioError::WARNING ); return info; } // Probe the supported sample rates. 
  info.sampleRates.clear();
  if ( ainfo.nrates ) {
    for ( unsigned int i=0; i<ainfo.nrates; i++ ) {
      for ( unsigned int k=0; k<MAX_SAMPLE_RATES; k++ ) {
        if ( ainfo.rates[i] == SAMPLE_RATES[k] ) {
          info.sampleRates.push_back( SAMPLE_RATES[k] );

          if ( !info.preferredSampleRate || ( SAMPLE_RATES[k] <= 48000 && SAMPLE_RATES[k] > info.preferredSampleRate ) )
            info.preferredSampleRate = SAMPLE_RATES[k];

          break;
        }
      }
    }
  }
  else {
    // Check min and max rate values;
    for ( unsigned int k=0; k<MAX_SAMPLE_RATES; k++ ) {
      if ( ainfo.min_rate <= (int) SAMPLE_RATES[k] && ainfo.max_rate >= (int) SAMPLE_RATES[k] ) {
        info.sampleRates.push_back( SAMPLE_RATES[k] );

        if ( !info.preferredSampleRate || ( SAMPLE_RATES[k] <= 48000 && SAMPLE_RATES[k] > info.preferredSampleRate ) )
          info.preferredSampleRate = SAMPLE_RATES[k];
      }
    }
  }

  if ( info.sampleRates.size() == 0 ) {
    errorStream_ << "RtApiOss::getDeviceInfo: no supported sample rates found for device (" << ainfo.name << ").";
    errorText_ = errorStream_.str();
    error( RtAudioError::WARNING );
  }
  else {
    info.probed = true;
    info.name = ainfo.name;
  }

  return info;
}

bool RtApiOss :: probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                                  unsigned int firstChannel, unsigned int sampleRate,
                                  RtAudioFormat format, unsigned int *bufferSize,
                                  RtAudio::StreamOptions *options )
{
  int mixerfd = open( "/dev/mixer", O_RDWR, 0 );
  if ( mixerfd == -1 ) {
    errorText_ = "RtApiOss::probeDeviceOpen: error opening '/dev/mixer'.";
    return FAILURE;
  }

  oss_sysinfo sysinfo;
  int result = ioctl( mixerfd, SNDCTL_SYSINFO, &sysinfo );
  if ( result == -1 ) {
    close( mixerfd );
    errorText_ = "RtApiOss::probeDeviceOpen: error getting sysinfo, OSS version >= 4.0 is required.";
    return FAILURE;
  }

  unsigned nDevices = sysinfo.numaudios;
  if ( nDevices == 0 ) {
    // This should not happen because a check is made before this function is called.
    close( mixerfd );
    errorText_ = "RtApiOss::probeDeviceOpen: no devices found!";
    return FAILURE;
  }

  if ( device >= nDevices ) {
    // This should not happen because a check is made before this function is called.
close( mixerfd ); errorText_ = "RtApiOss::probeDeviceOpen: device ID is invalid!"; return FAILURE; } oss_audioinfo ainfo; ainfo.dev = device; result = ioctl( mixerfd, SNDCTL_AUDIOINFO, &ainfo ); close( mixerfd ); if ( result == -1 ) { errorStream_ << "RtApiOss::getDeviceInfo: error getting device (" << ainfo.name << ") info."; errorText_ = errorStream_.str(); return FAILURE; } // Check if device supports input or output if ( ( mode == OUTPUT && !( ainfo.caps & PCM_CAP_OUTPUT ) ) || ( mode == INPUT && !( ainfo.caps & PCM_CAP_INPUT ) ) ) { if ( mode == OUTPUT ) errorStream_ << "RtApiOss::probeDeviceOpen: device (" << ainfo.name << ") does not support output."; else errorStream_ << "RtApiOss::probeDeviceOpen: device (" << ainfo.name << ") does not support input."; errorText_ = errorStream_.str(); return FAILURE; } int flags = 0; OssHandle *handle = (OssHandle *) stream_.apiHandle; if ( mode == OUTPUT ) flags |= O_WRONLY; else { // mode == INPUT if (stream_.mode == OUTPUT && stream_.device[0] == device) { // We just set the same device for playback ... close and reopen for duplex (OSS only). close( handle->id[0] ); handle->id[0] = 0; if ( !( ainfo.caps & PCM_CAP_DUPLEX ) ) { errorStream_ << "RtApiOss::probeDeviceOpen: device (" << ainfo.name << ") does not support duplex mode."; errorText_ = errorStream_.str(); return FAILURE; } // Check that the number previously set channels is the same. if ( stream_.nUserChannels[0] != channels ) { errorStream_ << "RtApiOss::probeDeviceOpen: input/output channels must be equal for OSS duplex device (" << ainfo.name << ")."; errorText_ = errorStream_.str(); return FAILURE; } flags |= O_RDWR; } else flags |= O_RDONLY; } // Set exclusive access if specified. if ( options && options->flags & RTAUDIO_HOG_DEVICE ) flags |= O_EXCL; // Try to open the device. 
int fd; fd = open( ainfo.devnode, flags, 0 ); if ( fd == -1 ) { if ( errno == EBUSY ) errorStream_ << "RtApiOss::probeDeviceOpen: device (" << ainfo.name << ") is busy."; else errorStream_ << "RtApiOss::probeDeviceOpen: error opening device (" << ainfo.name << ")."; errorText_ = errorStream_.str(); return FAILURE; } // For duplex operation, specifically set this mode (this doesn't seem to work). /* if ( flags | O_RDWR ) { result = ioctl( fd, SNDCTL_DSP_SETDUPLEX, NULL ); if ( result == -1) { errorStream_ << "RtApiOss::probeDeviceOpen: error setting duplex mode for device (" << ainfo.name << ")."; errorText_ = errorStream_.str(); return FAILURE; } } */ // Check the device channel support. stream_.nUserChannels[mode] = channels; if ( ainfo.max_channels < (int)(channels + firstChannel) ) { close( fd ); errorStream_ << "RtApiOss::probeDeviceOpen: the device (" << ainfo.name << ") does not support requested channel parameters."; errorText_ = errorStream_.str(); return FAILURE; } // Set the number of channels. int deviceChannels = channels + firstChannel; result = ioctl( fd, SNDCTL_DSP_CHANNELS, &deviceChannels ); if ( result == -1 || deviceChannels < (int)(channels + firstChannel) ) { close( fd ); errorStream_ << "RtApiOss::probeDeviceOpen: error setting channel parameters on device (" << ainfo.name << ")."; errorText_ = errorStream_.str(); return FAILURE; } stream_.nDeviceChannels[mode] = deviceChannels; // Get the data format mask int mask; result = ioctl( fd, SNDCTL_DSP_GETFMTS, &mask ); if ( result == -1 ) { close( fd ); errorStream_ << "RtApiOss::probeDeviceOpen: error getting device (" << ainfo.name << ") data formats."; errorText_ = errorStream_.str(); return FAILURE; } // Determine how to set the device format. 
stream_.userFormat = format; int deviceFormat = -1; stream_.doByteSwap[mode] = false; if ( format == RTAUDIO_SINT8 ) { if ( mask & AFMT_S8 ) { deviceFormat = AFMT_S8; stream_.deviceFormat[mode] = RTAUDIO_SINT8; } } else if ( format == RTAUDIO_SINT16 ) { if ( mask & AFMT_S16_NE ) { deviceFormat = AFMT_S16_NE; stream_.deviceFormat[mode] = RTAUDIO_SINT16; } else if ( mask & AFMT_S16_OE ) { deviceFormat = AFMT_S16_OE; stream_.deviceFormat[mode] = RTAUDIO_SINT16; stream_.doByteSwap[mode] = true; } } else if ( format == RTAUDIO_SINT24 ) { if ( mask & AFMT_S24_NE ) { deviceFormat = AFMT_S24_NE; stream_.deviceFormat[mode] = RTAUDIO_SINT24; } else if ( mask & AFMT_S24_OE ) { deviceFormat = AFMT_S24_OE; stream_.deviceFormat[mode] = RTAUDIO_SINT24; stream_.doByteSwap[mode] = true; } } else if ( format == RTAUDIO_SINT32 ) { if ( mask & AFMT_S32_NE ) { deviceFormat = AFMT_S32_NE; stream_.deviceFormat[mode] = RTAUDIO_SINT32; } else if ( mask & AFMT_S32_OE ) { deviceFormat = AFMT_S32_OE; stream_.deviceFormat[mode] = RTAUDIO_SINT32; stream_.doByteSwap[mode] = true; } } if ( deviceFormat == -1 ) { // The user requested format is not natively supported by the device. 
if ( mask & AFMT_S16_NE ) { deviceFormat = AFMT_S16_NE; stream_.deviceFormat[mode] = RTAUDIO_SINT16; } else if ( mask & AFMT_S32_NE ) { deviceFormat = AFMT_S32_NE; stream_.deviceFormat[mode] = RTAUDIO_SINT32; } else if ( mask & AFMT_S24_NE ) { deviceFormat = AFMT_S24_NE; stream_.deviceFormat[mode] = RTAUDIO_SINT24; } else if ( mask & AFMT_S16_OE ) { deviceFormat = AFMT_S16_OE; stream_.deviceFormat[mode] = RTAUDIO_SINT16; stream_.doByteSwap[mode] = true; } else if ( mask & AFMT_S32_OE ) { deviceFormat = AFMT_S32_OE; stream_.deviceFormat[mode] = RTAUDIO_SINT32; stream_.doByteSwap[mode] = true; } else if ( mask & AFMT_S24_OE ) { deviceFormat = AFMT_S24_OE; stream_.deviceFormat[mode] = RTAUDIO_SINT24; stream_.doByteSwap[mode] = true; } else if ( mask & AFMT_S8) { deviceFormat = AFMT_S8; stream_.deviceFormat[mode] = RTAUDIO_SINT8; } } if ( stream_.deviceFormat[mode] == 0 ) { // This really shouldn't happen ... close( fd ); errorStream_ << "RtApiOss::probeDeviceOpen: device (" << ainfo.name << ") data format not supported by RtAudio."; errorText_ = errorStream_.str(); return FAILURE; } // Set the data format. int temp = deviceFormat; result = ioctl( fd, SNDCTL_DSP_SETFMT, &deviceFormat ); if ( result == -1 || deviceFormat != temp ) { close( fd ); errorStream_ << "RtApiOss::probeDeviceOpen: error setting data format on device (" << ainfo.name << ")."; errorText_ = errorStream_.str(); return FAILURE; } // Attempt to set the buffer size. According to OSS, the minimum // number of buffers is two. The supposed minimum buffer size is 16 // bytes, so that will be our lower bound. The argument to this // call is in the form 0xMMMMSSSS (hex), where the buffer size (in // bytes) is given as 2^SSSS and the number of buffers as 2^MMMM. // We'll check the actual value used near the end of the setup // procedure. 
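The SNDCTL_DSP_SETFRAGMENT argument described in the comment above packs the fragment count into the high 16 bits and log2 of the fragment size into the low 16 bits. A small self-contained sketch of that packing (the helper name is hypothetical; this sketch uses an integer log2 where the implementation below uses a log10 ratio):

```cpp
#include <cassert>

// Packs an OSS fragment request as 0xMMMMSSSS: fragment count in the high
// 16 bits, log2(fragment size in bytes) in the low 16 bits.
inline int ossFragmentArg( int buffers, int fragmentBytes ) {
  int sizeSelector = 0;
  while ( ( 1 << ( sizeSelector + 1 ) ) <= fragmentBytes )
    ++sizeSelector; // integer log2, rounding down
  return ( buffers << 16 ) + sizeSelector;
}
```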
int ossBufferBytes = *bufferSize * formatBytes( stream_.deviceFormat[mode] ) * deviceChannels; if ( ossBufferBytes < 16 ) ossBufferBytes = 16; int buffers = 0; if ( options ) buffers = options->numberOfBuffers; if ( options && options->flags & RTAUDIO_MINIMIZE_LATENCY ) buffers = 2; if ( buffers < 2 ) buffers = 3; temp = ((int) buffers << 16) + (int)( log10( (double)ossBufferBytes ) / log10( 2.0 ) ); result = ioctl( fd, SNDCTL_DSP_SETFRAGMENT, &temp ); if ( result == -1 ) { close( fd ); errorStream_ << "RtApiOss::probeDeviceOpen: error setting buffer size on device (" << ainfo.name << ")."; errorText_ = errorStream_.str(); return FAILURE; } stream_.nBuffers = buffers; // Save buffer size (in sample frames). *bufferSize = ossBufferBytes / ( formatBytes(stream_.deviceFormat[mode]) * deviceChannels ); stream_.bufferSize = *bufferSize; // Set the sample rate. int srate = sampleRate; result = ioctl( fd, SNDCTL_DSP_SPEED, &srate ); if ( result == -1 ) { close( fd ); errorStream_ << "RtApiOss::probeDeviceOpen: error setting sample rate (" << sampleRate << ") on device (" << ainfo.name << ")."; errorText_ = errorStream_.str(); return FAILURE; } // Verify the sample rate setup worked. if ( abs( srate - (int)sampleRate ) > 100 ) { close( fd ); errorStream_ << "RtApiOss::probeDeviceOpen: device (" << ainfo.name << ") does not support sample rate (" << sampleRate << ")."; errorText_ = errorStream_.str(); return FAILURE; } stream_.sampleRate = sampleRate; if ( mode == INPUT && stream_.mode == OUTPUT && stream_.device[0] == device) { // We're doing duplex setup here. stream_.deviceFormat[0] = stream_.deviceFormat[1]; stream_.nDeviceChannels[0] = deviceChannels; } // Set interleaving parameters. 
stream_.userInterleaved = true; stream_.deviceInterleaved[mode] = true; if ( options && options->flags & RTAUDIO_NONINTERLEAVED ) stream_.userInterleaved = false; // Set flags for buffer conversion stream_.doConvertBuffer[mode] = false; if ( stream_.userFormat != stream_.deviceFormat[mode] ) stream_.doConvertBuffer[mode] = true; if ( stream_.nUserChannels[mode] < stream_.nDeviceChannels[mode] ) stream_.doConvertBuffer[mode] = true; if ( stream_.userInterleaved != stream_.deviceInterleaved[mode] && stream_.nUserChannels[mode] > 1 ) stream_.doConvertBuffer[mode] = true; // Allocate the stream handles if necessary and then save. if ( stream_.apiHandle == 0 ) { try { handle = new OssHandle; } catch ( std::bad_alloc& ) { errorText_ = "RtApiOss::probeDeviceOpen: error allocating OssHandle memory."; goto error; } if ( pthread_cond_init( &handle->runnable, NULL ) ) { errorText_ = "RtApiOss::probeDeviceOpen: error initializing pthread condition variable."; goto error; } stream_.apiHandle = (void *) handle; } else { handle = (OssHandle *) stream_.apiHandle; } handle->id[mode] = fd; // Allocate necessary internal buffers. 
unsigned long bufferBytes; bufferBytes = stream_.nUserChannels[mode] * *bufferSize * formatBytes( stream_.userFormat ); stream_.userBuffer[mode] = (char *) calloc( bufferBytes, 1 ); if ( stream_.userBuffer[mode] == NULL ) { errorText_ = "RtApiOss::probeDeviceOpen: error allocating user buffer memory."; goto error; } if ( stream_.doConvertBuffer[mode] ) { bool makeBuffer = true; bufferBytes = stream_.nDeviceChannels[mode] * formatBytes( stream_.deviceFormat[mode] ); if ( mode == INPUT ) { if ( stream_.mode == OUTPUT && stream_.deviceBuffer ) { unsigned long bytesOut = stream_.nDeviceChannels[0] * formatBytes( stream_.deviceFormat[0] ); if ( bufferBytes <= bytesOut ) makeBuffer = false; } } if ( makeBuffer ) { bufferBytes *= *bufferSize; if ( stream_.deviceBuffer ) free( stream_.deviceBuffer ); stream_.deviceBuffer = (char *) calloc( bufferBytes, 1 ); if ( stream_.deviceBuffer == NULL ) { errorText_ = "RtApiOss::probeDeviceOpen: error allocating device buffer memory."; goto error; } } } stream_.device[mode] = device; stream_.state = STREAM_STOPPED; // Setup the buffer conversion information structure. if ( stream_.doConvertBuffer[mode] ) setConvertInfo( mode, firstChannel ); // Setup thread if necessary. if ( stream_.mode == OUTPUT && mode == INPUT ) { // We had already set up an output stream. stream_.mode = DUPLEX; if ( stream_.device[0] == device ) handle->id[0] = fd; } else { stream_.mode = mode; // Setup callback thread. stream_.callbackInfo.object = (void *) this; // Set the thread attributes for joinable and realtime scheduling // priority. The higher priority will only take affect if the // program is run as root or suid. pthread_attr_t attr; pthread_attr_init( &attr ); pthread_attr_setdetachstate( &attr, PTHREAD_CREATE_JOINABLE ); #ifdef SCHED_RR // Undefined with some OSes (e.g. 
NetBSD 1.6.x with GNU Pthread)
    if ( options && options->flags & RTAUDIO_SCHEDULE_REALTIME ) {
      stream_.callbackInfo.doRealtime = true;
      struct sched_param param;
      int priority = options->priority;
      int min = sched_get_priority_min( SCHED_RR );
      int max = sched_get_priority_max( SCHED_RR );
      if ( priority < min ) priority = min;
      else if ( priority > max ) priority = max;
      param.sched_priority = priority;

      // Set the policy BEFORE the priority. Otherwise it fails.
      pthread_attr_setschedpolicy(&attr, SCHED_RR);
      pthread_attr_setscope (&attr, PTHREAD_SCOPE_SYSTEM);
      // This is definitely required. Otherwise it fails.
      pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
      pthread_attr_setschedparam(&attr, &param);
    }
    else
      pthread_attr_setschedpolicy( &attr, SCHED_OTHER );
#else
    pthread_attr_setschedpolicy( &attr, SCHED_OTHER );
#endif

    stream_.callbackInfo.isRunning = true;
    result = pthread_create( &stream_.callbackInfo.thread, &attr, ossCallbackHandler, &stream_.callbackInfo );
    pthread_attr_destroy( &attr );
    if ( result ) {
      // Failed. Try instead with default attributes.
result = pthread_create( &stream_.callbackInfo.thread, NULL, ossCallbackHandler, &stream_.callbackInfo ); if ( result ) { stream_.callbackInfo.isRunning = false; errorText_ = "RtApiOss::error creating callback thread!"; goto error; } } } return SUCCESS; error: if ( handle ) { pthread_cond_destroy( &handle->runnable ); if ( handle->id[0] ) close( handle->id[0] ); if ( handle->id[1] ) close( handle->id[1] ); delete handle; stream_.apiHandle = 0; } for ( int i=0; i<2; i++ ) { if ( stream_.userBuffer[i] ) { free( stream_.userBuffer[i] ); stream_.userBuffer[i] = 0; } } if ( stream_.deviceBuffer ) { free( stream_.deviceBuffer ); stream_.deviceBuffer = 0; } stream_.state = STREAM_CLOSED; return FAILURE; } void RtApiOss :: closeStream() { if ( stream_.state == STREAM_CLOSED ) { errorText_ = "RtApiOss::closeStream(): no open stream to close!"; error( RtAudioError::WARNING ); return; } OssHandle *handle = (OssHandle *) stream_.apiHandle; stream_.callbackInfo.isRunning = false; MUTEX_LOCK( &stream_.mutex ); if ( stream_.state == STREAM_STOPPED ) pthread_cond_signal( &handle->runnable ); MUTEX_UNLOCK( &stream_.mutex ); pthread_join( stream_.callbackInfo.thread, NULL ); if ( stream_.state == STREAM_RUNNING ) { if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) ioctl( handle->id[0], SNDCTL_DSP_HALT, 0 ); else ioctl( handle->id[1], SNDCTL_DSP_HALT, 0 ); stream_.state = STREAM_STOPPED; } if ( handle ) { pthread_cond_destroy( &handle->runnable ); if ( handle->id[0] ) close( handle->id[0] ); if ( handle->id[1] ) close( handle->id[1] ); delete handle; stream_.apiHandle = 0; } for ( int i=0; i<2; i++ ) { if ( stream_.userBuffer[i] ) { free( stream_.userBuffer[i] ); stream_.userBuffer[i] = 0; } } if ( stream_.deviceBuffer ) { free( stream_.deviceBuffer ); stream_.deviceBuffer = 0; } stream_.mode = UNINITIALIZED; stream_.state = STREAM_CLOSED; } void RtApiOss :: startStream() { verifyStream(); if ( stream_.state == STREAM_RUNNING ) { errorText_ = "RtApiOss::startStream(): the 
stream is already running!";
    error( RtAudioError::WARNING );
    return;
  }

  MUTEX_LOCK( &stream_.mutex );
#if defined( HAVE_GETTIMEOFDAY )
  gettimeofday( &stream_.lastTickTimestamp, NULL );
#endif
  stream_.state = STREAM_RUNNING;

  // No need to do anything else here ... OSS automatically starts
  // when fed samples.

  MUTEX_UNLOCK( &stream_.mutex );

  OssHandle *handle = (OssHandle *) stream_.apiHandle;
  pthread_cond_signal( &handle->runnable );
}

void RtApiOss :: stopStream()
{
  verifyStream();
  if ( stream_.state == STREAM_STOPPED ) {
    errorText_ = "RtApiOss::stopStream(): the stream is already stopped!";
    error( RtAudioError::WARNING );
    return;
  }

  MUTEX_LOCK( &stream_.mutex );

  // The state might change while waiting on a mutex.
  if ( stream_.state == STREAM_STOPPED ) {
    MUTEX_UNLOCK( &stream_.mutex );
    return;
  }

  int result = 0;
  OssHandle *handle = (OssHandle *) stream_.apiHandle;
  if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) {

    // Flush the output with zeros a few times.
    char *buffer;
    int samples;
    RtAudioFormat format;

    if ( stream_.doConvertBuffer[0] ) {
      buffer = stream_.deviceBuffer;
      samples = stream_.bufferSize * stream_.nDeviceChannels[0];
      format = stream_.deviceFormat[0];
    }
    else {
      buffer = stream_.userBuffer[0];
      samples = stream_.bufferSize * stream_.nUserChannels[0];
      format = stream_.userFormat;
    }

    memset( buffer, 0, samples * formatBytes(format) );
    for ( unsigned int i=0; i<stream_.nBuffers+1; i++ ) {
      result = write( handle->id[0], buffer, samples * formatBytes(format) );
      if ( result == -1 ) {
        errorText_ = "RtApiOss::stopStream: audio write error.";
        error( RtAudioError::WARNING );
      }
    }

    result = ioctl( handle->id[0], SNDCTL_DSP_HALT, 0 );
    if ( result == -1 ) {
      errorStream_ << "RtApiOss::stopStream: system error stopping callback procedure on device (" << stream_.device[0] << ").";
      errorText_ = errorStream_.str();
      goto unlock;
    }
    handle->triggered = false;
  }

  if ( stream_.mode == INPUT || ( stream_.mode == DUPLEX && handle->id[0] != handle->id[1] ) ) {
    result = ioctl( handle->id[1], SNDCTL_DSP_HALT, 0 );
    if ( result == -1 ) {
errorStream_ << "RtApiOss::stopStream: system error stopping input callback procedure on device (" << stream_.device[0] << ")."; errorText_ = errorStream_.str(); goto unlock; } } unlock: stream_.state = STREAM_STOPPED; MUTEX_UNLOCK( &stream_.mutex ); if ( result != -1 ) return; error( RtAudioError::SYSTEM_ERROR ); } void RtApiOss :: abortStream() { verifyStream(); if ( stream_.state == STREAM_STOPPED ) { errorText_ = "RtApiOss::abortStream(): the stream is already stopped!"; error( RtAudioError::WARNING ); return; } MUTEX_LOCK( &stream_.mutex ); // The state might change while waiting on a mutex. if ( stream_.state == STREAM_STOPPED ) { MUTEX_UNLOCK( &stream_.mutex ); return; } int result = 0; OssHandle *handle = (OssHandle *) stream_.apiHandle; if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) { result = ioctl( handle->id[0], SNDCTL_DSP_HALT, 0 ); if ( result == -1 ) { errorStream_ << "RtApiOss::abortStream: system error stopping callback procedure on device (" << stream_.device[0] << ")."; errorText_ = errorStream_.str(); goto unlock; } handle->triggered = false; } if ( stream_.mode == INPUT || ( stream_.mode == DUPLEX && handle->id[0] != handle->id[1] ) ) { result = ioctl( handle->id[1], SNDCTL_DSP_HALT, 0 ); if ( result == -1 ) { errorStream_ << "RtApiOss::abortStream: system error stopping input callback procedure on device (" << stream_.device[0] << ")."; errorText_ = errorStream_.str(); goto unlock; } } unlock: stream_.state = STREAM_STOPPED; MUTEX_UNLOCK( &stream_.mutex ); if ( result != -1 ) return; error( RtAudioError::SYSTEM_ERROR ); } void RtApiOss :: callbackEvent() { OssHandle *handle = (OssHandle *) stream_.apiHandle; if ( stream_.state == STREAM_STOPPED ) { MUTEX_LOCK( &stream_.mutex ); pthread_cond_wait( &handle->runnable, &stream_.mutex ); if ( stream_.state != STREAM_RUNNING ) { MUTEX_UNLOCK( &stream_.mutex ); return; } MUTEX_UNLOCK( &stream_.mutex ); } if ( stream_.state == STREAM_CLOSED ) { errorText_ = "RtApiOss::callbackEvent(): the 
stream is closed ... this shouldn't happen!"; error( RtAudioError::WARNING ); return; } // Invoke user callback to get fresh output data. int doStopStream = 0; RtAudioCallback callback = (RtAudioCallback) stream_.callbackInfo.callback; double streamTime = getStreamTime(); RtAudioStreamStatus status = 0; if ( stream_.mode != INPUT && handle->xrun[0] == true ) { status |= RTAUDIO_OUTPUT_UNDERFLOW; handle->xrun[0] = false; } if ( stream_.mode != OUTPUT && handle->xrun[1] == true ) { status |= RTAUDIO_INPUT_OVERFLOW; handle->xrun[1] = false; } doStopStream = callback( stream_.userBuffer[0], stream_.userBuffer[1], stream_.bufferSize, streamTime, status, stream_.callbackInfo.userData ); if ( doStopStream == 2 ) { this->abortStream(); return; } MUTEX_LOCK( &stream_.mutex ); // The state might change while waiting on a mutex. if ( stream_.state == STREAM_STOPPED ) goto unlock; int result; char *buffer; int samples; RtAudioFormat format; if ( stream_.mode == OUTPUT || stream_.mode == DUPLEX ) { // Setup parameters and do buffer conversion if necessary. if ( stream_.doConvertBuffer[0] ) { buffer = stream_.deviceBuffer; convertBuffer( buffer, stream_.userBuffer[0], stream_.convertInfo[0] ); samples = stream_.bufferSize * stream_.nDeviceChannels[0]; format = stream_.deviceFormat[0]; } else { buffer = stream_.userBuffer[0]; samples = stream_.bufferSize * stream_.nUserChannels[0]; format = stream_.userFormat; } // Do byte swapping if necessary. if ( stream_.doByteSwap[0] ) byteSwapBuffer( buffer, samples, format ); if ( stream_.mode == DUPLEX && handle->triggered == false ) { int trig = 0; ioctl( handle->id[0], SNDCTL_DSP_SETTRIGGER, &trig ); result = write( handle->id[0], buffer, samples * formatBytes(format) ); trig = PCM_ENABLE_INPUT|PCM_ENABLE_OUTPUT; ioctl( handle->id[0], SNDCTL_DSP_SETTRIGGER, &trig ); handle->triggered = true; } else // Write samples to device. 
result = write( handle->id[0], buffer, samples * formatBytes(format) ); if ( result == -1 ) { // We'll assume this is an underrun, though there isn't a // specific means for determining that. handle->xrun[0] = true; errorText_ = "RtApiOss::callbackEvent: audio write error."; error( RtAudioError::WARNING ); // Continue on to input section. } } if ( stream_.mode == INPUT || stream_.mode == DUPLEX ) { // Setup parameters. if ( stream_.doConvertBuffer[1] ) { buffer = stream_.deviceBuffer; samples = stream_.bufferSize * stream_.nDeviceChannels[1]; format = stream_.deviceFormat[1]; } else { buffer = stream_.userBuffer[1]; samples = stream_.bufferSize * stream_.nUserChannels[1]; format = stream_.userFormat; } // Read samples from device. result = read( handle->id[1], buffer, samples * formatBytes(format) ); if ( result == -1 ) { // We'll assume this is an overrun, though there isn't a // specific means for determining that. handle->xrun[1] = true; errorText_ = "RtApiOss::callbackEvent: audio read error."; error( RtAudioError::WARNING ); goto unlock; } // Do byte swapping if necessary. if ( stream_.doByteSwap[1] ) byteSwapBuffer( buffer, samples, format ); // Do buffer conversion if necessary. if ( stream_.doConvertBuffer[1] ) convertBuffer( stream_.userBuffer[1], stream_.deviceBuffer, stream_.convertInfo[1] ); } unlock: MUTEX_UNLOCK( &stream_.mutex ); RtApi::tickStreamTime(); if ( doStopStream == 1 ) this->stopStream(); } static void *ossCallbackHandler( void *ptr ) { CallbackInfo *info = (CallbackInfo *) ptr; RtApiOss *object = (RtApiOss *) info->object; bool *isRunning = &info->isRunning; #ifdef SCHED_RR // Undefined with some OSes (e.g. NetBSD 1.6.x with GNU Pthread) if (info->doRealtime) { std::cerr << "RtAudio oss: " << (sched_getscheduler(0) == SCHED_RR ? 
"" : "_NOT_ ") << "running realtime scheduling" << std::endl; } #endif while ( *isRunning == true ) { pthread_testcancel(); object->callbackEvent(); } pthread_exit( NULL ); } //******************** End of __LINUX_OSS__ *********************// #endif // *************************************************** // // // Protected common (OS-independent) RtAudio methods. // // *************************************************** // // This method can be modified to control the behavior of error // message printing. void RtApi :: error( RtAudioError::Type type ) { errorStream_.str(""); // clear the ostringstream RtAudioErrorCallback errorCallback = (RtAudioErrorCallback) stream_.callbackInfo.errorCallback; if ( errorCallback ) { // abortStream() can generate new error messages. Ignore them. Just keep original one. if ( firstErrorOccurred_ ) return; firstErrorOccurred_ = true; const std::string errorMessage = errorText_; if ( type != RtAudioError::WARNING && stream_.state != STREAM_STOPPED) { stream_.callbackInfo.isRunning = false; // exit from the thread abortStream(); } errorCallback( type, errorMessage ); firstErrorOccurred_ = false; return; } if ( type == RtAudioError::WARNING && showWarnings_ == true ) std::cerr << '\n' << errorText_ << "\n\n"; else if ( type != RtAudioError::WARNING ) throw( RtAudioError( errorText_, type ) ); } void RtApi :: verifyStream() { if ( stream_.state == STREAM_CLOSED ) { errorText_ = "RtApi:: a stream is not open!"; error( RtAudioError::INVALID_USE ); } } void RtApi :: clearStreamInfo() { stream_.mode = UNINITIALIZED; stream_.state = STREAM_CLOSED; stream_.sampleRate = 0; stream_.bufferSize = 0; stream_.nBuffers = 0; stream_.userFormat = 0; stream_.userInterleaved = true; stream_.streamTime = 0.0; stream_.apiHandle = 0; stream_.deviceBuffer = 0; stream_.callbackInfo.callback = 0; stream_.callbackInfo.userData = 0; stream_.callbackInfo.isRunning = false; stream_.callbackInfo.errorCallback = 0; for ( int i=0; i<2; i++ ) { stream_.device[i] = 
11111; stream_.doConvertBuffer[i] = false; stream_.deviceInterleaved[i] = true; stream_.doByteSwap[i] = false; stream_.nUserChannels[i] = 0; stream_.nDeviceChannels[i] = 0; stream_.channelOffset[i] = 0; stream_.deviceFormat[i] = 0; stream_.latency[i] = 0; stream_.userBuffer[i] = 0; stream_.convertInfo[i].channels = 0; stream_.convertInfo[i].inJump = 0; stream_.convertInfo[i].outJump = 0; stream_.convertInfo[i].inFormat = 0; stream_.convertInfo[i].outFormat = 0; stream_.convertInfo[i].inOffset.clear(); stream_.convertInfo[i].outOffset.clear(); } } unsigned int RtApi :: formatBytes( RtAudioFormat format ) { if ( format == RTAUDIO_SINT16 ) return 2; else if ( format == RTAUDIO_SINT32 || format == RTAUDIO_FLOAT32 ) return 4; else if ( format == RTAUDIO_FLOAT64 ) return 8; else if ( format == RTAUDIO_SINT24 ) return 3; else if ( format == RTAUDIO_SINT8 ) return 1; errorText_ = "RtApi::formatBytes: undefined format."; error( RtAudioError::WARNING ); return 0; } void RtApi :: setConvertInfo( StreamMode mode, unsigned int firstChannel ) { if ( mode == INPUT ) { // convert device to user buffer stream_.convertInfo[mode].inJump = stream_.nDeviceChannels[1]; stream_.convertInfo[mode].outJump = stream_.nUserChannels[1]; stream_.convertInfo[mode].inFormat = stream_.deviceFormat[1]; stream_.convertInfo[mode].outFormat = stream_.userFormat; } else { // convert user to device buffer stream_.convertInfo[mode].inJump = stream_.nUserChannels[0]; stream_.convertInfo[mode].outJump = stream_.nDeviceChannels[0]; stream_.convertInfo[mode].inFormat = stream_.userFormat; stream_.convertInfo[mode].outFormat = stream_.deviceFormat[0]; } if ( stream_.convertInfo[mode].inJump < stream_.convertInfo[mode].outJump ) stream_.convertInfo[mode].channels = stream_.convertInfo[mode].inJump; else stream_.convertInfo[mode].channels = stream_.convertInfo[mode].outJump; // Set up the interleave/deinterleave offsets. 
if ( stream_.deviceInterleaved[mode] != stream_.userInterleaved ) {
    if ( ( mode == OUTPUT && stream_.deviceInterleaved[mode] ) ||
         ( mode == INPUT && stream_.userInterleaved ) ) {
      for ( int k=0; k<stream_.convertInfo[mode].channels; k++ ) {
        stream_.convertInfo[mode].inOffset.push_back( k * stream_.bufferSize );
        stream_.convertInfo[mode].outOffset.push_back( k );
        stream_.convertInfo[mode].inJump = 1;
      }
    }
    else {
      for ( int k=0; k<stream_.convertInfo[mode].channels; k++ ) {
        stream_.convertInfo[mode].inOffset.push_back( k );
        stream_.convertInfo[mode].outOffset.push_back( k * stream_.bufferSize );
        stream_.convertInfo[mode].outJump = 1;
      }
    }
  }
  else { // no (de)interleaving
    if ( stream_.userInterleaved ) {
      for ( int k=0; k<stream_.convertInfo[mode].channels; k++ ) {
        stream_.convertInfo[mode].inOffset.push_back( k );
        stream_.convertInfo[mode].outOffset.push_back( k );
      }
    }
    else {
      for ( int k=0; k<stream_.convertInfo[mode].channels; k++ ) {
        stream_.convertInfo[mode].inOffset.push_back( k * stream_.bufferSize );
        stream_.convertInfo[mode].outOffset.push_back( k * stream_.bufferSize );
        stream_.convertInfo[mode].inJump = 1;
        stream_.convertInfo[mode].outJump = 1;
      }
    }
  }

  // Add channel offset.
  if ( firstChannel > 0 ) {
    if ( stream_.deviceInterleaved[mode] ) {
      if ( mode == OUTPUT ) {
        for ( int k=0; k<stream_.convertInfo[mode].channels; k++ )
          stream_.convertInfo[mode].outOffset[k] += firstChannel;
      }
      else {
        for ( int k=0; k<stream_.convertInfo[mode].channels; k++ )
          stream_.convertInfo[mode].inOffset[k] += firstChannel;
      }
    }
    else {
      if ( mode == OUTPUT ) {
        for ( int k=0; k<stream_.convertInfo[mode].channels; k++ )
          stream_.convertInfo[mode].outOffset[k] += ( firstChannel * stream_.bufferSize );
      }
      else {
        for ( int k=0; k<stream_.convertInfo[mode].channels; k++ )
          stream_.convertInfo[mode].inOffset[k] += ( firstChannel * stream_.bufferSize );
      }
    }
  }
}

void RtApi :: convertBuffer( char *outBuffer, char *inBuffer, ConvertInfo &info )
{
  // This function does format conversion, input/output channel compensation, and
  // data interleaving/deinterleaving.  24-bit integers are assumed to occupy
  // the lower three bytes of a 32-bit integer.

  // Clear our duplex device output buffer if there are more device outputs than user outputs.
  if ( outBuffer == stream_.deviceBuffer && stream_.mode == DUPLEX && info.outJump > info.inJump )
    memset( outBuffer, 0, stream_.bufferSize * info.outJump * formatBytes( info.outFormat ) );

  int j;
  if (info.outFormat == RTAUDIO_FLOAT64) {
    Float64 *out = (Float64 *)outBuffer;

    if (info.inFormat == RTAUDIO_SINT8) {
      signed char *in = (signed char *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float64) (in[info.inOffset[j]] * 0.0078125); // (1/128)
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT16) {
      Int16 *in = (Int16 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float64) (in[info.inOffset[j]] * 0.000030517578125); // (1/32768)
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT24) {
      Int24 *in = (Int24 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float64) (in[info.inOffset[j]].asInt() * 0.00000011920928955078125); // (1/8388608)
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT32) {
      Int32 *in = (Int32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float64) (in[info.inOffset[j]] * 0.000000000465661287308); // (1/2147483648)
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT32) {
      Float32 *in = (Float32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float64) in[info.inOffset[j]];
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT64) {
      // Channel compensation and/or (de)interleaving only.
      Float64 *in = (Float64 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = in[info.inOffset[j]];
        in += info.inJump; out += info.outJump;
      }
    }
  }
  else if (info.outFormat == RTAUDIO_FLOAT32) {
    Float32 *out = (Float32 *)outBuffer;

    if (info.inFormat == RTAUDIO_SINT8) {
      signed char *in = (signed char *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float32) (in[info.inOffset[j]] * 0.0078125); // (1/128)
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT16) {
      Int16 *in = (Int16 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float32) (in[info.inOffset[j]] * 0.000030517578125); // (1/32768)
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT24) {
      Int24 *in = (Int24 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float32) (in[info.inOffset[j]].asInt() * 0.00000011920928955078125); // (1/8388608)
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT32) {
      Int32 *in = (Int32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float32) (in[info.inOffset[j]] * 0.000000000465661287308); // (1/2147483648)
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT32) {
      // Channel compensation and/or (de)interleaving only.
      Float32 *in = (Float32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = in[info.inOffset[j]];
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT64) {
      Float64 *in = (Float64 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Float32) in[info.inOffset[j]];
        in += info.inJump; out += info.outJump;
      }
    }
  }
  else if (info.outFormat == RTAUDIO_SINT32) {
    Int32 *out = (Int32 *)outBuffer;

    if (info.inFormat == RTAUDIO_SINT8) {
      signed char *in = (signed char *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]] << 24);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT16) {
      Int16 *in = (Int16 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]] << 16);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT24) {
      Int24 *in = (Int24 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]].asInt() << 8);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT32) {
      // Channel compensation and/or (de)interleaving only.
      Int32 *in = (Int32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = in[info.inOffset[j]];
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT32) {
      Float32 *in = (Float32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]] * 2147483647.5 - 0.5);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT64) {
      Float64 *in = (Float64 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]] * 2147483647.5 - 0.5);
        in += info.inJump; out += info.outJump;
      }
    }
  }
  else if (info.outFormat == RTAUDIO_SINT24) {
    Int24 *out = (Int24 *)outBuffer;

    if (info.inFormat == RTAUDIO_SINT8) {
      signed char *in = (signed char *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]] << 16);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT16) {
      Int16 *in = (Int16 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]] << 8);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT24) {
      // Channel compensation and/or (de)interleaving only.
      Int24 *in = (Int24 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = in[info.inOffset[j]];
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT32) {
      Int32 *in = (Int32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++) {
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]] >> 8);
          //out[info.outOffset[j]] >>= 8;
        }
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT32) {
      Float32 *in = (Float32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]] * 8388607.5 - 0.5);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT64) {
      Float64 *in = (Float64 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int32) (in[info.inOffset[j]] * 8388607.5 - 0.5);
        in += info.inJump; out += info.outJump;
      }
    }
  }
  else if (info.outFormat == RTAUDIO_SINT16) {
    Int16 *out = (Int16 *)outBuffer;

    if (info.inFormat == RTAUDIO_SINT8) {
      signed char *in = (signed char *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int16) (in[info.inOffset[j]] << 8);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT16) {
      // Channel compensation and/or (de)interleaving only.
      Int16 *in = (Int16 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = in[info.inOffset[j]];
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT24) {
      Int24 *in = (Int24 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int16) (in[info.inOffset[j]].asInt() >> 8);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT32) {
      Int32 *in = (Int32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int16) ((in[info.inOffset[j]] >> 16) & 0x0000ffff);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT32) {
      Float32 *in = (Float32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int16) (in[info.inOffset[j]] * 32767.5 - 0.5);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT64) {
      Float64 *in = (Float64 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (Int16) (in[info.inOffset[j]] * 32767.5 - 0.5);
        in += info.inJump; out += info.outJump;
      }
    }
  }
  else if (info.outFormat == RTAUDIO_SINT8) {
    signed char *out = (signed char *)outBuffer;

    if (info.inFormat == RTAUDIO_SINT8) {
      // Channel compensation and/or (de)interleaving only.
      signed char *in = (signed char *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = in[info.inOffset[j]];
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT16) {
      Int16 *in = (Int16 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (signed char) ((in[info.inOffset[j]] >> 8) & 0x00ff);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT24) {
      Int24 *in = (Int24 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (signed char) (in[info.inOffset[j]].asInt() >> 16);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_SINT32) {
      Int32 *in = (Int32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (signed char) ((in[info.inOffset[j]] >> 24) & 0x000000ff);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT32) {
      Float32 *in = (Float32 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (signed char) (in[info.inOffset[j]] * 127.5 - 0.5);
        in += info.inJump; out += info.outJump;
      }
    }
    else if (info.inFormat == RTAUDIO_FLOAT64) {
      Float64 *in = (Float64 *)inBuffer;
      for (unsigned int i=0; i<stream_.bufferSize; i++) {
        for (j=0; j<info.channels; j++)
          out[info.outOffset[j]] = (signed char) (in[info.inOffset[j]] * 127.5 - 0.5);
        in += info.inJump; out += info.outJump;
      }
    }
  }
}

//static inline uint16_t bswap_16(uint16_t x) { return (x>>8) | (x<<8); }
//static inline uint32_t bswap_32(uint32_t x) { return (bswap_16(x&0xffff)<<16) | (bswap_16(x>>16)); }
//static inline uint64_t bswap_64(uint64_t x) { return (((unsigned long long)bswap_32(x&0xffffffffull))<<32) | (bswap_32(x>>32)); }

void RtApi :: byteSwapBuffer( char *buffer, unsigned int samples, RtAudioFormat format )
{
  char val;
  char *ptr;

  ptr = buffer;
  if ( format == RTAUDIO_SINT16 ) {
    for ( unsigned int i=0; i<samples; i++ ) {
      // Swap 1st and 2nd bytes.
      val = *(ptr); *(ptr) = *(ptr+1); *(ptr+1) = val;

      // Increment 2 bytes.
      ptr += 2;
    }
  }
  else if ( format == RTAUDIO_SINT32 || format == RTAUDIO_FLOAT32 ) {
    for ( unsigned int i=0; i<samples; i++ ) {
      // Swap 1st and 4th bytes.
      val = *(ptr); *(ptr) = *(ptr+3); *(ptr+3) = val;

      // Swap 2nd and 3rd bytes.
      ptr += 1;
      val = *(ptr); *(ptr) = *(ptr+1); *(ptr+1) = val;

      // Increment 3 more bytes.
      ptr += 3;
    }
  }
  else if ( format == RTAUDIO_SINT24 ) {
    for ( unsigned int i=0; i<samples; i++ ) {
      // Swap 1st and 3rd bytes.
      val = *(ptr); *(ptr) = *(ptr+2); *(ptr+2) = val;

      // Increment 2 more bytes.
      ptr += 2;
    }
  }
  else if ( format == RTAUDIO_FLOAT64 ) {
    for ( unsigned int i=0; i<samples; i++ ) {
      // Swap 1st and 8th bytes.
      val = *(ptr); *(ptr) = *(ptr+7); *(ptr+7) = val;

      // Swap 2nd and 7th bytes.
      ptr += 1;
      val = *(ptr); *(ptr) = *(ptr+5); *(ptr+5) = val;

      // Swap 3rd and 6th bytes.
      ptr += 1;
      val = *(ptr); *(ptr) = *(ptr+3); *(ptr+3) = val;

      // Swap 4th and 5th bytes.
      ptr += 1;
      val = *(ptr); *(ptr) = *(ptr+1); *(ptr+1) = val;

      // Increment 5 more bytes.
      ptr += 5;
    }
  }
}

#ifndef __RTAUDIO_H
#define __RTAUDIO_H

#if defined _WIN32 || defined __CYGWIN__
  #if defined(RTAUDIO_EXPORT)
    #define RTAUDIO_DLL_PUBLIC __declspec(dllexport)
  #else
    #define RTAUDIO_DLL_PUBLIC
  #endif
#else
  #if __GNUC__ >= 4
    #define RTAUDIO_DLL_PUBLIC __attribute__( (visibility( "default" )) )
  #else
    #define RTAUDIO_DLL_PUBLIC
  #endif
#endif

#include <string>
#include <vector>
#include <stdexcept>
#include <iostream>

/*! \typedef typedef unsigned long RtAudioFormat;
    \brief RtAudio data format type.

    Support for signed integers and floats.
    Audio data fed to/from an RtAudio stream is assumed to ALWAYS be in host byte order.  The internal routines will automatically take care of any necessary byte-swapping between the host format and the soundcard.  Thus, endianness is not a concern in the following format definitions.

    - \e RTAUDIO_SINT8:   8-bit signed integer.
    - \e RTAUDIO_SINT16:  16-bit signed integer.
    - \e RTAUDIO_SINT24:  24-bit signed integer.
    - \e RTAUDIO_SINT32:  32-bit signed integer.
    - \e RTAUDIO_FLOAT32: Normalized between plus/minus 1.0.
    - \e RTAUDIO_FLOAT64: Normalized between plus/minus 1.0.
*/
typedef unsigned long RtAudioFormat;
static const RtAudioFormat RTAUDIO_SINT8 = 0x1;    // 8-bit signed integer.
static const RtAudioFormat RTAUDIO_SINT16 = 0x2;   // 16-bit signed integer.
static const RtAudioFormat RTAUDIO_SINT24 = 0x4;   // 24-bit signed integer.
static const RtAudioFormat RTAUDIO_SINT32 = 0x8;   // 32-bit signed integer.
static const RtAudioFormat RTAUDIO_FLOAT32 = 0x10; // Normalized between plus/minus 1.0.
static const RtAudioFormat RTAUDIO_FLOAT64 = 0x20; // Normalized between plus/minus 1.0.

/*! \typedef typedef unsigned int RtAudioStreamFlags;
    \brief RtAudio stream option flags.

    The following flags can be OR'ed together to allow a client to make changes to the default stream behavior:

    - \e RTAUDIO_NONINTERLEAVED:    Use non-interleaved buffers (default = interleaved).
    - \e RTAUDIO_MINIMIZE_LATENCY:  Attempt to set stream parameters for lowest possible latency.
    - \e RTAUDIO_HOG_DEVICE:        Attempt to grab device for exclusive use.
    - \e RTAUDIO_SCHEDULE_REALTIME: Attempt to select realtime scheduling for callback thread.
    - \e RTAUDIO_ALSA_USE_DEFAULT:  Use the "default" PCM device (ALSA only).
    - \e RTAUDIO_JACK_DONT_CONNECT: Do not automatically connect ports (JACK only).

    By default, RtAudio streams pass and receive audio data from the client in an interleaved format.  By passing the RTAUDIO_NONINTERLEAVED flag to the openStream() function, audio data will instead be presented in non-interleaved buffers.
    In this case, each buffer argument in the RtAudioCallback function will point to a single array of data, with \c nFrames samples for each channel concatenated back-to-back.  For example, the first sample of data for the second channel would be located at index \c nFrames (assuming the \c buffer pointer was recast to the correct data type for the stream).

    Certain audio APIs offer a number of parameters that influence the I/O latency of a stream.  By default, RtAudio will attempt to set these parameters internally for robust (glitch-free) performance (though some APIs, like Windows DirectSound, make this difficult).  By passing the RTAUDIO_MINIMIZE_LATENCY flag to the openStream() function, internal stream settings will be influenced in an attempt to minimize stream latency, though possibly at the expense of stream performance.

    If the RTAUDIO_HOG_DEVICE flag is set, RtAudio will attempt to open the input and/or output stream device(s) for exclusive use.  Note that this is not possible with all supported audio APIs.

    If the RTAUDIO_SCHEDULE_REALTIME flag is set, RtAudio will attempt to select realtime scheduling (round-robin) for the callback thread.

    If the RTAUDIO_ALSA_USE_DEFAULT flag is set, RtAudio will attempt to open the "default" PCM device when using the ALSA API.  Note that this will override any specified input or output device id.

    If the RTAUDIO_JACK_DONT_CONNECT flag is set, RtAudio will not attempt to automatically connect the ports of the client to the audio device.
*/
typedef unsigned int RtAudioStreamFlags;
static const RtAudioStreamFlags RTAUDIO_NONINTERLEAVED = 0x1;   // Use non-interleaved buffers (default = interleaved).
static const RtAudioStreamFlags RTAUDIO_MINIMIZE_LATENCY = 0x2; // Attempt to set stream parameters for lowest possible latency.
static const RtAudioStreamFlags RTAUDIO_HOG_DEVICE = 0x4;       // Attempt to grab device and prevent use by others.
static const RtAudioStreamFlags RTAUDIO_SCHEDULE_REALTIME = 0x8; // Try to select realtime scheduling for callback thread.
static const RtAudioStreamFlags RTAUDIO_ALSA_USE_DEFAULT = 0x10; // Use the "default" PCM device (ALSA only).
static const RtAudioStreamFlags RTAUDIO_JACK_DONT_CONNECT = 0x20; // Do not automatically connect ports (JACK only).

/*! \typedef typedef unsigned int RtAudioStreamStatus;
    \brief RtAudio stream status (over- or underflow) flags.

    Notification of a stream over- or underflow is indicated by a non-zero stream \c status argument in the RtAudioCallback function.  The stream status can be one of the following two options, depending on whether the stream is open for output and/or input:

    - \e RTAUDIO_INPUT_OVERFLOW:   Input data was discarded because of an overflow condition at the driver.
    - \e RTAUDIO_OUTPUT_UNDERFLOW: The output buffer ran low, likely producing a break in the output sound.
*/
typedef unsigned int RtAudioStreamStatus;
static const RtAudioStreamStatus RTAUDIO_INPUT_OVERFLOW = 0x1;    // Input data was discarded because of an overflow condition at the driver.
static const RtAudioStreamStatus RTAUDIO_OUTPUT_UNDERFLOW = 0x2;  // The output buffer ran low, likely causing a gap in the output sound.

//! RtAudio callback function prototype.
/*!
   All RtAudio clients must create a function of type RtAudioCallback to read and/or write data from/to the audio stream.  When the underlying audio system is ready for new input or output data, this function will be invoked.

   \param outputBuffer For output (or duplex) streams, the client should write \c nFrames of audio sample frames into this buffer.  This argument should be recast to the datatype specified when the stream was opened.  For input-only streams, this argument will be NULL.

   \param inputBuffer For input (or duplex) streams, this buffer will hold \c nFrames of input audio sample frames.  This argument should be recast to the datatype specified when the stream was opened.
   For output-only streams, this argument will be NULL.

   \param nFrames The number of sample frames of input or output data in the buffers.  The actual buffer size in bytes is dependent on the data type and number of channels in use.

   \param streamTime The number of seconds that have elapsed since the stream was started.

   \param status If non-zero, this argument indicates a data overflow or underflow condition for the stream.  The particular condition can be determined by comparison with the RtAudioStreamStatus flags.

   \param userData A pointer to optional data provided by the client when opening the stream (default = NULL).

   \return To continue normal stream operation, the RtAudioCallback function should return a value of zero.  To stop the stream and drain the output buffer, the function should return a value of one.  To abort the stream immediately, the client should return a value of two.
*/
typedef int (*RtAudioCallback)( void *outputBuffer, void *inputBuffer,
                                unsigned int nFrames,
                                double streamTime,
                                RtAudioStreamStatus status,
                                void *userData );

/************************************************************************/
/*! \class RtAudioError
    \brief Exception handling class for RtAudio.

    The RtAudioError class is quite simple but it does allow errors to be "caught" by RtAudioError::Type.  See the RtAudio documentation to know which methods can throw an RtAudioError.
*/
/************************************************************************/

class RTAUDIO_DLL_PUBLIC RtAudioError : public std::runtime_error
{
 public:
  //! Defined RtAudioError types.
  enum Type {
    WARNING,           /*!< A non-critical error. */
    DEBUG_WARNING,     /*!< A non-critical error which might be useful for debugging. */
    UNSPECIFIED,       /*!< The default, unspecified error type. */
    NO_DEVICES_FOUND,  /*!< No devices found on system. */
    INVALID_DEVICE,    /*!< An invalid device ID was specified. */
    MEMORY_ERROR,      /*!< An error occurred during memory allocation.
*/
    INVALID_PARAMETER, /*!< An invalid parameter was specified to a function. */
    INVALID_USE,       /*!< The function was called incorrectly. */
    DRIVER_ERROR,      /*!< A system driver error occurred. */
    SYSTEM_ERROR,      /*!< A system error occurred. */
    THREAD_ERROR       /*!< A thread error occurred. */
  };

  //! The constructor.
  RtAudioError( const std::string& message,
                Type type = RtAudioError::UNSPECIFIED )
    : std::runtime_error(message), type_(type) {}

  //! Prints thrown error message to stderr.
  virtual void printMessage( void ) const
    { std::cerr << '\n' << what() << "\n\n"; }

  //! Returns the thrown error message type.
  virtual const Type& getType(void) const { return type_; }

  //! Returns the thrown error message string.
  virtual const std::string getMessage(void) const
    { return std::string(what()); }

 protected:
  Type type_;
};

//! RtAudio error callback function prototype.
/*!
    \param type Type of error.
    \param errorText Error description.
 */
typedef void (*RtAudioErrorCallback)( RtAudioError::Type type, const std::string &errorText );

// **************************************************************** //
//
// RtAudio class declaration.
//
// RtAudio is a "controller" used to select an available audio i/o
// interface.  It presents a common API for the user to call but all
// functionality is implemented by the class RtApi and its
// subclasses.  RtAudio creates an instance of an RtApi subclass
// based on the user's API choice.  If no choice is made, RtAudio
// attempts to make a "logical" API selection.
//
// **************************************************************** //

class RtApi;

class RTAUDIO_DLL_PUBLIC RtAudio
{
 public:

  //! Audio API specifier arguments.
  enum Api {
    UNSPECIFIED,    /*!< Search for a working compiled API. */
    LINUX_ALSA,     /*!< The Advanced Linux Sound Architecture API. */
    LINUX_PULSE,    /*!< The Linux PulseAudio API. */
    LINUX_OSS,      /*!< The Linux Open Sound System API. */
    UNIX_JACK,      /*!< The Jack Low-Latency Audio Server API.
*/
    MACOSX_CORE,    /*!< Macintosh OS-X Core Audio API. */
    WINDOWS_WASAPI, /*!< The Microsoft WASAPI API. */
    WINDOWS_ASIO,   /*!< The Steinberg Audio Stream I/O API. */
    WINDOWS_DS,     /*!< The Microsoft DirectSound API. */
    RTAUDIO_DUMMY,  /*!< A compilable but non-functional API. */
    NUM_APIS        /*!< Number of values in this enum. */
  };

  //! The public device information structure for returning queried values.
  struct DeviceInfo {
    bool probed;                      /*!< true if the device capabilities were successfully probed. */
    std::string name;                 /*!< Character string device identifier. */
    unsigned int outputChannels;      /*!< Maximum output channels supported by device. */
    unsigned int inputChannels;       /*!< Maximum input channels supported by device. */
    unsigned int duplexChannels;      /*!< Maximum simultaneous input/output channels supported by device. */
    bool isDefaultOutput;             /*!< true if this is the default output device. */
    bool isDefaultInput;              /*!< true if this is the default input device. */
    std::vector<unsigned int> sampleRates; /*!< Supported sample rates (queried from list of standard rates). */
    unsigned int preferredSampleRate; /*!< Preferred sample rate, e.g. for WASAPI the system sample rate. */
    RtAudioFormat nativeFormats;      /*!< Bit mask of supported data formats. */

    // Default constructor.
    DeviceInfo()
      :probed(false), outputChannels(0), inputChannels(0), duplexChannels(0),
       isDefaultOutput(false), isDefaultInput(false), preferredSampleRate(0), nativeFormats(0) {}
  };

  //! The structure for specifying input or output stream parameters.
  struct StreamParameters {
    unsigned int deviceId;     /*!< Device index (0 to getDeviceCount() - 1). */
    unsigned int nChannels;    /*!< Number of channels. */
    unsigned int firstChannel; /*!< First channel index on device (default = 0). */

    // Default constructor.
    StreamParameters()
      : deviceId(0), nChannels(0), firstChannel(0) {}
  };

  //! The structure for specifying stream options.
  /*!
    The following flags can be OR'ed together to allow a client to make changes to the default stream behavior:

    - \e RTAUDIO_NONINTERLEAVED:    Use non-interleaved buffers (default = interleaved).
    - \e RTAUDIO_MINIMIZE_LATENCY:  Attempt to set stream parameters for lowest possible latency.
    - \e RTAUDIO_HOG_DEVICE:        Attempt to grab device for exclusive use.
    - \e RTAUDIO_SCHEDULE_REALTIME: Attempt to select realtime scheduling for callback thread.
    - \e RTAUDIO_ALSA_USE_DEFAULT:  Use the "default" PCM device (ALSA only).

    By default, RtAudio streams pass and receive audio data from the client in an interleaved format.  By passing the RTAUDIO_NONINTERLEAVED flag to the openStream() function, audio data will instead be presented in non-interleaved buffers.  In this case, each buffer argument in the RtAudioCallback function will point to a single array of data, with \c nFrames samples for each channel concatenated back-to-back.  For example, the first sample of data for the second channel would be located at index \c nFrames (assuming the \c buffer pointer was recast to the correct data type for the stream).

    Certain audio APIs offer a number of parameters that influence the I/O latency of a stream.  By default, RtAudio will attempt to set these parameters internally for robust (glitch-free) performance (though some APIs, like Windows DirectSound, make this difficult).  By passing the RTAUDIO_MINIMIZE_LATENCY flag to the openStream() function, internal stream settings will be influenced in an attempt to minimize stream latency, though possibly at the expense of stream performance.

    If the RTAUDIO_HOG_DEVICE flag is set, RtAudio will attempt to open the input and/or output stream device(s) for exclusive use.  Note that this is not possible with all supported audio APIs.

    If the RTAUDIO_SCHEDULE_REALTIME flag is set, RtAudio will attempt to select realtime scheduling (round-robin) for the callback thread.  The \c priority parameter will only be used if the RTAUDIO_SCHEDULE_REALTIME flag is set.
    It defines the thread's realtime priority.

    If the RTAUDIO_ALSA_USE_DEFAULT flag is set, RtAudio will attempt to open the "default" PCM device when using the ALSA API.  Note that this will override any specified input or output device id.

    The \c numberOfBuffers parameter can be used to control stream latency in the Windows DirectSound, Linux OSS, and Linux ALSA APIs only.  A value of two is usually the smallest allowed.  Larger numbers can potentially result in more robust stream performance, though likely at the cost of stream latency.  The value set by the user is replaced during execution of the RtAudio::openStream() function by the value actually used by the system.

    The \c streamName parameter can be used to set the client name when using the Jack API.  By default, the client name is set to RtApiJack.  However, if you wish to create multiple instances of RtAudio with Jack, each instance must have a unique client name.
  */
  struct StreamOptions {
    RtAudioStreamFlags flags;      /*!< A bit-mask of stream flags (RTAUDIO_NONINTERLEAVED, RTAUDIO_MINIMIZE_LATENCY, RTAUDIO_HOG_DEVICE, RTAUDIO_ALSA_USE_DEFAULT). */
    unsigned int numberOfBuffers;  /*!< Number of stream buffers. */
    std::string streamName;        /*!< A stream name (currently used only in Jack). */
    int priority;                  /*!< Scheduling priority of callback thread (only used with flag RTAUDIO_SCHEDULE_REALTIME). */

    // Default constructor.
    StreamOptions()
      : flags(0), numberOfBuffers(0), priority(0) {}
  };

  //! A static function to determine the current RtAudio version.
  static std::string getVersion( void );

  //! A static function to determine the available compiled audio APIs.
  /*!
    The values returned in the std::vector can be compared against the enumerated list values.  Note that there can be more than one API compiled for certain operating systems.
  */
  static void getCompiledApi( std::vector<RtAudio::Api> &apis );

  //! Return the name of a specified compiled audio API.
  /*!
    This obtains a short lower-case name used for identification purposes.
This value is guaranteed to remain identical across library versions. If the API is unknown, this function will return the empty string. */ static std::string getApiName( RtAudio::Api api ); //! Return the display name of a specified compiled audio API. /*! This obtains a long name used for display purposes. If the API is unknown, this function will return the empty string. */ static std::string getApiDisplayName( RtAudio::Api api ); //! Return the compiled audio API having the given name. /*! A case insensitive comparison will check the specified name against the list of compiled APIs, and return the one which matches. On failure, the function returns UNSPECIFIED. */ static RtAudio::Api getCompiledApiByName( const std::string &name ); //! The class constructor. /*! The constructor performs minor initialization tasks. An exception can be thrown if no API support is compiled. If no API argument is specified and multiple API support has been compiled, the default order of use is JACK, ALSA, OSS (Linux systems) and ASIO, DS (Windows systems). */ RtAudio( RtAudio::Api api=UNSPECIFIED ); //! The destructor. /*! If a stream is running or open, it will be stopped and closed automatically. */ ~RtAudio(); //! Returns the audio API specifier for the current instance of RtAudio. RtAudio::Api getCurrentApi( void ); //! A public function that queries for the number of audio devices available. /*! This function performs a system query of available devices each time it is called, thus supporting devices connected \e after instantiation. If a system error occurs during processing, a warning will be issued. */ unsigned int getDeviceCount( void ); //! Return an RtAudio::DeviceInfo structure for a specified device number. /*! Any device integer between 0 and getDeviceCount() - 1 is valid. If an invalid argument is provided, an RtAudioError (type = INVALID_USE) will be thrown. 
If a device is busy or otherwise unavailable, the structure member "probed" will have a value of "false" and all other members are undefined. If the specified device is the current default input or output device, the corresponding "isDefault" member will have a value of "true". */ RtAudio::DeviceInfo getDeviceInfo( unsigned int device ); //! A function that returns the index of the default output device. /*! If the underlying audio API does not provide a "default device", or if no devices are available, the return value will be 0. Note that this is a valid device identifier and it is the client's responsibility to verify that a device is available before attempting to open a stream. */ unsigned int getDefaultOutputDevice( void ); //! A function that returns the index of the default input device. /*! If the underlying audio API does not provide a "default device", or if no devices are available, the return value will be 0. Note that this is a valid device identifier and it is the client's responsibility to verify that a device is available before attempting to open a stream. */ unsigned int getDefaultInputDevice( void ); //! A public function for opening a stream with the specified parameters. /*! An RtAudioError (type = SYSTEM_ERROR) is thrown if a stream cannot be opened with the specified parameters or an error occurs during processing. An RtAudioError (type = INVALID_USE) is thrown if any invalid device ID or channel number parameters are specified. \param outputParameters Specifies output stream parameters to use when opening a stream, including a device ID, number of channels, and starting channel number. For input-only streams, this argument should be NULL. The device ID is an index value between 0 and getDeviceCount() - 1. \param inputParameters Specifies input stream parameters to use when opening a stream, including a device ID, number of channels, and starting channel number. For output-only streams, this argument should be NULL. 
    The device ID is an index value between 0 and getDeviceCount() - 1.

    \param format An RtAudioFormat specifying the desired sample data format.

    \param sampleRate The desired sample rate (sample frames per second).

    \param *bufferFrames A pointer to a value indicating the desired internal buffer size in sample frames.  The actual value used by the device is returned via the same pointer.  A value of zero can be specified, in which case the lowest allowable value is determined.

    \param callback A client-defined function that will be invoked when input data is available and/or output data is needed.

    \param userData An optional pointer to data that can be accessed from within the callback function.

    \param options An optional pointer to a structure containing various global stream options, including a list of OR'ed RtAudioStreamFlags and a suggested number of stream buffers that can be used to control stream latency.  More buffers typically result in more robust performance, though at a cost of greater latency.  If a value of zero is specified, a system-specific median value is chosen.  If the RTAUDIO_MINIMIZE_LATENCY flag bit is set, the lowest allowable value is used.  The actual value used is returned via the structure argument.  The parameter is API dependent.

    \param errorCallback A client-defined function that will be invoked when an error has occurred.
  */
  void openStream( RtAudio::StreamParameters *outputParameters,
                   RtAudio::StreamParameters *inputParameters,
                   RtAudioFormat format, unsigned int sampleRate,
                   unsigned int *bufferFrames, RtAudioCallback callback,
                   void *userData = NULL, RtAudio::StreamOptions *options = NULL,
                   RtAudioErrorCallback errorCallback = NULL );

  //! A function that closes a stream and frees any associated stream memory.
  /*!
    If a stream is not open, this function issues a warning and returns (no exception is thrown).
  */
  void closeStream( void );

  //! A function that starts a stream.
  /*!
An RtAudioError (type = SYSTEM_ERROR) is thrown if an error occurs during processing. An RtAudioError (type = INVALID_USE) is thrown if a stream is not open. A warning is issued if the stream is already running. */ void startStream( void ); //! Stop a stream, allowing any samples remaining in the output queue to be played. /*! An RtAudioError (type = SYSTEM_ERROR) is thrown if an error occurs during processing. An RtAudioError (type = INVALID_USE) is thrown if a stream is not open. A warning is issued if the stream is already stopped. */ void stopStream( void ); //! Stop a stream, discarding any samples remaining in the input/output queue. /*! An RtAudioError (type = SYSTEM_ERROR) is thrown if an error occurs during processing. An RtAudioError (type = INVALID_USE) is thrown if a stream is not open. A warning is issued if the stream is already stopped. */ void abortStream( void ); //! Returns true if a stream is open and false if not. bool isStreamOpen( void ) const; //! Returns true if the stream is running and false if it is stopped or not open. bool isStreamRunning( void ) const; //! Returns the number of elapsed seconds since the stream was started. /*! If a stream is not open, an RtAudioError (type = INVALID_USE) will be thrown. */ double getStreamTime( void ); //! Set the stream time to a time in seconds greater than or equal to 0.0. /*! If a stream is not open, an RtAudioError (type = INVALID_USE) will be thrown. */ void setStreamTime( double time ); //! Returns the internal stream latency in sample frames. /*! The stream latency refers to delay in audio input and/or output caused by internal buffering by the audio system and/or hardware. For duplex streams, the returned value will represent the sum of the input and output latencies. If a stream is not open, an RtAudioError (type = INVALID_USE) will be thrown. If the API does not report latency, the return value will be zero. */ long getStreamLatency( void ); //! 
Returns actual sample rate in use by the stream. /*! On some systems, the sample rate used may be slightly different than that specified in the stream parameters. If a stream is not open, an RtAudioError (type = INVALID_USE) will be thrown. */ unsigned int getStreamSampleRate( void ); //! Specify whether warning messages should be printed to stderr. void showWarnings( bool value = true ); protected: void openRtApi( RtAudio::Api api ); RtApi *rtapi_; }; // Operating system dependent thread functionality. #if defined(__WINDOWS_DS__) || defined(__WINDOWS_ASIO__) || defined(__WINDOWS_WASAPI__) #ifndef NOMINMAX #define NOMINMAX #endif #include #include #include typedef uintptr_t ThreadHandle; typedef CRITICAL_SECTION StreamMutex; #elif defined(__LINUX_ALSA__) || defined(__LINUX_PULSE__) || defined(__UNIX_JACK__) || defined(__LINUX_OSS__) || defined(__MACOSX_CORE__) // Using pthread library for various flavors of unix. #include typedef pthread_t ThreadHandle; typedef pthread_mutex_t StreamMutex; #else // Setup for "dummy" behavior #define __RTAUDIO_DUMMY__ typedef int ThreadHandle; typedef int StreamMutex; #endif // This global structure type is used to pass callback information // between the private RtAudio stream structure and global callback // handling functions. struct CallbackInfo { void *object; // Used as a "this" pointer. ThreadHandle thread; void *callback; void *userData; void *errorCallback; void *apiInfo; // void pointer for API specific callback information bool isRunning; bool doRealtime; int priority; // Default constructor. CallbackInfo() :object(0), callback(0), userData(0), errorCallback(0), apiInfo(0), isRunning(false), doRealtime(false), priority(0) {} }; // **************************************************************** // // // RtApi class declaration. // // Subclasses of RtApi contain all API- and OS-specific code necessary // to fully implement the RtAudio API. 
//
// Note that RtApi is an abstract base class and cannot be
// explicitly instantiated.  The class RtAudio will create an
// instance of an RtApi subclass (RtApiOss, RtApiAlsa,
// RtApiJack, RtApiCore, RtApiDs, or RtApiAsio).
//
// **************************************************************** //

#pragma pack(push, 1)
class S24 {

 protected:
  unsigned char c3[3];

 public:
  S24() {}

  S24& operator = ( const int& i ) {
    c3[0] = (i & 0x000000ff);
    c3[1] = (i & 0x0000ff00) >> 8;
    c3[2] = (i & 0x00ff0000) >> 16;
    return *this;
  }

  S24( const double& d ) { *this = (int) d; }
  S24( const float& f ) { *this = (int) f; }
  S24( const signed short& s ) { *this = (int) s; }
  S24( const char& c ) { *this = (int) c; }

  int asInt() {
    int i = c3[0] | (c3[1] << 8) | (c3[2] << 16);
    if (i & 0x800000) i |= ~0xffffff;
    return i;
  }
};
#pragma pack(pop)

#if defined( HAVE_GETTIMEOFDAY )
  #include <sys/time.h>
#endif

#include <sstream>

class RTAUDIO_DLL_PUBLIC RtApi
{
public:

  RtApi();
  virtual ~RtApi();
  virtual RtAudio::Api getCurrentApi( void ) = 0;
  virtual unsigned int getDeviceCount( void ) = 0;
  virtual RtAudio::DeviceInfo getDeviceInfo( unsigned int device ) = 0;
  virtual unsigned int getDefaultInputDevice( void );
  virtual unsigned int getDefaultOutputDevice( void );
  void openStream( RtAudio::StreamParameters *outputParameters,
                   RtAudio::StreamParameters *inputParameters,
                   RtAudioFormat format, unsigned int sampleRate,
                   unsigned int *bufferFrames, RtAudioCallback callback,
                   void *userData, RtAudio::StreamOptions *options,
                   RtAudioErrorCallback errorCallback );
  virtual void closeStream( void );
  virtual void startStream( void ) = 0;
  virtual void stopStream( void ) = 0;
  virtual void abortStream( void ) = 0;
  long getStreamLatency( void );
  unsigned int getStreamSampleRate( void );
  virtual double getStreamTime( void );
  virtual void setStreamTime( double time );
  bool isStreamOpen( void ) const { return stream_.state != STREAM_CLOSED; }
  bool isStreamRunning( void ) const { return stream_.state == STREAM_RUNNING; }
  void showWarnings( bool value ) { showWarnings_ = value; }

protected:

  static const unsigned int MAX_SAMPLE_RATES;
  static const unsigned int SAMPLE_RATES[];

  enum { FAILURE, SUCCESS };

  enum StreamState {
    STREAM_STOPPED,
    STREAM_STOPPING,
    STREAM_RUNNING,
    STREAM_CLOSED = -50
  };

  enum StreamMode {
    OUTPUT,
    INPUT,
    DUPLEX,
    UNINITIALIZED = -75
  };

  // A protected structure used for buffer conversion.
  struct ConvertInfo {
    int channels;
    int inJump, outJump;
    RtAudioFormat inFormat, outFormat;
    std::vector<int> inOffset;
    std::vector<int> outOffset;
  };

  // A protected structure for audio streams.
  struct RtApiStream {
    unsigned int device[2];    // Playback and record, respectively.
    void *apiHandle;           // void pointer for API specific stream handle information
    StreamMode mode;           // OUTPUT, INPUT, or DUPLEX.
    StreamState state;         // STOPPED, RUNNING, or CLOSED
    char *userBuffer[2];       // Playback and record, respectively.
    char *deviceBuffer;
    bool doConvertBuffer[2];   // Playback and record, respectively.
    bool userInterleaved;
    bool deviceInterleaved[2]; // Playback and record, respectively.
    bool doByteSwap[2];        // Playback and record, respectively.
    unsigned int sampleRate;
    unsigned int bufferSize;
    unsigned int nBuffers;
    unsigned int nUserChannels[2];   // Playback and record, respectively.
    unsigned int nDeviceChannels[2]; // Playback and record channels, respectively.
    unsigned int channelOffset[2];   // Playback and record, respectively.
    unsigned long latency[2];        // Playback and record, respectively.
    RtAudioFormat userFormat;
    RtAudioFormat deviceFormat[2];   // Playback and record, respectively.
    StreamMutex mutex;
    CallbackInfo callbackInfo;
    ConvertInfo convertInfo[2];
    double streamTime;         // Number of elapsed seconds since the stream started.
#if defined(HAVE_GETTIMEOFDAY)
    struct timeval lastTickTimestamp;
#endif

    RtApiStream()
      :apiHandle(0), deviceBuffer(0) { device[0] = 11111; device[1] = 11111; }
  };

  typedef S24 Int24;
  typedef signed short Int16;
  typedef signed int Int32;
  typedef float Float32;
  typedef double Float64;

  std::ostringstream errorStream_;
  std::string errorText_;
  bool showWarnings_;
  RtApiStream stream_;
  bool firstErrorOccurred_;

  /*!
    Protected, api-specific method that attempts to open a device
    with the given parameters.  This function MUST be implemented by
    all subclasses.  If an error is encountered during the probe, a
    "warning" message is reported and FAILURE is returned. A
    successful probe is indicated by a return value of SUCCESS.
  */
  virtual bool probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                                unsigned int firstChannel, unsigned int sampleRate,
                                RtAudioFormat format, unsigned int *bufferSize,
                                RtAudio::StreamOptions *options );

  //! A protected function used to increment the stream time.
  void tickStreamTime( void );

  //! Protected common method to clear an RtApiStream structure.
  void clearStreamInfo();

  /*!
    Protected common method that throws an RtAudioError (type =
    INVALID_USE) if a stream is not open.
  */
  void verifyStream( void );

  //! Protected common error method to allow global control over error handling.
  void error( RtAudioError::Type type );

  /*!
    Protected method used to perform format, channel number, and/or interleaving
    conversions between the user and device buffers.
  */
  void convertBuffer( char *outBuffer, char *inBuffer, ConvertInfo &info );

  //! Protected common method used to perform byte-swapping on buffers.
  void byteSwapBuffer( char *buffer, unsigned int samples, RtAudioFormat format );

  //! Protected common method that returns the number of bytes for a given format.
  unsigned int formatBytes( RtAudioFormat format );

  //! Protected common method that sets up the parameters for buffer conversion.
  void setConvertInfo( StreamMode mode, unsigned int firstChannel );
};

// **************************************************************** //
//
// Inline RtAudio definitions.
//
// **************************************************************** //

inline RtAudio::Api RtAudio :: getCurrentApi( void ) { return rtapi_->getCurrentApi(); }
inline unsigned int RtAudio :: getDeviceCount( void ) { return rtapi_->getDeviceCount(); }
inline RtAudio::DeviceInfo RtAudio :: getDeviceInfo( unsigned int device ) { return rtapi_->getDeviceInfo( device ); }
inline unsigned int RtAudio :: getDefaultInputDevice( void ) { return rtapi_->getDefaultInputDevice(); }
inline unsigned int RtAudio :: getDefaultOutputDevice( void ) { return rtapi_->getDefaultOutputDevice(); }
inline void RtAudio :: closeStream( void ) { return rtapi_->closeStream(); }
inline void RtAudio :: startStream( void ) { return rtapi_->startStream(); }
inline void RtAudio :: stopStream( void )  { return rtapi_->stopStream(); }
inline void RtAudio :: abortStream( void ) { return rtapi_->abortStream(); }
inline bool RtAudio :: isStreamOpen( void ) const { return rtapi_->isStreamOpen(); }
inline bool RtAudio :: isStreamRunning( void ) const { return rtapi_->isStreamRunning(); }
inline long RtAudio :: getStreamLatency( void ) { return rtapi_->getStreamLatency(); }
inline unsigned int RtAudio :: getStreamSampleRate( void ) { return rtapi_->getStreamSampleRate(); }
inline double RtAudio :: getStreamTime( void ) { return rtapi_->getStreamTime(); }
inline void RtAudio :: setStreamTime( double time ) { return rtapi_->setStreamTime( time ); }
inline void RtAudio :: showWarnings( bool value ) { rtapi_->showWarnings( value ); }

// RtApi Subclass prototypes.
#if defined(__MACOSX_CORE__)

#include <CoreAudio/AudioHardware.h>

class RtApiCore: public RtApi
{
public:

  RtApiCore();
  ~RtApiCore();
  RtAudio::Api getCurrentApi( void ) { return RtAudio::MACOSX_CORE; }
  unsigned int getDeviceCount( void );
  RtAudio::DeviceInfo getDeviceInfo( unsigned int device );
  unsigned int getDefaultOutputDevice( void );
  unsigned int getDefaultInputDevice( void );
  void closeStream( void );
  void startStream( void );
  void stopStream( void );
  void abortStream( void );

  // This function is intended for internal use only.  It must be
  // public because it is called by the internal callback handler,
  // which is not a member of RtAudio.  External use of this function
  // will most likely produce highly undesirable results!
  bool callbackEvent( AudioDeviceID deviceId,
                      const AudioBufferList *inBufferList,
                      const AudioBufferList *outBufferList );

 private:

  bool probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                        unsigned int firstChannel, unsigned int sampleRate,
                        RtAudioFormat format, unsigned int *bufferSize,
                        RtAudio::StreamOptions *options );
  static const char* getErrorCode( OSStatus code );
};

#endif

#if defined(__UNIX_JACK__)

class RtApiJack: public RtApi
{
public:

  RtApiJack();
  ~RtApiJack();
  RtAudio::Api getCurrentApi( void ) { return RtAudio::UNIX_JACK; }
  unsigned int getDeviceCount( void );
  RtAudio::DeviceInfo getDeviceInfo( unsigned int device );
  void closeStream( void );
  void startStream( void );
  void stopStream( void );
  void abortStream( void );

  // This function is intended for internal use only.  It must be
  // public because it is called by the internal callback handler,
  // which is not a member of RtAudio.  External use of this function
  // will most likely produce highly undesirable results!
  bool callbackEvent( unsigned long nframes );

 private:

  bool probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                        unsigned int firstChannel, unsigned int sampleRate,
                        RtAudioFormat format, unsigned int *bufferSize,
                        RtAudio::StreamOptions *options );

  bool shouldAutoconnect_;
};

#endif

#if defined(__WINDOWS_ASIO__)

class RtApiAsio: public RtApi
{
public:

  RtApiAsio();
  ~RtApiAsio();
  RtAudio::Api getCurrentApi( void ) { return RtAudio::WINDOWS_ASIO; }
  unsigned int getDeviceCount( void );
  RtAudio::DeviceInfo getDeviceInfo( unsigned int device );
  void closeStream( void );
  void startStream( void );
  void stopStream( void );
  void abortStream( void );

  // This function is intended for internal use only.  It must be
  // public because it is called by the internal callback handler,
  // which is not a member of RtAudio.  External use of this function
  // will most likely produce highly undesirable results!
  bool callbackEvent( long bufferIndex );

 private:

  std::vector<RtAudio::DeviceInfo> devices_;
  void saveDeviceInfo( void );
  bool coInitialized_;
  bool probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                        unsigned int firstChannel, unsigned int sampleRate,
                        RtAudioFormat format, unsigned int *bufferSize,
                        RtAudio::StreamOptions *options );
};

#endif

#if defined(__WINDOWS_DS__)

class RtApiDs: public RtApi
{
public:

  RtApiDs();
  ~RtApiDs();
  RtAudio::Api getCurrentApi( void ) { return RtAudio::WINDOWS_DS; }
  unsigned int getDeviceCount( void );
  unsigned int getDefaultOutputDevice( void );
  unsigned int getDefaultInputDevice( void );
  RtAudio::DeviceInfo getDeviceInfo( unsigned int device );
  void closeStream( void );
  void startStream( void );
  void stopStream( void );
  void abortStream( void );

  // This function is intended for internal use only.  It must be
  // public because it is called by the internal callback handler,
  // which is not a member of RtAudio.  External use of this function
  // will most likely produce highly undesirable results!
  void callbackEvent( void );

 private:

  bool coInitialized_;
  bool buffersRolling;
  long duplexPrerollBytes;
  std::vector<struct DsDevice> dsDevices;
  bool probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                        unsigned int firstChannel, unsigned int sampleRate,
                        RtAudioFormat format, unsigned int *bufferSize,
                        RtAudio::StreamOptions *options );
};

#endif

#if defined(__WINDOWS_WASAPI__)

struct IMMDeviceEnumerator;

class RtApiWasapi : public RtApi
{
public:
  RtApiWasapi();
  virtual ~RtApiWasapi();

  RtAudio::Api getCurrentApi( void ) { return RtAudio::WINDOWS_WASAPI; }
  unsigned int getDeviceCount( void );
  RtAudio::DeviceInfo getDeviceInfo( unsigned int device );
  unsigned int getDefaultOutputDevice( void );
  unsigned int getDefaultInputDevice( void );
  void closeStream( void );
  void startStream( void );
  void stopStream( void );
  void abortStream( void );

private:
  bool coInitialized_;
  IMMDeviceEnumerator* deviceEnumerator_;

  bool probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                        unsigned int firstChannel, unsigned int sampleRate,
                        RtAudioFormat format, unsigned int* bufferSize,
                        RtAudio::StreamOptions* options );

  static DWORD WINAPI runWasapiThread( void* wasapiPtr );
  static DWORD WINAPI stopWasapiThread( void* wasapiPtr );
  static DWORD WINAPI abortWasapiThread( void* wasapiPtr );
  void wasapiThread();
};

#endif

#if defined(__LINUX_ALSA__)

class RtApiAlsa: public RtApi
{
public:

  RtApiAlsa();
  ~RtApiAlsa();
  RtAudio::Api getCurrentApi() { return RtAudio::LINUX_ALSA; }
  unsigned int getDeviceCount( void );
  RtAudio::DeviceInfo getDeviceInfo( unsigned int device );
  void closeStream( void );
  void startStream( void );
  void stopStream( void );
  void abortStream( void );

  // This function is intended for internal use only.  It must be
  // public because it is called by the internal callback handler,
  // which is not a member of RtAudio.  External use of this function
  // will most likely produce highly undesirable results!
  void callbackEvent( void );

 private:

  std::vector<RtAudio::DeviceInfo> devices_;
  void saveDeviceInfo( void );
  bool probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                        unsigned int firstChannel, unsigned int sampleRate,
                        RtAudioFormat format, unsigned int *bufferSize,
                        RtAudio::StreamOptions *options );
};

#endif

#if defined(__LINUX_PULSE__)

class RtApiPulse: public RtApi
{
public:
  ~RtApiPulse();
  RtAudio::Api getCurrentApi() { return RtAudio::LINUX_PULSE; }
  unsigned int getDeviceCount( void );
  RtAudio::DeviceInfo getDeviceInfo( unsigned int device );
  void closeStream( void );
  void startStream( void );
  void stopStream( void );
  void abortStream( void );

  // This function is intended for internal use only.  It must be
  // public because it is called by the internal callback handler,
  // which is not a member of RtAudio.  External use of this function
  // will most likely produce highly undesirable results!
  void callbackEvent( void );

 private:

  std::vector<RtAudio::DeviceInfo> devices_;
  void saveDeviceInfo( void );
  bool probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                        unsigned int firstChannel, unsigned int sampleRate,
                        RtAudioFormat format, unsigned int *bufferSize,
                        RtAudio::StreamOptions *options );
};

#endif

#if defined(__LINUX_OSS__)

class RtApiOss: public RtApi
{
public:

  RtApiOss();
  ~RtApiOss();
  RtAudio::Api getCurrentApi() { return RtAudio::LINUX_OSS; }
  unsigned int getDeviceCount( void );
  RtAudio::DeviceInfo getDeviceInfo( unsigned int device );
  void closeStream( void );
  void startStream( void );
  void stopStream( void );
  void abortStream( void );

  // This function is intended for internal use only.  It must be
  // public because it is called by the internal callback handler,
  // which is not a member of RtAudio.  External use of this function
  // will most likely produce highly undesirable results!
  void callbackEvent( void );

 private:

  bool probeDeviceOpen( unsigned int device, StreamMode mode, unsigned int channels,
                        unsigned int firstChannel, unsigned int sampleRate,
                        RtAudioFormat format, unsigned int *bufferSize,
                        RtAudio::StreamOptions *options );
};

#endif

#if defined(__RTAUDIO_DUMMY__)

class RtApiDummy: public RtApi
{
public:

  RtApiDummy() { errorText_ = "RtApiDummy: This class provides no functionality."; error( RtAudioError::WARNING ); }
  RtAudio::Api getCurrentApi( void ) { return RtAudio::RTAUDIO_DUMMY; }
  unsigned int getDeviceCount( void ) { return 0; }
  RtAudio::DeviceInfo getDeviceInfo( unsigned int /*device*/ ) { RtAudio::DeviceInfo info; return info; }
  void closeStream( void ) {}
  void startStream( void ) {}
  void stopStream( void ) {}
  void abortStream( void ) {}

 private:

  bool probeDeviceOpen( unsigned int /*device*/, StreamMode /*mode*/, unsigned int /*channels*/,
                        unsigned int /*firstChannel*/, unsigned int /*sampleRate*/,
                        RtAudioFormat /*format*/, unsigned int * /*bufferSize*/,
                        RtAudio::StreamOptions * /*options*/ ) { return false; }
};

#endif

#endif

// Indentation settings for Vim and Emacs
//
// Local Variables:
// c-basic-offset: 2
// indent-tabs-mode: nil
// End:
//
// vim: et sts=2 sw=2

zytrax-master/drivers/rtaudio/sound_driver_rtaudio.cpp

#include "sound_driver_rtaudio.h"

#include "engine/sound_driver_manager.h"
#include "globals/error_macros.h"
#include "globals/vector.h"
#include <mutex>
//
#include "drivers/rtaudio/rtaudio/RtAudio.h"

Vector<RtAudio *> rt_audios;

class SoundDriverRTAudio : public SoundDriver {
public:
	RtAudio *rt_audio;
	std::recursive_mutex mutex;
	int device;
	int mix_rate;
	RtAudio::DeviceInfo info;
	RtAudio::StreamParameters parameters;
	unsigned int buffer_frames;
	int step_frames;

	bool active;

	virtual String get_id() const {
		String name = RtAudio::getApiName(rt_audio->getCurrentApi()).c_str();
		name += "::";
		name += info.name.c_str();
		return name;
	}

	static int
	rt_audio_callbacks(void *outputBuffer, void *inputBuffer, unsigned int nFrames,
			double streamTime, RtAudioStreamStatus status, void *userData) {
		SoundDriverRTAudio *driver = (SoundDriverRTAudio *)userData;
		return driver->rt_audio_callback(outputBuffer, inputBuffer, nFrames, streamTime, status);
	}

	int rt_audio_callback(void *outputBuffer, void *inputBuffer, unsigned int nFrames,
			double streamTime, RtAudioStreamStatus status) {
		if (mutex.try_lock()) {
			mix((AudioFrame *)outputBuffer, nFrames);
			mutex.unlock();
		}
		return 0;
	}

	virtual void lock() {
		mutex.lock();
	}
	virtual void unlock() {
		mutex.unlock();
	}

	virtual String get_name() const {
		String name = RtAudio::getApiDisplayName(rt_audio->getCurrentApi()).c_str();
		name += ": ";
		name += info.name.c_str();
		return name;
	}

	virtual float get_max_level_l() {
		return 0;
	}
	virtual float get_max_level_r() {
		return 0;
	}

	virtual bool is_active() {
		return active;
	}

	virtual bool init() {
		ERR_FAIL_COND_V(active, false);

		active = false;

		parameters.deviceId = device;
		parameters.nChannels = 2;
		parameters.firstChannel = 0;

		buffer_frames = SoundDriverManager::get_buffer_size_frames(SoundDriverManager::get_buffer_size());
		step_frames = SoundDriverManager::get_buffer_size_frames(SoundDriverManager::get_step_buffer_size());
		mix_rate = SoundDriverManager::get_mix_frequency_hz(SoundDriverManager::get_mix_frequency());

		try {
			rt_audio->openStream(&parameters, NULL, RTAUDIO_FLOAT32, mix_rate, &buffer_frames, rt_audio_callbacks, this);
		} catch (RtAudioError &error) {
			error.printMessage();
			return false;
		}

		active = true;
		rt_audio->startStream();
		return true;
	}

	virtual void finish() {
		ERR_FAIL_COND(!active);
		active = false;
		rt_audio->stopStream();
		rt_audio->closeStream();
	}

	virtual int get_mix_rate() const {
		return mix_rate;
	}
	virtual int get_buffer_size() const {
		return buffer_frames;
	}
	virtual int get_step_size() const {
		return step_frames;
	}

	SoundDriverRTAudio() {
		active = false;
		device = 0;
		mix_rate = 44100;
	}

	~SoundDriverRTAudio() {}
};

static Vector<SoundDriverRTAudio *> drivers;

void register_rtaudio_driver() {
	std::vector<RtAudio::Api> apis;
	RtAudio::getCompiledApi(apis);

	for (int i = 0; i < apis.size(); i++) {
		RtAudio *rt_audio;
		try {
			rt_audio = new RtAudio(apis[i]);
		} catch (RtAudioError &error) {
			error.printMessage();
			continue;
		}

		rt_audios.push_back(rt_audio);

		// Determine the number of devices available
		int devices = rt_audio->getDeviceCount();
		// Scan through devices for various capabilities
		for (int j = 0; j < devices; j++) {
			SoundDriverRTAudio *driver = new SoundDriverRTAudio;
			try {
				driver->info = rt_audio->getDeviceInfo(j);
				if (driver->info.probed) {
					driver->device = j;
					driver->rt_audio = rt_audio;
					drivers.push_back(driver);
					SoundDriverManager::register_driver(driver);
					continue;
				}
			} catch (RtAudioError &error) {
				error.printMessage();
			}

			delete driver; // probe failed, discard
		}
	}
}

void cleanup_rtaudio_driver() {
	for (int i = 0; i < drivers.size(); i++) {
		delete drivers[i];
	}
	for (int i = 0; i < rt_audios.size(); i++) {
		delete rt_audios[i];
	}
}

zytrax-master/drivers/rtaudio/sound_driver_rtaudio.h

#ifndef SOUND_DRIVER_RTAUDIO_H
#define SOUND_DRIVER_RTAUDIO_H

#include "engine/sound_driver.h"

void register_rtaudio_driver();
void cleanup_rtaudio_driver();

#endif // SOUND_DRIVER_RTAUDIO_H

zytrax-master/drivers/rtmidi/

zytrax-master/drivers/rtmidi/midi_driver_rtmidi.cpp

#include "midi_driver_rtmidi.h"

#include "engine/midi_driver_manager.h"
#include "globals/vector.h"
//
#include "drivers/rtmidi/rtmidi/RtMidi.h"

static RtMidiIn *midiin = NULL;

class MIDIInputDriverRtMidi : public MIDIInputDriver {
public:
	static void midi_callback(double deltatime, std::vector<unsigned char> *message, void *userData) {
		MIDIInputDriverRtMidi *driver = (MIDIInputDriverRtMidi *)userData;

		MIDIEvent ev;
		if (ev.parse(&(*message)[0]) == OK)
		{
			driver->event(deltatime, ev);
		}
	}

	virtual void lock() {
	}
	virtual void unlock() {
	}

	String name;
	String id;
	int index;
	bool active;

	virtual String get_name() const {
		return name;
	}
	virtual String get_id() const {
		return id;
	}

	virtual bool is_active() {
		return active;
	}

	virtual bool init() {
		ERR_FAIL_COND_V(active, false);

		midiin->openPort(index);
		// Set our callback function.  This should be done immediately after
		// opening the port to avoid having incoming messages written to the
		// queue.
		midiin->setCallback(&midi_callback, this);
		// Don't ignore sysex, timing, or active sensing messages.
		midiin->ignoreTypes(false, false, false);

		active = true;
		return true;
	}

	virtual void finish() {
		midiin->closePort();
		active = false;
	}

	MIDIInputDriverRtMidi() {
		index = -1;
		active = false;
	}
	~MIDIInputDriverRtMidi() {
	}
};

static Vector<MIDIInputDriverRtMidi *> midi_drivers;

void register_rtmidi_driver() {
	midiin = new RtMidiIn();

	unsigned int nPorts = midiin->getPortCount();
	for (int i = 0; i < nPorts; i++) {
		MIDIInputDriverRtMidi *driver = new MIDIInputDriverRtMidi;
		driver->name.parse_utf8(midiin->getPortName(i).c_str());
		driver->id = "RtMidi:" + driver->name;
		driver->index = i;
		midi_drivers.push_back(driver);
		MIDIDriverManager::add_input_driver(driver);
	}
}

void unregister_rtmidi_driver() {
	for (int i = 0; i < midi_drivers.size(); i++) {
		delete midi_drivers[i];
	}
	delete midiin;
}

zytrax-master/drivers/rtmidi/midi_driver_rtmidi.h

#ifndef MIDI_DRIVER_RTMIDI_H
#define MIDI_DRIVER_RTMIDI_H
//
void register_rtmidi_driver();
void unregister_rtmidi_driver();

#endif // MIDI_DRIVER_RTMIDI_H

zytrax-master/drivers/rtmidi/rtmidi/

zytrax-master/drivers/rtmidi/rtmidi/RtMidi.cpp
\class RtMidi \brief An abstract base class for realtime MIDI input/output. This class implements some common functionality for the realtime MIDI input/output subclasses RtMidiIn and RtMidiOut. RtMidi GitHub site: https://github.com/thestk/rtmidi RtMidi WWW site: http://www.music.mcgill.ca/~gary/rtmidi/ RtMidi: realtime MIDI i/o C++ classes Copyright (c) 2003-2019 Gary P. Scavone Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. Any person wishing to distribute modifications to the Software is asked to send the modifications to the original developer so that they can be incorporated into the canonical version. This is, however, not a binding provision of this license. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/ /**********************************************************************/ #include "RtMidi.h" #include #if defined(__MACOSX_CORE__) #if TARGET_OS_IPHONE #define AudioGetCurrentHostTime CAHostTimeBase::GetCurrentTime #define AudioConvertHostTimeToNanos CAHostTimeBase::ConvertToNanos #endif #endif // Default for Windows is to add an identifier to the port names; this // flag can be defined (e.g. in your project file) to disable this behaviour. //#define RTMIDI_DO_NOT_ENSURE_UNIQUE_PORTNAMES // **************************************************************** // // // MidiInApi and MidiOutApi subclass prototypes. // // **************************************************************** // #if !defined(__LINUX_ALSA__) && !defined(__UNIX_JACK__) && !defined(__MACOSX_CORE__) && !defined(__WINDOWS_MM__) #define __RTMIDI_DUMMY__ #endif #if defined(__MACOSX_CORE__) class MidiInCore : public MidiInApi { public: MidiInCore(const std::string &clientName, unsigned int queueSizeLimit); ~MidiInCore(void); RtMidi::Api getCurrentApi(void) { return RtMidi::MACOSX_CORE; }; void openPort(unsigned int portNumber, const std::string &portName); void openVirtualPort(const std::string &portName); void closePort(void); void setClientName(const std::string &clientName); void setPortName(const std::string &portName); unsigned int getPortCount(void); std::string getPortName(unsigned int portNumber); protected: void initialize(const std::string &clientName); }; class MidiOutCore : public MidiOutApi { public: MidiOutCore(const std::string &clientName); ~MidiOutCore(void); RtMidi::Api getCurrentApi(void) { return RtMidi::MACOSX_CORE; }; void openPort(unsigned int portNumber, const std::string &portName); void openVirtualPort(const std::string &portName); void closePort(void); void setClientName(const std::string &clientName); void setPortName(const std::string &portName); unsigned int getPortCount(void); std::string getPortName(unsigned int portNumber); void sendMessage(const unsigned char 
*message, size_t size); protected: void initialize(const std::string &clientName); }; #endif #if defined(__UNIX_JACK__) class MidiInJack : public MidiInApi { public: MidiInJack(const std::string &clientName, unsigned int queueSizeLimit); ~MidiInJack(void); RtMidi::Api getCurrentApi(void) { return RtMidi::UNIX_JACK; }; void openPort(unsigned int portNumber, const std::string &portName); void openVirtualPort(const std::string &portName); void closePort(void); void setClientName(const std::string &clientName); void setPortName(const std::string &portName); unsigned int getPortCount(void); std::string getPortName(unsigned int portNumber); protected: std::string clientName; void connect(void); void initialize(const std::string &clientName); }; class MidiOutJack : public MidiOutApi { public: MidiOutJack(const std::string &clientName); ~MidiOutJack(void); RtMidi::Api getCurrentApi(void) { return RtMidi::UNIX_JACK; }; void openPort(unsigned int portNumber, const std::string &portName); void openVirtualPort(const std::string &portName); void closePort(void); void setClientName(const std::string &clientName); void setPortName(const std::string &portName); unsigned int getPortCount(void); std::string getPortName(unsigned int portNumber); void sendMessage(const unsigned char *message, size_t size); protected: std::string clientName; void connect(void); void initialize(const std::string &clientName); }; #endif #if defined(__LINUX_ALSA__) class MidiInAlsa : public MidiInApi { public: MidiInAlsa(const std::string &clientName, unsigned int queueSizeLimit); ~MidiInAlsa(void); RtMidi::Api getCurrentApi(void) { return RtMidi::LINUX_ALSA; }; void openPort(unsigned int portNumber, const std::string &portName); void openVirtualPort(const std::string &portName); void closePort(void); void setClientName(const std::string &clientName); void setPortName(const std::string &portName); unsigned int getPortCount(void); std::string getPortName(unsigned int portNumber); protected: void 
	initialize(const std::string &clientName);
};

class MidiOutAlsa : public MidiOutApi {
public:
	MidiOutAlsa(const std::string &clientName);
	~MidiOutAlsa(void);
	RtMidi::Api getCurrentApi(void) { return RtMidi::LINUX_ALSA; };
	void openPort(unsigned int portNumber, const std::string &portName);
	void openVirtualPort(const std::string &portName);
	void closePort(void);
	void setClientName(const std::string &clientName);
	void setPortName(const std::string &portName);
	unsigned int getPortCount(void);
	std::string getPortName(unsigned int portNumber);
	void sendMessage(const unsigned char *message, size_t size);

protected:
	void initialize(const std::string &clientName);
};

#endif

#if defined(__WINDOWS_MM__)

class MidiInWinMM : public MidiInApi {
public:
	MidiInWinMM(const std::string &clientName, unsigned int queueSizeLimit);
	~MidiInWinMM(void);
	RtMidi::Api getCurrentApi(void) { return RtMidi::WINDOWS_MM; };
	void openPort(unsigned int portNumber, const std::string &portName);
	void openVirtualPort(const std::string &portName);
	void closePort(void);
	void setClientName(const std::string &clientName);
	void setPortName(const std::string &portName);
	unsigned int getPortCount(void);
	std::string getPortName(unsigned int portNumber);

protected:
	void initialize(const std::string &clientName);
};

class MidiOutWinMM : public MidiOutApi {
public:
	MidiOutWinMM(const std::string &clientName);
	~MidiOutWinMM(void);
	RtMidi::Api getCurrentApi(void) { return RtMidi::WINDOWS_MM; };
	void openPort(unsigned int portNumber, const std::string &portName);
	void openVirtualPort(const std::string &portName);
	void closePort(void);
	void setClientName(const std::string &clientName);
	void setPortName(const std::string &portName);
	unsigned int getPortCount(void);
	std::string getPortName(unsigned int portNumber);
	void sendMessage(const unsigned char *message, size_t size);

protected:
	void initialize(const std::string &clientName);
};

#endif

#if defined(__RTMIDI_DUMMY__)

class MidiInDummy : public MidiInApi {
public:
	MidiInDummy(const std::string & /*clientName*/, unsigned int queueSizeLimit) :
			MidiInApi(queueSizeLimit) {
		errorString_ = "MidiInDummy: This class provides no functionality.";
		error(RtMidiError::WARNING, errorString_);
	}
	RtMidi::Api getCurrentApi(void) { return RtMidi::RTMIDI_DUMMY; }
	void openPort(unsigned int /*portNumber*/, const std::string & /*portName*/) {}
	void openVirtualPort(const std::string & /*portName*/) {}
	void closePort(void) {}
	void setClientName(const std::string & /*clientName*/){};
	void setPortName(const std::string & /*portName*/){};
	unsigned int getPortCount(void) { return 0; }
	std::string getPortName(unsigned int /*portNumber*/) { return ""; }

protected:
	void initialize(const std::string & /*clientName*/) {}
};

class MidiOutDummy : public MidiOutApi {
public:
	MidiOutDummy(const std::string & /*clientName*/) {
		errorString_ = "MidiOutDummy: This class provides no functionality.";
		error(RtMidiError::WARNING, errorString_);
	}
	RtMidi::Api getCurrentApi(void) { return RtMidi::RTMIDI_DUMMY; }
	void openPort(unsigned int /*portNumber*/, const std::string & /*portName*/) {}
	void openVirtualPort(const std::string & /*portName*/) {}
	void closePort(void) {}
	void setClientName(const std::string & /*clientName*/){};
	void setPortName(const std::string & /*portName*/){};
	unsigned int getPortCount(void) { return 0; }
	std::string getPortName(unsigned int /*portNumber*/) { return ""; }
	void sendMessage(const unsigned char * /*message*/, size_t /*size*/) {}

protected:
	void initialize(const std::string & /*clientName*/) {}
};

#endif

//*********************************************************************//
//  RtMidi Definitions
//*********************************************************************//

RtMidi ::RtMidi() :
		rtapi_(0) {
}

RtMidi ::~RtMidi() {
	delete rtapi_;
	rtapi_ = 0;
}

std::string RtMidi ::getVersion(void) throw() {
	return std::string(RTMIDI_VERSION);
}

// Define API names and display names.
// Must be in same order as API enum.
extern "C" {
const char *rtmidi_api_names[][2] = {
	{ "unspecified", "Unknown" },
	{ "core", "CoreMidi" },
	{ "alsa", "ALSA" },
	{ "jack", "Jack" },
	{ "winmm", "Windows MultiMedia" },
	{ "dummy", "Dummy" },
};
const unsigned int rtmidi_num_api_names =
		sizeof(rtmidi_api_names) / sizeof(rtmidi_api_names[0]);

// The order here will control the order of RtMidi's API search in
// the constructor.
extern "C" const RtMidi::Api rtmidi_compiled_apis[] = {
#if defined(__MACOSX_CORE__)
	RtMidi::MACOSX_CORE,
#endif
#if defined(__LINUX_ALSA__)
	RtMidi::LINUX_ALSA,
#endif
#if defined(__UNIX_JACK__)
	RtMidi::UNIX_JACK,
#endif
#if defined(__WINDOWS_MM__)
	RtMidi::WINDOWS_MM,
#endif
#if defined(__RTMIDI_DUMMY__)
	RtMidi::RTMIDI_DUMMY,
#endif
	RtMidi::UNSPECIFIED,
};
extern "C" const unsigned int rtmidi_num_compiled_apis =
		sizeof(rtmidi_compiled_apis) / sizeof(rtmidi_compiled_apis[0]) - 1;
}

// This is a compile-time check that rtmidi_num_api_names == RtMidi::NUM_APIS.
// If the build breaks here, check that they match.
template <bool b> class StaticAssert {
private:
	StaticAssert() {}
};
template <> class StaticAssert<true> {
public:
	StaticAssert() {}
};
class StaticAssertions {
	StaticAssertions() {
		StaticAssert<rtmidi_num_api_names == RtMidi::NUM_APIS>();
	}
};

void RtMidi ::getCompiledApi(std::vector<RtMidi::Api> &apis) throw() {
	apis = std::vector<RtMidi::Api>(rtmidi_compiled_apis, rtmidi_compiled_apis + rtmidi_num_compiled_apis);
}

std::string RtMidi ::getApiName(RtMidi::Api api) {
	if (api < 0 || api >= RtMidi::NUM_APIS)
		return "";
	return rtmidi_api_names[api][0];
}

std::string RtMidi ::getApiDisplayName(RtMidi::Api api) {
	if (api < 0 || api >= RtMidi::NUM_APIS)
		return "Unknown";
	return rtmidi_api_names[api][1];
}

RtMidi::Api RtMidi ::getCompiledApiByName(const std::string &name) {
	unsigned int i = 0;
	for (i = 0; i < rtmidi_num_compiled_apis; ++i)
		if (name == rtmidi_api_names[rtmidi_compiled_apis[i]][0])
			return rtmidi_compiled_apis[i];
	return RtMidi::UNSPECIFIED;
}

void RtMidi ::setClientName(const std::string &clientName) {
	rtapi_->setClientName(clientName);
}

void RtMidi ::setPortName(const std::string &portName) {
	rtapi_->setPortName(portName);
}

//*********************************************************************//
//  RtMidiIn Definitions
//*********************************************************************//

void RtMidiIn ::openMidiApi(RtMidi::Api api, const std::string &clientName, unsigned int queueSizeLimit) {
	delete rtapi_;
	rtapi_ = 0;

#if defined(__UNIX_JACK__)
	if (api == UNIX_JACK)
		rtapi_ = new MidiInJack(clientName, queueSizeLimit);
#endif
#if defined(__LINUX_ALSA__)
	if (api == LINUX_ALSA)
		rtapi_ = new MidiInAlsa(clientName, queueSizeLimit);
#endif
#if defined(__WINDOWS_MM__)
	if (api == WINDOWS_MM)
		rtapi_ = new MidiInWinMM(clientName, queueSizeLimit);
#endif
#if defined(__MACOSX_CORE__)
	if (api == MACOSX_CORE)
		rtapi_ = new MidiInCore(clientName, queueSizeLimit);
#endif
#if defined(__RTMIDI_DUMMY__)
	if (api == RTMIDI_DUMMY)
		rtapi_ = new MidiInDummy(clientName, queueSizeLimit);
#endif
}

RTMIDI_DLL_PUBLIC RtMidiIn ::RtMidiIn(RtMidi::Api api,
		const std::string &clientName, unsigned int queueSizeLimit) :
		RtMidi() {
	if (api != UNSPECIFIED) {
		// Attempt to open the specified API.
		openMidiApi(api, clientName, queueSizeLimit);
		if (rtapi_)
			return;

		// No compiled support for specified API value. Issue a warning
		// and continue as if no API was specified.
		std::cerr << "\nRtMidiIn: no compiled support for specified API argument!\n\n" << std::endl;
	}

	// Iterate through the compiled APIs and return as soon as we find
	// one with at least one port or we reach the end of the list.
	std::vector<RtMidi::Api> apis;
	getCompiledApi(apis);
	for (unsigned int i = 0; i < apis.size(); i++) {
		openMidiApi(apis[i], clientName, queueSizeLimit);
		if (rtapi_ && rtapi_->getPortCount())
			break;
	}

	if (rtapi_)
		return;

	// It should not be possible to get here because the preprocessor
	// definition __RTMIDI_DUMMY__ is automatically defined if no
	// API-specific definitions are passed to the compiler. But just in
	// case something weird happens, we'll throw an error.
	std::string errorText = "RtMidiIn: no compiled API support found ... critical error!!";
	throw(RtMidiError(errorText, RtMidiError::UNSPECIFIED));
}

RtMidiIn ::~RtMidiIn() throw() {
}

//*********************************************************************//
//  RtMidiOut Definitions
//*********************************************************************//

void RtMidiOut ::openMidiApi(RtMidi::Api api, const std::string &clientName) {
	delete rtapi_;
	rtapi_ = 0;

#if defined(__UNIX_JACK__)
	if (api == UNIX_JACK)
		rtapi_ = new MidiOutJack(clientName);
#endif
#if defined(__LINUX_ALSA__)
	if (api == LINUX_ALSA)
		rtapi_ = new MidiOutAlsa(clientName);
#endif
#if defined(__WINDOWS_MM__)
	if (api == WINDOWS_MM)
		rtapi_ = new MidiOutWinMM(clientName);
#endif
#if defined(__MACOSX_CORE__)
	if (api == MACOSX_CORE)
		rtapi_ = new MidiOutCore(clientName);
#endif
#if defined(__RTMIDI_DUMMY__)
	if (api == RTMIDI_DUMMY)
		rtapi_ = new MidiOutDummy(clientName);
#endif
}

RTMIDI_DLL_PUBLIC RtMidiOut ::RtMidiOut(RtMidi::Api api, const std::string &clientName) {
	if (api != UNSPECIFIED) {
		// Attempt to open the specified API.
		openMidiApi(api, clientName);
		if (rtapi_)
			return;

		// No compiled support for specified API value. Issue a warning
		// and continue as if no API was specified.
		std::cerr << "\nRtMidiOut: no compiled support for specified API argument!\n\n" << std::endl;
	}

	// Iterate through the compiled APIs and return as soon as we find
	// one with at least one port or we reach the end of the list.
	std::vector<RtMidi::Api> apis;
	getCompiledApi(apis);
	for (unsigned int i = 0; i < apis.size(); i++) {
		openMidiApi(apis[i], clientName);
		if (rtapi_ && rtapi_->getPortCount())
			break;
	}

	if (rtapi_)
		return;

	// It should not be possible to get here because the preprocessor
	// definition __RTMIDI_DUMMY__ is automatically defined if no
	// API-specific definitions are passed to the compiler. But just in
	// case something weird happens, we'll throw an error.
	std::string errorText = "RtMidiOut: no compiled API support found ... critical error!!";
	throw(RtMidiError(errorText, RtMidiError::UNSPECIFIED));
}

RtMidiOut ::~RtMidiOut() throw() {
}

//*********************************************************************//
//  Common MidiApi Definitions
//*********************************************************************//

MidiApi ::MidiApi(void) :
		apiData_(0), connected_(false), errorCallback_(0), firstErrorOccurred_(false), errorCallbackUserData_(0) {
}

MidiApi ::~MidiApi(void) {
}

void MidiApi ::setErrorCallback(RtMidiErrorCallback errorCallback, void *userData = 0) {
	errorCallback_ = errorCallback;
	errorCallbackUserData_ = userData;
}

void MidiApi ::error(RtMidiError::Type type, std::string errorString) {
	if (errorCallback_) {
		if (firstErrorOccurred_)
			return;

		firstErrorOccurred_ = true;
		const std::string errorMessage = errorString;
		errorCallback_(type, errorMessage, errorCallbackUserData_);
		firstErrorOccurred_ = false;
		return;
	}

	if (type == RtMidiError::WARNING) {
		std::cerr << '\n' << errorString << "\n\n";
	} else if (type == RtMidiError::DEBUG_WARNING) {
#if defined(__RTMIDI_DEBUG__)
		std::cerr << '\n' << errorString << "\n\n";
#endif
	} else {
		std::cerr << '\n' << errorString << "\n\n";
		throw RtMidiError(errorString, type);
	}
}

//*********************************************************************//
//  Common MidiInApi Definitions
//*********************************************************************//

MidiInApi ::MidiInApi(unsigned int queueSizeLimit) :
		MidiApi() {
	// Allocate the MIDI queue.
	inputData_.queue.ringSize = queueSizeLimit;
	if (inputData_.queue.ringSize > 0)
		inputData_.queue.ring = new MidiMessage[inputData_.queue.ringSize];
}

MidiInApi ::~MidiInApi(void) {
	// Delete the MIDI queue.
	if (inputData_.queue.ringSize > 0)
		delete[] inputData_.queue.ring;
}

void MidiInApi ::setCallback(RtMidiIn::RtMidiCallback callback, void *userData) {
	if (inputData_.usingCallback) {
		errorString_ = "MidiInApi::setCallback: a callback function is already set!";
		error(RtMidiError::WARNING, errorString_);
		return;
	}

	if (!callback) {
		errorString_ = "RtMidiIn::setCallback: callback function value is invalid!";
		error(RtMidiError::WARNING, errorString_);
		return;
	}

	inputData_.userCallback = callback;
	inputData_.userData = userData;
	inputData_.usingCallback = true;
}

void MidiInApi ::cancelCallback() {
	if (!inputData_.usingCallback) {
		errorString_ = "RtMidiIn::cancelCallback: no callback function was set!";
		error(RtMidiError::WARNING, errorString_);
		return;
	}

	inputData_.userCallback = 0;
	inputData_.userData = 0;
	inputData_.usingCallback = false;
}

void MidiInApi ::ignoreTypes(bool midiSysex, bool midiTime, bool midiSense) {
	inputData_.ignoreFlags = 0;
	if (midiSysex)
		inputData_.ignoreFlags = 0x01;
	if (midiTime)
		inputData_.ignoreFlags |= 0x02;
	if (midiSense)
		inputData_.ignoreFlags |= 0x04;
}

double MidiInApi ::getMessage(std::vector<unsigned char> *message) {
	message->clear();

	if (inputData_.usingCallback) {
		errorString_ = "RtMidiIn::getNextMessage: a user callback is currently set for this port.";
		error(RtMidiError::WARNING, errorString_);
		return 0.0;
	}

	double timeStamp;
	if (!inputData_.queue.pop(message, &timeStamp))
		return 0.0;

	return timeStamp;
}

unsigned int MidiInApi::MidiQueue::size(unsigned int *__back, unsigned int *__front) {
	// Access back/front members exactly once and make stack copies for
	// size calculation
	unsigned int _back = back, _front = front, _size;
	if (_back >= _front)
		_size = _back - _front;
	else
		_size = ringSize - _front + _back;

	// Return copies of back/front so no new and unsynchronized accesses
	// to member variables are needed.
	if (__back)
		*__back = _back;
	if (__front)
		*__front = _front;
	return _size;
}

// As long as we haven't reached our queue size limit, push the message.
bool MidiInApi::MidiQueue::push(const MidiInApi::MidiMessage &msg) {
	// Local stack copies of front/back
	unsigned int _back, _front, _size;

	// Get back/front indexes exactly once and calculate current size
	_size = size(&_back, &_front);

	if (_size < ringSize - 1) {
		ring[_back] = msg;
		back = (back + 1) % ringSize;
		return true;
	}

	return false;
}

bool MidiInApi::MidiQueue::pop(std::vector<unsigned char> *msg, double *timeStamp) {
	// Local stack copies of front/back
	unsigned int _back, _front, _size;

	// Get back/front indexes exactly once and calculate current size
	_size = size(&_back, &_front);

	if (_size == 0)
		return false;

	// Copy queued message to the vector pointer argument and then "pop" it.
	msg->assign(ring[_front].bytes.begin(), ring[_front].bytes.end());
	*timeStamp = ring[_front].timeStamp;

	// Update front
	front = (front + 1) % ringSize;
	return true;
}

//*********************************************************************//
//  Common MidiOutApi Definitions
//*********************************************************************//

MidiOutApi ::MidiOutApi(void) :
		MidiApi() {
}

MidiOutApi ::~MidiOutApi(void) {
}

// *************************************************** //
//
// OS/API-specific methods.
//
// *************************************************** //

#if defined(__MACOSX_CORE__)

// The CoreMIDI API is based on the use of a callback function for
// MIDI input. We convert the system specific time stamps to delta
// time values.

// OS-X CoreMIDI header files.
#include <CoreAudio/HostTime.h>
#include <CoreMIDI/CoreMIDI.h>
#include <CoreServices/CoreServices.h>

// A structure to hold variables related to the CoreMIDI API
// implementation.
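The MidiQueue above is a single-producer/single-consumer ring buffer that deliberately keeps one slot free (push succeeds only while `size < ringSize - 1`), so that `front == back` unambiguously means "empty". The index arithmetic can be sketched in isolation as follows; `RingIndex` and its members are illustrative names for this sketch only, not part of RtMidi:

```cpp
// Standalone sketch of MidiQueue's index math: a ring of `ringSize` slots
// stores at most ringSize - 1 items, so front == back always means "empty".
struct RingIndex {
	unsigned int front, back, ringSize;
	explicit RingIndex(unsigned int n) :
			front(0), back(0), ringSize(n) {}
	unsigned int size() const {
		// Same wrap-around size calculation as MidiQueue::size().
		return back >= front ? back - front : ringSize - front + back;
	}
	bool push() { // returns false when the queue is full
		if (size() >= ringSize - 1)
			return false;
		back = (back + 1) % ringSize;
		return true;
	}
	bool pop() { // returns false when the queue is empty
		if (size() == 0)
			return false;
		front = (front + 1) % ringSize;
		return true;
	}
};
```

With `ringSize = 4`, three pushes succeed and the fourth fails, which is the behaviour the "message queue limit reached" warnings in the input callbacks report.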
struct CoreMidiData {
	MIDIClientRef client;
	MIDIPortRef port;
	MIDIEndpointRef endpoint;
	MIDIEndpointRef destinationId;
	unsigned long long lastTime;
	MIDISysexSendRequest sysexreq;
};

//*********************************************************************//
//  API: OS-X
//  Class Definitions: MidiInCore
//*********************************************************************//

static void midiInputCallback(const MIDIPacketList *list, void *procRef, void * /*srcRef*/) {
	MidiInApi::RtMidiInData *data = static_cast<MidiInApi::RtMidiInData *>(procRef);
	CoreMidiData *apiData = static_cast<CoreMidiData *>(data->apiData);

	unsigned char status;
	unsigned short nBytes, iByte, size;
	unsigned long long time;

	bool &continueSysex = data->continueSysex;
	MidiInApi::MidiMessage &message = data->message;

	const MIDIPacket *packet = &list->packet[0];
	for (unsigned int i = 0; i < list->numPackets; ++i) {
		// My interpretation of the CoreMIDI documentation: all message
		// types, except sysex, are complete within a packet and there may
		// be several of them in a single packet. Sysex messages can be
		// broken across multiple packets and PacketLists but are bundled
		// alone within each packet (these packets do not contain other
		// message types). If sysex messages are split across multiple
		// MIDIPacketLists, they must be handled by multiple calls to this
		// function.

		nBytes = packet->length;
		if (nBytes == 0) {
			packet = MIDIPacketNext(packet);
			continue;
		}

		// Calculate time stamp.
		if (data->firstMessage) {
			message.timeStamp = 0.0;
			data->firstMessage = false;
		} else {
			time = packet->timeStamp;
			if (time == 0) { // this happens when receiving asynchronous sysex messages
				time = AudioGetCurrentHostTime();
			}
			time -= apiData->lastTime;
			time = AudioConvertHostTimeToNanos(time);
			if (!continueSysex)
				message.timeStamp = time * 0.000000001;
		}

		// Track whether any non-filtered messages were found in this
		// packet for timestamp calculation
		bool foundNonFiltered = false;

		iByte = 0;
		if (continueSysex) {
			// We have a continuing, segmented sysex message.
			if (!(data->ignoreFlags & 0x01)) {
				// If we're not ignoring sysex messages, copy the entire packet.
				for (unsigned int j = 0; j < nBytes; ++j)
					message.bytes.push_back(packet->data[j]);
			}
			continueSysex = packet->data[nBytes - 1] != 0xF7;

			if (!(data->ignoreFlags & 0x01) && !continueSysex) {
				// If not a continuing sysex message, invoke the user callback function or queue the message.
				if (data->usingCallback) {
					RtMidiIn::RtMidiCallback callback = (RtMidiIn::RtMidiCallback)data->userCallback;
					callback(message.timeStamp, &message.bytes, data->userData);
				} else {
					// As long as we haven't reached our queue size limit, push the message.
					if (!data->queue.push(message))
						std::cerr << "\nMidiInCore: message queue limit reached!!\n\n";
				}
				message.bytes.clear();
			}
		} else {
			while (iByte < nBytes) {
				size = 0;
				// We are expecting that the next byte in the packet is a status byte.
				status = packet->data[iByte];
				if (!(status & 0x80))
					break;
				// Determine the number of bytes in the MIDI message.
				if (status < 0xC0)
					size = 3;
				else if (status < 0xE0)
					size = 2;
				else if (status < 0xF0)
					size = 3;
				else if (status == 0xF0) {
					// A MIDI sysex
					if (data->ignoreFlags & 0x01) {
						size = 0;
						iByte = nBytes;
					} else
						size = nBytes - iByte;
					continueSysex = packet->data[nBytes - 1] != 0xF7;
				} else if (status == 0xF1) {
					// A MIDI time code message
					if (data->ignoreFlags & 0x02) {
						size = 0;
						iByte += 2;
					} else
						size = 2;
				} else if (status == 0xF2)
					size = 3;
				else if (status == 0xF3)
					size = 2;
				else if (status == 0xF8 && (data->ignoreFlags & 0x02)) {
					// A MIDI timing tick message and we're ignoring it.
					size = 0;
					iByte += 1;
				} else if (status == 0xFE && (data->ignoreFlags & 0x04)) {
					// A MIDI active sensing message and we're ignoring it.
					size = 0;
					iByte += 1;
				} else
					size = 1;

				// Copy the MIDI data to our vector.
				if (size) {
					foundNonFiltered = true;
					message.bytes.assign(&packet->data[iByte], &packet->data[iByte + size]);

					if (!continueSysex) {
						// If not a continuing sysex message, invoke the user callback function or queue the message.
						if (data->usingCallback) {
							RtMidiIn::RtMidiCallback callback = (RtMidiIn::RtMidiCallback)data->userCallback;
							callback(message.timeStamp, &message.bytes, data->userData);
						} else {
							// As long as we haven't reached our queue size limit, push the message.
							if (!data->queue.push(message))
								std::cerr << "\nMidiInCore: message queue limit reached!!\n\n";
						}
						message.bytes.clear();
					}
					iByte += size;
				}
			}
		}

		// Save the time of the last non-filtered message
		if (foundNonFiltered) {
			apiData->lastTime = packet->timeStamp;
			if (apiData->lastTime == 0) { // this happens when receiving asynchronous sysex messages
				apiData->lastTime = AudioGetCurrentHostTime();
			}
		}

		packet = MIDIPacketNext(packet);
	}
}

MidiInCore ::MidiInCore(const std::string &clientName, unsigned int queueSizeLimit) :
		MidiInApi(queueSizeLimit) {
	MidiInCore::initialize(clientName);
}

MidiInCore ::~MidiInCore(void) {
	// Close a connection if it exists.
	MidiInCore::closePort();

	// Cleanup.
	CoreMidiData *data = static_cast<CoreMidiData *>(apiData_);
	MIDIClientDispose(data->client);
	if (data->endpoint)
		MIDIEndpointDispose(data->endpoint);
	delete data;
}

void MidiInCore ::initialize(const std::string &clientName) {
	// Set up our client.
	MIDIClientRef client;
	CFStringRef name = CFStringCreateWithCString(NULL, clientName.c_str(), kCFStringEncodingASCII);
	OSStatus result = MIDIClientCreate(name, NULL, NULL, &client);
	if (result != noErr) {
		std::ostringstream ost;
		ost << "MidiInCore::initialize: error creating OS-X MIDI client object (" << result << ").";
		errorString_ = ost.str();
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}

	// Save our api-specific connection information.
	CoreMidiData *data = (CoreMidiData *)new CoreMidiData;
	data->client = client;
	data->endpoint = 0;
	apiData_ = (void *)data;
	inputData_.apiData = (void *)data;
	CFRelease(name);
}

void MidiInCore ::openPort(unsigned int portNumber, const std::string &portName) {
	if (connected_) {
		errorString_ = "MidiInCore::openPort: a valid connection already exists!";
		error(RtMidiError::WARNING, errorString_);
		return;
	}

	CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false);
	unsigned int nSrc = MIDIGetNumberOfSources();
	if (nSrc < 1) {
		errorString_ = "MidiInCore::openPort: no MIDI input sources found!";
		error(RtMidiError::NO_DEVICES_FOUND, errorString_);
		return;
	}

	if (portNumber >= nSrc) {
		std::ostringstream ost;
		ost << "MidiInCore::openPort: the 'portNumber' argument (" << portNumber << ") is invalid.";
		errorString_ = ost.str();
		error(RtMidiError::INVALID_PARAMETER, errorString_);
		return;
	}

	MIDIPortRef port;
	CoreMidiData *data = static_cast<CoreMidiData *>(apiData_);
	CFStringRef portNameRef = CFStringCreateWithCString(NULL, portName.c_str(), kCFStringEncodingASCII);
	OSStatus result = MIDIInputPortCreate(data->client, portNameRef, midiInputCallback, (void *)&inputData_, &port);
	CFRelease(portNameRef);

	if (result != noErr) {
		MIDIClientDispose(data->client);
		errorString_ = "MidiInCore::openPort: error creating OS-X MIDI input port.";
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}

	// Get the desired input source identifier.
	MIDIEndpointRef endpoint = MIDIGetSource(portNumber);
	if (endpoint == 0) {
		MIDIPortDispose(port);
		MIDIClientDispose(data->client);
		errorString_ = "MidiInCore::openPort: error getting MIDI input source reference.";
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}

	// Make the connection.
	result = MIDIPortConnectSource(port, endpoint, NULL);
	if (result != noErr) {
		MIDIPortDispose(port);
		MIDIClientDispose(data->client);
		errorString_ = "MidiInCore::openPort: error connecting OS-X MIDI input port.";
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}

	// Save our api-specific port information.
	data->port = port;

	connected_ = true;
}

void MidiInCore ::openVirtualPort(const std::string &portName) {
	CoreMidiData *data = static_cast<CoreMidiData *>(apiData_);

	// Create a virtual MIDI input destination.
	MIDIEndpointRef endpoint;
	CFStringRef portNameRef = CFStringCreateWithCString(NULL, portName.c_str(), kCFStringEncodingASCII);
	OSStatus result = MIDIDestinationCreate(data->client, portNameRef, midiInputCallback, (void *)&inputData_, &endpoint);
	CFRelease(portNameRef);

	if (result != noErr) {
		errorString_ = "MidiInCore::openVirtualPort: error creating virtual OS-X MIDI destination.";
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}

	// Save our api-specific connection information.
	data->endpoint = endpoint;
}

void MidiInCore ::closePort(void) {
	CoreMidiData *data = static_cast<CoreMidiData *>(apiData_);

	if (data->endpoint) {
		MIDIEndpointDispose(data->endpoint);
		data->endpoint = 0;
	}

	if (data->port) {
		MIDIPortDispose(data->port);
		data->port = 0;
	}

	connected_ = false;
}

void MidiInCore ::setClientName(const std::string &) {
	errorString_ = "MidiInCore::setClientName: this function is not implemented for the MACOSX_CORE API!";
	error(RtMidiError::WARNING, errorString_);
}

void MidiInCore ::setPortName(const std::string &) {
	errorString_ = "MidiInCore::setPortName: this function is not implemented for the MACOSX_CORE API!";
	error(RtMidiError::WARNING, errorString_);
}

unsigned int MidiInCore ::getPortCount() {
	CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false);
	return MIDIGetNumberOfSources();
}

// This function was submitted by Douglas Casey Tucker and apparently
// derived largely from PortMidi.
CFStringRef EndpointName(MIDIEndpointRef endpoint, bool isExternal) {
	CFMutableStringRef result = CFStringCreateMutable(NULL, 0);
	CFStringRef str;

	// Begin with the endpoint's name.
	str = NULL;
	MIDIObjectGetStringProperty(endpoint, kMIDIPropertyName, &str);
	if (str != NULL) {
		CFStringAppend(result, str);
		CFRelease(str);
	}

	MIDIEntityRef entity = 0;
	MIDIEndpointGetEntity(endpoint, &entity);
	if (entity == 0)
		// probably virtual
		return result;

	if (CFStringGetLength(result) == 0) {
		// endpoint name has zero length -- try the entity
		str = NULL;
		MIDIObjectGetStringProperty(entity, kMIDIPropertyName, &str);
		if (str != NULL) {
			CFStringAppend(result, str);
			CFRelease(str);
		}
	}

	// now consider the device's name
	MIDIDeviceRef device = 0;
	MIDIEntityGetDevice(entity, &device);
	if (device == 0)
		return result;

	str = NULL;
	MIDIObjectGetStringProperty(device, kMIDIPropertyName, &str);
	if (CFStringGetLength(result) == 0) {
		CFRelease(result);
		return str;
	}

	if (str != NULL) {
		// if an external device has only one entity, throw away
		// the endpoint name and just use the device name
		if (isExternal && MIDIDeviceGetNumberOfEntities(device) < 2) {
			CFRelease(result);
			return str;
		} else {
			if (CFStringGetLength(str) == 0) {
				CFRelease(str);
				return result;
			}
			// does the entity name already start with the device name?
			// (some drivers do this though they shouldn't)
			// if so, do not prepend
			if (CFStringCompareWithOptions(result, /* endpoint name */
						str /* device name */,
						CFRangeMake(0, CFStringGetLength(str)), 0) != kCFCompareEqualTo) {
				// prepend the device name to the entity name
				if (CFStringGetLength(result) > 0)
					CFStringInsert(result, 0, CFSTR(" "));
				CFStringInsert(result, 0, str);
			}
			CFRelease(str);
		}
	}

	return result;
}

// This function was submitted by Douglas Casey Tucker and apparently
// derived largely from PortMidi.
static CFStringRef ConnectedEndpointName(MIDIEndpointRef endpoint) {
	CFMutableStringRef result = CFStringCreateMutable(NULL, 0);
	CFStringRef str;
	OSStatus err;
	int i;

	// Does the endpoint have connections?
	CFDataRef connections = NULL;
	int nConnected = 0;
	bool anyStrings = false;
	err = MIDIObjectGetDataProperty(endpoint, kMIDIPropertyConnectionUniqueID, &connections);
	if (connections != NULL) {
		// It has connections, follow them
		// Concatenate the names of all connected devices
		nConnected = CFDataGetLength(connections) / sizeof(MIDIUniqueID);
		if (nConnected) {
			const SInt32 *pid = (const SInt32 *)(CFDataGetBytePtr(connections));
			for (i = 0; i < nConnected; ++i, ++pid) {
				MIDIUniqueID id = EndianS32_BtoN(*pid);
				MIDIObjectRef connObject;
				MIDIObjectType connObjectType;
				err = MIDIObjectFindByUniqueID(id, &connObject, &connObjectType);
				if (err == noErr) {
					if (connObjectType == kMIDIObjectType_ExternalSource ||
							connObjectType == kMIDIObjectType_ExternalDestination) {
						// Connected to an external device's endpoint (10.3 and later).
						str = EndpointName((MIDIEndpointRef)(connObject), true);
					} else {
						// Connected to an external device (10.2) (or something else, catch-all)
						str = NULL;
						MIDIObjectGetStringProperty(connObject, kMIDIPropertyName, &str);
					}
					if (str != NULL) {
						if (anyStrings)
							CFStringAppend(result, CFSTR(", "));
						else
							anyStrings = true;
						CFStringAppend(result, str);
						CFRelease(str);
					}
				}
			}
		}
		CFRelease(connections);
	}
	if (anyStrings)
		return result;

	CFRelease(result);

	// Here, either the endpoint had no connections, or we failed to obtain names
	return EndpointName(endpoint, false);
}

std::string MidiInCore ::getPortName(unsigned int portNumber) {
	CFStringRef nameRef;
	MIDIEndpointRef portRef;
	char name[128];

	std::string stringName;
	CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false);
	if (portNumber >= MIDIGetNumberOfSources()) {
		std::ostringstream ost;
		ost << "MidiInCore::getPortName: the 'portNumber' argument (" << portNumber << ") is invalid.";
		errorString_ = ost.str();
		error(RtMidiError::WARNING, errorString_);
		return stringName;
	}

	portRef = MIDIGetSource(portNumber);
	nameRef = ConnectedEndpointName(portRef);
	CFStringGetCString(nameRef, name, sizeof(name), kCFStringEncodingUTF8);
	CFRelease(nameRef);

	return stringName = name;
}

//*********************************************************************//
//  API: OS-X
//  Class Definitions: MidiOutCore
//*********************************************************************//

MidiOutCore ::MidiOutCore(const std::string &clientName) :
		MidiOutApi() {
	MidiOutCore::initialize(clientName);
}

MidiOutCore ::~MidiOutCore(void) {
	// Close a connection if it exists.
	MidiOutCore::closePort();

	// Cleanup.
	CoreMidiData *data = static_cast<CoreMidiData *>(apiData_);
	MIDIClientDispose(data->client);
	if (data->endpoint)
		MIDIEndpointDispose(data->endpoint);
	delete data;
}

void MidiOutCore ::initialize(const std::string &clientName) {
	// Set up our client.
MIDIClientRef client; CFStringRef name = CFStringCreateWithCString(NULL, clientName.c_str(), kCFStringEncodingASCII); OSStatus result = MIDIClientCreate(name, NULL, NULL, &client); if (result != noErr) { std::ostringstream ost; ost << "MidiInCore::initialize: error creating OS-X MIDI client object (" << result << ")."; errorString_ = ost.str(); error(RtMidiError::DRIVER_ERROR, errorString_); return; } // Save our api-specific connection information. CoreMidiData *data = (CoreMidiData *)new CoreMidiData; data->client = client; data->endpoint = 0; apiData_ = (void *)data; CFRelease(name); } unsigned int MidiOutCore ::getPortCount() { CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false); return MIDIGetNumberOfDestinations(); } std::string MidiOutCore ::getPortName(unsigned int portNumber) { CFStringRef nameRef; MIDIEndpointRef portRef; char name[128]; std::string stringName; CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false); if (portNumber >= MIDIGetNumberOfDestinations()) { std::ostringstream ost; ost << "MidiOutCore::getPortName: the 'portNumber' argument (" << portNumber << ") is invalid."; errorString_ = ost.str(); error(RtMidiError::WARNING, errorString_); return stringName; } portRef = MIDIGetDestination(portNumber); nameRef = ConnectedEndpointName(portRef); CFStringGetCString(nameRef, name, sizeof(name), kCFStringEncodingUTF8); CFRelease(nameRef); return stringName = name; } void MidiOutCore ::openPort(unsigned int portNumber, const std::string &portName) { if (connected_) { errorString_ = "MidiOutCore::openPort: a valid connection already exists!"; error(RtMidiError::WARNING, errorString_); return; } CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, false); unsigned int nDest = MIDIGetNumberOfDestinations(); if (nDest < 1) { errorString_ = "MidiOutCore::openPort: no MIDI output destinations found!"; error(RtMidiError::NO_DEVICES_FOUND, errorString_); return; } if (portNumber >= nDest) { std::ostringstream ost; ost << "MidiOutCore::openPort: the 'portNumber' 
argument (" << portNumber << ") is invalid."; errorString_ = ost.str(); error(RtMidiError::INVALID_PARAMETER, errorString_); return; } MIDIPortRef port; CoreMidiData *data = static_cast(apiData_); CFStringRef portNameRef = CFStringCreateWithCString(NULL, portName.c_str(), kCFStringEncodingASCII); OSStatus result = MIDIOutputPortCreate(data->client, portNameRef, &port); CFRelease(portNameRef); if (result != noErr) { MIDIClientDispose(data->client); errorString_ = "MidiOutCore::openPort: error creating OS-X MIDI output port."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } // Get the desired output port identifier. MIDIEndpointRef destination = MIDIGetDestination(portNumber); if (destination == 0) { MIDIPortDispose(port); MIDIClientDispose(data->client); errorString_ = "MidiOutCore::openPort: error getting MIDI output destination reference."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } // Save our api-specific connection information. data->port = port; data->destinationId = destination; connected_ = true; } void MidiOutCore ::closePort(void) { CoreMidiData *data = static_cast(apiData_); if (data->endpoint) { MIDIEndpointDispose(data->endpoint); data->endpoint = 0; } if (data->port) { MIDIPortDispose(data->port); data->port = 0; } connected_ = false; } void MidiOutCore ::setClientName(const std::string &) { errorString_ = "MidiOutCore::setClientName: this function is not implemented for the MACOSX_CORE API!"; error(RtMidiError::WARNING, errorString_); } void MidiOutCore ::setPortName(const std::string &) { errorString_ = "MidiOutCore::setPortName: this function is not implemented for the MACOSX_CORE API!"; error(RtMidiError::WARNING, errorString_); } void MidiOutCore ::openVirtualPort(const std::string &portName) { CoreMidiData *data = static_cast(apiData_); if (data->endpoint) { errorString_ = "MidiOutCore::openVirtualPort: a virtual output port already exists!"; error(RtMidiError::WARNING, errorString_); return; } // Create a virtual MIDI 
output source. MIDIEndpointRef endpoint; CFStringRef portNameRef = CFStringCreateWithCString(NULL, portName.c_str(), kCFStringEncodingASCII); OSStatus result = MIDISourceCreate(data->client, portNameRef, &endpoint); CFRelease(portNameRef); if (result != noErr) { errorString_ = "MidiOutCore::initialize: error creating OS-X virtual MIDI source."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } // Save our api-specific connection information. data->endpoint = endpoint; } void MidiOutCore ::sendMessage(const unsigned char *message, size_t size) { // We use the MIDISendSysex() function to asynchronously send sysex // messages. Otherwise, we use a single CoreMidi MIDIPacket. unsigned int nBytes = static_cast<unsigned int>(size); if (nBytes == 0) { errorString_ = "MidiOutCore::sendMessage: no data in message argument!"; error(RtMidiError::WARNING, errorString_); return; } MIDITimeStamp timeStamp = AudioGetCurrentHostTime(); CoreMidiData *data = static_cast<CoreMidiData *>(apiData_); OSStatus result; if (message[0] != 0xF0 && nBytes > 3) { errorString_ = "MidiOutCore::sendMessage: message format problem ... not sysex but > 3 bytes?"; error(RtMidiError::WARNING, errorString_); return; } Byte buffer[nBytes + (sizeof(MIDIPacketList))]; ByteCount listSize = sizeof(buffer); MIDIPacketList *packetList = (MIDIPacketList *)buffer; MIDIPacket *packet = MIDIPacketListInit(packetList); ByteCount remainingBytes = nBytes; while (remainingBytes && packet) { ByteCount bytesForPacket = remainingBytes > 65535 ? 65535 : remainingBytes; // 65535 = maximum size of a MIDIPacket const Byte *dataStartPtr = (const Byte *)&message[nBytes - remainingBytes]; packet = MIDIPacketListAdd(packetList, listSize, packet, timeStamp, bytesForPacket, dataStartPtr); remainingBytes -= bytesForPacket; } if (!packet) { errorString_ = "MidiOutCore::sendMessage: could not allocate packet list"; error(RtMidiError::DRIVER_ERROR, errorString_); return; } // Send to any destinations that may have connected to us.
if (data->endpoint) { result = MIDIReceived(data->endpoint, packetList); if (result != noErr) { errorString_ = "MidiOutCore::sendMessage: error sending MIDI to virtual destinations."; error(RtMidiError::WARNING, errorString_); } } // And send to an explicit destination port if we're connected. if (connected_) { result = MIDISend(data->port, data->destinationId, packetList); if (result != noErr) { errorString_ = "MidiOutCore::sendMessage: error sending MIDI message to port."; error(RtMidiError::WARNING, errorString_); } } } #endif // __MACOSX_CORE__ //*********************************************************************// // API: LINUX ALSA SEQUENCER //*********************************************************************// // API information found at: // - http://www.alsa-project.org/documentation.php#Library #if defined(__LINUX_ALSA__) // The ALSA Sequencer API is based on the use of a callback function for // MIDI input. // // Thanks to Pedro Lopez-Cabanillas for help with the ALSA sequencer // time stamps and other assorted fixes!!! // If you don't need timestamping for incoming MIDI events, define the // preprocessor definition AVOID_TIMESTAMPING to save resources // associated with the ALSA sequencer queues. #include <pthread.h> #include <sys/time.h> // ALSA header file. #include <alsa/asoundlib.h> // A structure to hold variables related to the ALSA API // implementation.
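The CoreMIDI sendMessage() above splits a long sysex message into MIDIPackets of at most 65535 bytes each. The chunking arithmetic can be checked in isolation; a minimal, platform-independent sketch (splitChunks is a hypothetical helper for illustration, not part of RtMidi):

```cpp
#include <cstddef>
#include <vector>

// Split a message of nBytes into chunk sizes capped at the MIDIPacket
// maximum (65535 bytes), mirroring the while-loop in
// MidiOutCore::sendMessage. Hypothetical helper, for illustration only.
static std::vector<std::size_t> splitChunks(std::size_t nBytes) {
	std::vector<std::size_t> chunks;
	std::size_t remaining = nBytes;
	while (remaining) {
		std::size_t take = remaining > 65535 ? 65535 : remaining;
		chunks.push_back(take);
		remaining -= take;
	}
	return chunks;
}
```

A 70000-byte sysex message would thus become one full 65535-byte packet followed by a 4465-byte remainder packet.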
struct AlsaMidiData { snd_seq_t *seq; unsigned int portNum; int vport; snd_seq_port_subscribe_t *subscription; snd_midi_event_t *coder; unsigned int bufferSize; unsigned char *buffer; pthread_t thread; pthread_t dummy_thread_id; snd_seq_real_time_t lastTime; int queue_id; // an input queue is needed to get timestamped events int trigger_fds[2]; }; #define PORT_TYPE(pinfo, bits) ((snd_seq_port_info_get_capability(pinfo) & (bits)) == (bits)) //*********************************************************************// // API: LINUX ALSA // Class Definitions: MidiInAlsa //*********************************************************************// static void *alsaMidiHandler(void *ptr) { MidiInApi::RtMidiInData *data = static_cast<MidiInApi::RtMidiInData *>(ptr); AlsaMidiData *apiData = static_cast<AlsaMidiData *>(data->apiData); long nBytes; double time; bool continueSysex = false; bool doDecode = false; MidiInApi::MidiMessage message; int poll_fd_count; struct pollfd *poll_fds; snd_seq_event_t *ev; int result; apiData->bufferSize = 32; result = snd_midi_event_new(0, &apiData->coder); if (result < 0) { data->doInput = false; std::cerr << "\nMidiInAlsa::alsaMidiHandler: error initializing MIDI event parser!\n\n"; return 0; } unsigned char *buffer = (unsigned char *)malloc(apiData->bufferSize); if (buffer == NULL) { data->doInput = false; snd_midi_event_free(apiData->coder); apiData->coder = 0; std::cerr << "\nMidiInAlsa::alsaMidiHandler: error initializing buffer memory!\n\n"; return 0; } snd_midi_event_init(apiData->coder); snd_midi_event_no_status(apiData->coder, 1); // suppress running status messages poll_fd_count = snd_seq_poll_descriptors_count(apiData->seq, POLLIN) + 1; poll_fds = (struct pollfd *)alloca(poll_fd_count * sizeof(struct pollfd)); snd_seq_poll_descriptors(apiData->seq, poll_fds + 1, poll_fd_count - 1, POLLIN); poll_fds[0].fd = apiData->trigger_fds[0]; poll_fds[0].events = POLLIN; while (data->doInput) { if (snd_seq_event_input_pending(apiData->seq, 1) == 0) { // No data pending if (poll(poll_fds,
poll_fd_count, -1) >= 0) { if (poll_fds[0].revents & POLLIN) { bool dummy; int res = read(poll_fds[0].fd, &dummy, sizeof(dummy)); (void)res; } } continue; } // If here, there should be data. result = snd_seq_event_input(apiData->seq, &ev); if (result == -ENOSPC) { std::cerr << "\nMidiInAlsa::alsaMidiHandler: MIDI input buffer overrun!\n\n"; continue; } else if (result <= 0) { std::cerr << "\nMidiInAlsa::alsaMidiHandler: unknown MIDI input error!\n"; perror("System reports"); continue; } // This is a bit weird, but we now have to decode an ALSA MIDI // event (back) into MIDI bytes. We'll ignore non-MIDI types. if (!continueSysex) message.bytes.clear(); doDecode = false; switch (ev->type) { case SND_SEQ_EVENT_PORT_SUBSCRIBED: #if defined(__RTMIDI_DEBUG__) std::cout << "MidiInAlsa::alsaMidiHandler: port connection made!\n"; #endif break; case SND_SEQ_EVENT_PORT_UNSUBSCRIBED: #if defined(__RTMIDI_DEBUG__) std::cerr << "MidiInAlsa::alsaMidiHandler: port connection has closed!\n"; std::cout << "sender = " << (int)ev->data.connect.sender.client << ":" << (int)ev->data.connect.sender.port << ", dest = " << (int)ev->data.connect.dest.client << ":" << (int)ev->data.connect.dest.port << std::endl; #endif break; case SND_SEQ_EVENT_QFRAME: // MIDI time code if (!(data->ignoreFlags & 0x02)) doDecode = true; break; case SND_SEQ_EVENT_TICK: // 0xF9 ... MIDI timing tick if (!(data->ignoreFlags & 0x02)) doDecode = true; break; case SND_SEQ_EVENT_CLOCK: // 0xF8 ... 
MIDI timing (clock) tick if (!(data->ignoreFlags & 0x02)) doDecode = true; break; case SND_SEQ_EVENT_SENSING: // Active sensing if (!(data->ignoreFlags & 0x04)) doDecode = true; break; case SND_SEQ_EVENT_SYSEX: if ((data->ignoreFlags & 0x01)) break; if (ev->data.ext.len > apiData->bufferSize) { apiData->bufferSize = ev->data.ext.len; free(buffer); buffer = (unsigned char *)malloc(apiData->bufferSize); if (buffer == NULL) { data->doInput = false; std::cerr << "\nMidiInAlsa::alsaMidiHandler: error resizing buffer memory!\n\n"; break; } } doDecode = true; break; default: doDecode = true; } if (doDecode) { nBytes = snd_midi_event_decode(apiData->coder, buffer, apiData->bufferSize, ev); if (nBytes > 0) { // The ALSA sequencer has a maximum buffer size for MIDI sysex // events of 256 bytes. If a device sends sysex messages larger // than this, they are segmented into 256 byte chunks. So, // we'll watch for this and concatenate sysex chunks into a // single sysex message if necessary. if (!continueSysex) message.bytes.assign(buffer, &buffer[nBytes]); else message.bytes.insert(message.bytes.end(), buffer, &buffer[nBytes]); continueSysex = ((ev->type == SND_SEQ_EVENT_SYSEX) && (message.bytes.back() != 0xF7)); if (!continueSysex) { // Calculate the time stamp: message.timeStamp = 0.0; // Method 1: Use the system time. //(void)gettimeofday(&tv, (struct timezone *)NULL); //time = (tv.tv_sec * 1000000) + tv.tv_usec; // Method 2: Use the ALSA sequencer event time data. // (thanks to Pedro Lopez-Cabanillas!). // Using method from: // https://www.gnu.org/software/libc/manual/html_node/Elapsed-Time.html // Perform the carry for the later subtraction by updating y. // Temp var y is timespec because computation requires signed types, // while snd_seq_real_time_t has unsigned types. 
snd_seq_real_time_t &x(ev->time.time); struct timespec y; y.tv_nsec = apiData->lastTime.tv_nsec; y.tv_sec = apiData->lastTime.tv_sec; if (x.tv_nsec < y.tv_nsec) { int nsec = (y.tv_nsec - (int)x.tv_nsec) / 1000000000 + 1; y.tv_nsec -= 1000000000 * nsec; y.tv_sec += nsec; } if (x.tv_nsec - y.tv_nsec > 1000000000) { int nsec = ((int)x.tv_nsec - y.tv_nsec) / 1000000000; y.tv_nsec += 1000000000 * nsec; y.tv_sec -= nsec; } // Compute the time difference. time = (int)x.tv_sec - y.tv_sec + ((int)x.tv_nsec - y.tv_nsec) * 1e-9; apiData->lastTime = ev->time.time; if (data->firstMessage == true) data->firstMessage = false; else message.timeStamp = time; } else { #if defined(__RTMIDI_DEBUG__) std::cerr << "\nMidiInAlsa::alsaMidiHandler: event parsing error or not a MIDI event!\n\n"; #endif } } } snd_seq_free_event(ev); if (message.bytes.size() == 0 || continueSysex) continue; if (data->usingCallback) { RtMidiIn::RtMidiCallback callback = (RtMidiIn::RtMidiCallback)data->userCallback; callback(message.timeStamp, &message.bytes, data->userData); } else { // As long as we haven't reached our queue size limit, push the message. if (!data->queue.push(message)) std::cerr << "\nMidiInAlsa: message queue limit reached!!\n\n"; } } if (buffer) free(buffer); snd_midi_event_free(apiData->coder); apiData->coder = 0; apiData->thread = apiData->dummy_thread_id; return 0; } MidiInAlsa ::MidiInAlsa(const std::string &clientName, unsigned int queueSizeLimit) : MidiInApi(queueSizeLimit) { MidiInAlsa::initialize(clientName); } MidiInAlsa ::~MidiInAlsa() { // Close a connection if it exists. MidiInAlsa::closePort(); // Shutdown the input thread. AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); if (inputData_.doInput) { inputData_.doInput = false; int res = write(data->trigger_fds[1], &inputData_.doInput, sizeof(inputData_.doInput)); (void)res; if (!pthread_equal(data->thread, data->dummy_thread_id)) pthread_join(data->thread, NULL); } // Cleanup.
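The ALSA timestamp delta above uses the carry-based timespec subtraction from the GNU libc manual, because snd_seq_real_time_t carries unsigned fields. The arithmetic can be exercised in isolation; a minimal sketch with a plain stand-in struct (Ts and timeDelta are hypothetical names, not part of RtMidi):

```cpp
// Plain stand-in for the tv_sec/tv_nsec pair (hypothetical type).
struct Ts { long tv_sec; long tv_nsec; };

// Delta in seconds between two real-time stamps x (later) and y
// (earlier), using the same carry-based subtraction as the ALSA
// input handler above.
static double timeDelta(Ts x, Ts y) {
	// Perform the carry for the subtraction by updating y.
	if (x.tv_nsec < y.tv_nsec) {
		long nsec = (y.tv_nsec - x.tv_nsec) / 1000000000 + 1;
		y.tv_nsec -= 1000000000 * nsec;
		y.tv_sec += nsec;
	}
	if (x.tv_nsec - y.tv_nsec > 1000000000) {
		long nsec = (x.tv_nsec - y.tv_nsec) / 1000000000;
		y.tv_nsec += 1000000000 * nsec;
		y.tv_sec -= nsec;
	}
	// Compute the time difference in seconds.
	return (double)(x.tv_sec - y.tv_sec) + (double)(x.tv_nsec - y.tv_nsec) * 1e-9;
}
```

For example, a stamp of 2 s + 500 ms minus a stamp of 1 s + 800 ms yields 0.7 s, with the nanosecond borrow handled by the first branch.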
close(data->trigger_fds[0]); close(data->trigger_fds[1]); if (data->vport >= 0) snd_seq_delete_port(data->seq, data->vport); #ifndef AVOID_TIMESTAMPING snd_seq_free_queue(data->seq, data->queue_id); #endif snd_seq_close(data->seq); delete data; } void MidiInAlsa ::initialize(const std::string &clientName) { // Set up the ALSA sequencer client. snd_seq_t *seq; int result = snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, SND_SEQ_NONBLOCK); if (result < 0) { errorString_ = "MidiInAlsa::initialize: error creating ALSA sequencer client object."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } // Set client name. snd_seq_set_client_name(seq, clientName.c_str()); // Save our api-specific connection information. AlsaMidiData *data = (AlsaMidiData *)new AlsaMidiData; data->seq = seq; data->portNum = -1; data->vport = -1; data->subscription = 0; data->dummy_thread_id = pthread_self(); data->thread = data->dummy_thread_id; data->trigger_fds[0] = -1; data->trigger_fds[1] = -1; apiData_ = (void *)data; inputData_.apiData = (void *)data; if (pipe(data->trigger_fds) == -1) { errorString_ = "MidiInAlsa::initialize: error creating pipe objects."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } // Create the input queue #ifndef AVOID_TIMESTAMPING data->queue_id = snd_seq_alloc_named_queue(seq, "RtMidi Queue"); // Set arbitrary tempo (mm=100) and resolution (240) snd_seq_queue_tempo_t *qtempo; snd_seq_queue_tempo_alloca(&qtempo); snd_seq_queue_tempo_set_tempo(qtempo, 600000); snd_seq_queue_tempo_set_ppq(qtempo, 240); snd_seq_set_queue_tempo(data->seq, data->queue_id, qtempo); snd_seq_drain_output(data->seq); #endif } // This function is used to count or get the pinfo structure for a given port number. 
unsigned int portInfo(snd_seq_t *seq, snd_seq_port_info_t *pinfo, unsigned int type, int portNumber) { snd_seq_client_info_t *cinfo; int client; int count = 0; snd_seq_client_info_alloca(&cinfo); snd_seq_client_info_set_client(cinfo, -1); while (snd_seq_query_next_client(seq, cinfo) >= 0) { client = snd_seq_client_info_get_client(cinfo); if (client == 0) continue; // Reset query info snd_seq_port_info_set_client(pinfo, client); snd_seq_port_info_set_port(pinfo, -1); while (snd_seq_query_next_port(seq, pinfo) >= 0) { unsigned int atyp = snd_seq_port_info_get_type(pinfo); if (((atyp & SND_SEQ_PORT_TYPE_MIDI_GENERIC) == 0) && ((atyp & SND_SEQ_PORT_TYPE_SYNTH) == 0) && ((atyp & SND_SEQ_PORT_TYPE_APPLICATION) == 0)) continue; unsigned int caps = snd_seq_port_info_get_capability(pinfo); if ((caps & type) != type) continue; if (count == portNumber) return 1; ++count; } } // If a negative portNumber was used, return the port count. if (portNumber < 0) return count; return 0; } unsigned int MidiInAlsa ::getPortCount() { snd_seq_port_info_t *pinfo; snd_seq_port_info_alloca(&pinfo); AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); return portInfo(data->seq, pinfo, SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ, -1); } std::string MidiInAlsa ::getPortName(unsigned int portNumber) { snd_seq_client_info_t *cinfo; snd_seq_port_info_t *pinfo; snd_seq_client_info_alloca(&cinfo); snd_seq_port_info_alloca(&pinfo); std::string stringName; AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); if (portInfo(data->seq, pinfo, SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ, (int)portNumber)) { int cnum = snd_seq_port_info_get_client(pinfo); snd_seq_get_any_client_info(data->seq, cnum, cinfo); std::ostringstream os; os << snd_seq_client_info_get_name(cinfo); os << ":"; os << snd_seq_port_info_get_name(pinfo); os << " "; // These lines added to make sure devices are listed os << snd_seq_port_info_get_client(pinfo); // with full portnames added to ensure individual device names os << ":"; os <<
snd_seq_port_info_get_port(pinfo); stringName = os.str(); return stringName; } // If we get here, we didn't find a match. errorString_ = "MidiInAlsa::getPortName: error looking for port name!"; error(RtMidiError::WARNING, errorString_); return stringName; } void MidiInAlsa ::openPort(unsigned int portNumber, const std::string &portName) { if (connected_) { errorString_ = "MidiInAlsa::openPort: a valid connection already exists!"; error(RtMidiError::WARNING, errorString_); return; } unsigned int nSrc = this->getPortCount(); if (nSrc < 1) { errorString_ = "MidiInAlsa::openPort: no MIDI input sources found!"; error(RtMidiError::NO_DEVICES_FOUND, errorString_); return; } snd_seq_port_info_t *src_pinfo; snd_seq_port_info_alloca(&src_pinfo); AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); if (portInfo(data->seq, src_pinfo, SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ, (int)portNumber) == 0) { std::ostringstream ost; ost << "MidiInAlsa::openPort: the 'portNumber' argument (" << portNumber << ") is invalid."; errorString_ = ost.str(); error(RtMidiError::INVALID_PARAMETER, errorString_); return; } snd_seq_addr_t sender, receiver; sender.client = snd_seq_port_info_get_client(src_pinfo); sender.port = snd_seq_port_info_get_port(src_pinfo); receiver.client = snd_seq_client_id(data->seq); snd_seq_port_info_t *pinfo; snd_seq_port_info_alloca(&pinfo); if (data->vport < 0) { snd_seq_port_info_set_client(pinfo, 0); snd_seq_port_info_set_port(pinfo, 0); snd_seq_port_info_set_capability(pinfo, SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE); snd_seq_port_info_set_type(pinfo, SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION); snd_seq_port_info_set_midi_channels(pinfo, 16); #ifndef AVOID_TIMESTAMPING snd_seq_port_info_set_timestamping(pinfo, 1); snd_seq_port_info_set_timestamp_real(pinfo, 1); snd_seq_port_info_set_timestamp_queue(pinfo, data->queue_id); #endif snd_seq_port_info_set_name(pinfo, portName.c_str()); data->vport = snd_seq_create_port(data->seq, pinfo);
if (data->vport < 0) { errorString_ = "MidiInAlsa::openPort: ALSA error creating input port."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } data->vport = snd_seq_port_info_get_port(pinfo); } receiver.port = data->vport; if (!data->subscription) { // Make subscription if (snd_seq_port_subscribe_malloc(&data->subscription) < 0) { errorString_ = "MidiInAlsa::openPort: ALSA error allocating port subscription."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } snd_seq_port_subscribe_set_sender(data->subscription, &sender); snd_seq_port_subscribe_set_dest(data->subscription, &receiver); if (snd_seq_subscribe_port(data->seq, data->subscription)) { snd_seq_port_subscribe_free(data->subscription); data->subscription = 0; errorString_ = "MidiInAlsa::openPort: ALSA error making port connection."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } } if (inputData_.doInput == false) { // Start the input queue #ifndef AVOID_TIMESTAMPING snd_seq_start_queue(data->seq, data->queue_id, NULL); snd_seq_drain_output(data->seq); #endif // Start our MIDI input thread.
pthread_attr_t attr; pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE); pthread_attr_setschedpolicy(&attr, SCHED_OTHER); inputData_.doInput = true; int err = pthread_create(&data->thread, &attr, alsaMidiHandler, &inputData_); pthread_attr_destroy(&attr); if (err) { snd_seq_unsubscribe_port(data->seq, data->subscription); snd_seq_port_subscribe_free(data->subscription); data->subscription = 0; inputData_.doInput = false; errorString_ = "MidiInAlsa::openPort: error starting MIDI input thread!"; error(RtMidiError::THREAD_ERROR, errorString_); return; } } connected_ = true; } void MidiInAlsa ::openVirtualPort(const std::string &portName) { AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); if (data->vport < 0) { snd_seq_port_info_t *pinfo; snd_seq_port_info_alloca(&pinfo); snd_seq_port_info_set_capability(pinfo, SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE); snd_seq_port_info_set_type(pinfo, SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION); snd_seq_port_info_set_midi_channels(pinfo, 16); #ifndef AVOID_TIMESTAMPING snd_seq_port_info_set_timestamping(pinfo, 1); snd_seq_port_info_set_timestamp_real(pinfo, 1); snd_seq_port_info_set_timestamp_queue(pinfo, data->queue_id); #endif snd_seq_port_info_set_name(pinfo, portName.c_str()); data->vport = snd_seq_create_port(data->seq, pinfo); if (data->vport < 0) { errorString_ = "MidiInAlsa::openVirtualPort: ALSA error creating virtual port."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } data->vport = snd_seq_port_info_get_port(pinfo); } if (inputData_.doInput == false) { // Wait for old thread to stop, if still running if (!pthread_equal(data->thread, data->dummy_thread_id)) pthread_join(data->thread, NULL); // Start the input queue #ifndef AVOID_TIMESTAMPING snd_seq_start_queue(data->seq, data->queue_id, NULL); snd_seq_drain_output(data->seq); #endif // Start our MIDI input thread.
pthread_attr_t attr; pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE); pthread_attr_setschedpolicy(&attr, SCHED_OTHER); inputData_.doInput = true; int err = pthread_create(&data->thread, &attr, alsaMidiHandler, &inputData_); pthread_attr_destroy(&attr); if (err) { if (data->subscription) { snd_seq_unsubscribe_port(data->seq, data->subscription); snd_seq_port_subscribe_free(data->subscription); data->subscription = 0; } inputData_.doInput = false; errorString_ = "MidiInAlsa::openPort: error starting MIDI input thread!"; error(RtMidiError::THREAD_ERROR, errorString_); return; } } } void MidiInAlsa ::closePort(void) { AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); if (connected_) { if (data->subscription) { snd_seq_unsubscribe_port(data->seq, data->subscription); snd_seq_port_subscribe_free(data->subscription); data->subscription = 0; } // Stop the input queue #ifndef AVOID_TIMESTAMPING snd_seq_stop_queue(data->seq, data->queue_id, NULL); snd_seq_drain_output(data->seq); #endif connected_ = false; } // Stop thread to avoid triggering the callback, while the port is intended to be closed if (inputData_.doInput) { inputData_.doInput = false; int res = write(data->trigger_fds[1], &inputData_.doInput, sizeof(inputData_.doInput)); (void)res; if (!pthread_equal(data->thread, data->dummy_thread_id)) pthread_join(data->thread, NULL); } } void MidiInAlsa ::setClientName(const std::string &clientName) { AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); snd_seq_set_client_name(data->seq, clientName.c_str()); } void MidiInAlsa ::setPortName(const std::string &portName) { AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); snd_seq_port_info_t *pinfo; snd_seq_port_info_alloca(&pinfo); snd_seq_get_port_info(data->seq, data->vport, pinfo); snd_seq_port_info_set_name(pinfo, portName.c_str()); snd_seq_set_port_info(data->seq, data->vport, pinfo); } //*********************************************************************// // API: LINUX ALSA // Class Definitions:
MidiOutAlsa //*********************************************************************// MidiOutAlsa ::MidiOutAlsa(const std::string &clientName) : MidiOutApi() { MidiOutAlsa::initialize(clientName); } MidiOutAlsa ::~MidiOutAlsa() { // Close a connection if it exists. MidiOutAlsa::closePort(); // Cleanup. AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); if (data->vport >= 0) snd_seq_delete_port(data->seq, data->vport); if (data->coder) snd_midi_event_free(data->coder); if (data->buffer) free(data->buffer); snd_seq_close(data->seq); delete data; } void MidiOutAlsa ::initialize(const std::string &clientName) { // Set up the ALSA sequencer client. snd_seq_t *seq; int result1 = snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, SND_SEQ_NONBLOCK); if (result1 < 0) { errorString_ = "MidiOutAlsa::initialize: error creating ALSA sequencer client object."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } // Set client name. snd_seq_set_client_name(seq, clientName.c_str()); // Save our api-specific connection information.
AlsaMidiData *data = (AlsaMidiData *)new AlsaMidiData; data->seq = seq; data->portNum = -1; data->vport = -1; data->bufferSize = 32; data->coder = 0; data->buffer = 0; int result = snd_midi_event_new(data->bufferSize, &data->coder); if (result < 0) { delete data; errorString_ = "MidiOutAlsa::initialize: error initializing MIDI event parser!\n\n"; error(RtMidiError::DRIVER_ERROR, errorString_); return; } data->buffer = (unsigned char *)malloc(data->bufferSize); if (data->buffer == NULL) { delete data; errorString_ = "MidiOutAlsa::initialize: error allocating buffer memory!\n\n"; error(RtMidiError::MEMORY_ERROR, errorString_); return; } snd_midi_event_init(data->coder); apiData_ = (void *)data; } unsigned int MidiOutAlsa ::getPortCount() { snd_seq_port_info_t *pinfo; snd_seq_port_info_alloca(&pinfo); AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); return portInfo(data->seq, pinfo, SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE, -1); } std::string MidiOutAlsa ::getPortName(unsigned int portNumber) { snd_seq_client_info_t *cinfo; snd_seq_port_info_t *pinfo; snd_seq_client_info_alloca(&cinfo); snd_seq_port_info_alloca(&pinfo); std::string stringName; AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); if (portInfo(data->seq, pinfo, SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE, (int)portNumber)) { int cnum = snd_seq_port_info_get_client(pinfo); snd_seq_get_any_client_info(data->seq, cnum, cinfo); std::ostringstream os; os << snd_seq_client_info_get_name(cinfo); os << ":"; os << snd_seq_port_info_get_name(pinfo); os << " "; // These lines added to make sure devices are listed os << snd_seq_port_info_get_client(pinfo); // with full portnames added to ensure individual device names os << ":"; os << snd_seq_port_info_get_port(pinfo); stringName = os.str(); return stringName; } // If we get here, we didn't find a match.
errorString_ = "MidiOutAlsa::getPortName: error looking for port name!"; error(RtMidiError::WARNING, errorString_); return stringName; } void MidiOutAlsa ::openPort(unsigned int portNumber, const std::string &portName) { if (connected_) { errorString_ = "MidiOutAlsa::openPort: a valid connection already exists!"; error(RtMidiError::WARNING, errorString_); return; } unsigned int nSrc = this->getPortCount(); if (nSrc < 1) { errorString_ = "MidiOutAlsa::openPort: no MIDI output sources found!"; error(RtMidiError::NO_DEVICES_FOUND, errorString_); return; } snd_seq_port_info_t *pinfo; snd_seq_port_info_alloca(&pinfo); AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); if (portInfo(data->seq, pinfo, SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE, (int)portNumber) == 0) { std::ostringstream ost; ost << "MidiOutAlsa::openPort: the 'portNumber' argument (" << portNumber << ") is invalid."; errorString_ = ost.str(); error(RtMidiError::INVALID_PARAMETER, errorString_); return; } snd_seq_addr_t sender, receiver; receiver.client = snd_seq_port_info_get_client(pinfo); receiver.port = snd_seq_port_info_get_port(pinfo); sender.client = snd_seq_client_id(data->seq); if (data->vport < 0) { data->vport = snd_seq_create_simple_port(data->seq, portName.c_str(), SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ, SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION); if (data->vport < 0) { errorString_ = "MidiOutAlsa::openPort: ALSA error creating output port."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } } sender.port = data->vport; // Make subscription if (snd_seq_port_subscribe_malloc(&data->subscription) < 0) { snd_seq_port_subscribe_free(data->subscription); errorString_ = "MidiOutAlsa::openPort: error allocating port subscription."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } snd_seq_port_subscribe_set_sender(data->subscription, &sender); snd_seq_port_subscribe_set_dest(data->subscription, &receiver);
snd_seq_port_subscribe_set_time_update(data->subscription, 1); snd_seq_port_subscribe_set_time_real(data->subscription, 1); if (snd_seq_subscribe_port(data->seq, data->subscription)) { snd_seq_port_subscribe_free(data->subscription); errorString_ = "MidiOutAlsa::openPort: ALSA error making port connection."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } connected_ = true; } void MidiOutAlsa ::closePort(void) { if (connected_) { AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); snd_seq_unsubscribe_port(data->seq, data->subscription); snd_seq_port_subscribe_free(data->subscription); data->subscription = 0; connected_ = false; } } void MidiOutAlsa ::setClientName(const std::string &clientName) { AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); snd_seq_set_client_name(data->seq, clientName.c_str()); } void MidiOutAlsa ::setPortName(const std::string &portName) { AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); snd_seq_port_info_t *pinfo; snd_seq_port_info_alloca(&pinfo); snd_seq_get_port_info(data->seq, data->vport, pinfo); snd_seq_port_info_set_name(pinfo, portName.c_str()); snd_seq_set_port_info(data->seq, data->vport, pinfo); } void MidiOutAlsa ::openVirtualPort(const std::string &portName) { AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); if (data->vport < 0) { data->vport = snd_seq_create_simple_port(data->seq, portName.c_str(), SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ, SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION); if (data->vport < 0) { errorString_ = "MidiOutAlsa::openVirtualPort: ALSA error creating virtual port."; error(RtMidiError::DRIVER_ERROR, errorString_); } } } void MidiOutAlsa ::sendMessage(const unsigned char *message, size_t size) { int result; AlsaMidiData *data = static_cast<AlsaMidiData *>(apiData_); unsigned int nBytes = static_cast<unsigned int>(size); if (nBytes > data->bufferSize) { data->bufferSize = nBytes; result = snd_midi_event_resize_buffer(data->coder, nBytes); if (result != 0) { errorString_ = "MidiOutAlsa::sendMessage: ALSA error resizing MIDI
event buffer."; error(RtMidiError::DRIVER_ERROR, errorString_); return; } free(data->buffer); data->buffer = (unsigned char *)malloc(data->bufferSize); if (data->buffer == NULL) { errorString_ = "MidiOutAlsa::initialize: error allocating buffer memory!\n\n"; error(RtMidiError::MEMORY_ERROR, errorString_); return; } } snd_seq_event_t ev; snd_seq_ev_clear(&ev); snd_seq_ev_set_source(&ev, data->vport); snd_seq_ev_set_subs(&ev); snd_seq_ev_set_direct(&ev); for (unsigned int i = 0; i < nBytes; ++i) data->buffer[i] = message[i]; result = snd_midi_event_encode(data->coder, data->buffer, (long)nBytes, &ev); if (result < (int)nBytes) { errorString_ = "MidiOutAlsa::sendMessage: event parsing error!"; error(RtMidiError::WARNING, errorString_); return; } // Send the event. result = snd_seq_event_output(data->seq, &ev); if (result < 0) { errorString_ = "MidiOutAlsa::sendMessage: error sending MIDI message to port."; error(RtMidiError::WARNING, errorString_); return; } snd_seq_drain_output(data->seq); } #endif // __LINUX_ALSA__ //*********************************************************************// // API: Windows Multimedia Library (MM) //*********************************************************************// // API information deciphered from: // - http://msdn.microsoft.com/library/default.asp?url=/library/en-us/multimed/htm/_win32_midi_reference.asp // Thanks to Jean-Baptiste Berruchon for the sysex code. #if defined(__WINDOWS_MM__) // The Windows MM API is based on the use of a callback function for // MIDI input. We convert the system specific time stamps to delta // time values. // Windows MM MIDI header files. #include <windows.h> // #include <mmsystem.h> // Convert a null-terminated wide string or ANSI-encoded string to UTF-8.
static std::string ConvertToUTF8(const TCHAR *str) { std::string u8str; const WCHAR *wstr = L""; #if defined(UNICODE) || defined(_UNICODE) wstr = str; #else // Convert from ANSI encoding to wide string int wlength = MultiByteToWideChar(CP_ACP, 0, str, -1, NULL, 0); std::wstring wstrtemp; if (wlength) { wstrtemp.assign(wlength - 1, 0); MultiByteToWideChar(CP_ACP, 0, str, -1, &wstrtemp[0], wlength); wstr = &wstrtemp[0]; } #endif // Convert from wide string to UTF-8 int length = WideCharToMultiByte(CP_UTF8, 0, wstr, -1, NULL, 0, NULL, NULL); if (length) { u8str.assign(length - 1, 0); length = WideCharToMultiByte(CP_UTF8, 0, wstr, -1, &u8str[0], length, NULL, NULL); } return u8str; } #define RT_SYSEX_BUFFER_SIZE 1024 #define RT_SYSEX_BUFFER_COUNT 4 // A structure to hold variables related to the WinMM API // implementation. struct WinMidiData { HMIDIIN inHandle; // Handle to Midi Input Device HMIDIOUT outHandle; // Handle to Midi Output Device DWORD lastTime; MidiInApi::MidiMessage message; LPMIDIHDR sysexBuffer[RT_SYSEX_BUFFER_COUNT]; CRITICAL_SECTION _mutex; // [Patrice] see https://groups.google.com/forum/#!topic/mididev/6OUjHutMpEo }; //*********************************************************************// // API: Windows MM // Class Definitions: MidiInWinMM //*********************************************************************// static void CALLBACK midiInputCallback(HMIDIIN /*hmin*/, UINT inputStatus, DWORD_PTR instancePtr, DWORD_PTR midiMessage, DWORD timestamp) { if (inputStatus != MIM_DATA && inputStatus != MIM_LONGDATA && inputStatus != MIM_LONGERROR) return; //MidiInApi::RtMidiInData *data = static_cast<MidiInApi::RtMidiInData *>(instancePtr); MidiInApi::RtMidiInData *data = (MidiInApi::RtMidiInData *)instancePtr; WinMidiData *apiData = static_cast<WinMidiData *>(data->apiData); // Calculate time stamp.
if (data->firstMessage == true) { apiData->message.timeStamp = 0.0; data->firstMessage = false; } else apiData->message.timeStamp = (double)(timestamp - apiData->lastTime) * 0.001; if (inputStatus == MIM_DATA) { // Channel or system message // Make sure the first byte is a status byte. unsigned char status = (unsigned char)(midiMessage & 0x000000FF); if (!(status & 0x80)) return; // Determine the number of bytes in the MIDI message. unsigned short nBytes = 1; if (status < 0xC0) nBytes = 3; else if (status < 0xE0) nBytes = 2; else if (status < 0xF0) nBytes = 3; else if (status == 0xF1) { if (data->ignoreFlags & 0x02) return; else nBytes = 2; } else if (status == 0xF2) nBytes = 3; else if (status == 0xF3) nBytes = 2; else if (status == 0xF8 && (data->ignoreFlags & 0x02)) { // A MIDI timing tick message and we're ignoring it. return; } else if (status == 0xFE && (data->ignoreFlags & 0x04)) { // A MIDI active sensing message and we're ignoring it. return; } // Copy bytes to our MIDI message. unsigned char *ptr = (unsigned char *)&midiMessage; for (int i = 0; i < nBytes; ++i) apiData->message.bytes.push_back(*ptr++); } else { // Sysex message ( MIM_LONGDATA or MIM_LONGERROR ) MIDIHDR *sysex = (MIDIHDR *)midiMessage; if (!(data->ignoreFlags & 0x01) && inputStatus != MIM_LONGERROR) { // Sysex message and we're not ignoring it for (int i = 0; i < (int)sysex->dwBytesRecorded; ++i) apiData->message.bytes.push_back(sysex->lpData[i]); } // The WinMM API requires that the sysex buffer be requeued after // input of each sysex message. Even if we are ignoring sysex // messages, we still need to requeue the buffer in case the user // decides to not ignore sysex messages in the future. However, // it seems that WinMM calls this function with an empty sysex // buffer when an application closes and in this case, we should // avoid requeueing it, else the computer suddenly reboots after // one or two minutes. 
		if (apiData->sysexBuffer[sysex->dwUser]->dwBytesRecorded > 0) {
			//if ( sysex->dwBytesRecorded > 0 ) {
			EnterCriticalSection(&(apiData->_mutex));
			MMRESULT result = midiInAddBuffer(apiData->inHandle, apiData->sysexBuffer[sysex->dwUser], sizeof(MIDIHDR));
			LeaveCriticalSection(&(apiData->_mutex));
			if (result != MMSYSERR_NOERROR)
				std::cerr << "\nRtMidiIn::midiInputCallback: error sending sysex to Midi device!!\n\n";

			if (data->ignoreFlags & 0x01) return;
		} else
			return;
	}

	// Save the time of the last non-filtered message
	apiData->lastTime = timestamp;

	if (data->usingCallback) {
		RtMidiIn::RtMidiCallback callback = (RtMidiIn::RtMidiCallback)data->userCallback;
		callback(apiData->message.timeStamp, &apiData->message.bytes, data->userData);
	} else {
		// As long as we haven't reached our queue size limit, push the message.
		if (!data->queue.push(apiData->message))
			std::cerr << "\nMidiInWinMM: message queue limit reached!!\n\n";
	}

	// Clear the vector for the next input message.
	apiData->message.bytes.clear();
}

MidiInWinMM::MidiInWinMM(const std::string &clientName, unsigned int queueSizeLimit) :
		MidiInApi(queueSizeLimit) {
	MidiInWinMM::initialize(clientName);
}

MidiInWinMM::~MidiInWinMM() {
	// Close a connection if it exists.
	MidiInWinMM::closePort();

	WinMidiData *data = static_cast<WinMidiData *>(apiData_);
	DeleteCriticalSection(&(data->_mutex));

	// Cleanup.
	delete data;
}

void MidiInWinMM::initialize(const std::string & /*clientName*/) {
	// We'll issue a warning here if no devices are available but not
	// throw an error since the user can plug in something later.
	unsigned int nDevices = midiInGetNumDevs();
	if (nDevices == 0) {
		errorString_ = "MidiInWinMM::initialize: no MIDI input devices currently available.";
		error(RtMidiError::WARNING, errorString_);
	}

	// Save our api-specific connection information.
	WinMidiData *data = (WinMidiData *)new WinMidiData;
	apiData_ = (void *)data;
	inputData_.apiData = (void *)data;
	data->message.bytes.clear(); // needs to be empty for first input message

	if (!InitializeCriticalSectionAndSpinCount(&(data->_mutex), 0x00000400)) {
		errorString_ = "MidiInWinMM::initialize: InitializeCriticalSectionAndSpinCount failed.";
		error(RtMidiError::WARNING, errorString_);
	}
}

void MidiInWinMM::openPort(unsigned int portNumber, const std::string & /*portName*/) {
	if (connected_) {
		errorString_ = "MidiInWinMM::openPort: a valid connection already exists!";
		error(RtMidiError::WARNING, errorString_);
		return;
	}

	unsigned int nDevices = midiInGetNumDevs();
	if (nDevices == 0) {
		errorString_ = "MidiInWinMM::openPort: no MIDI input sources found!";
		error(RtMidiError::NO_DEVICES_FOUND, errorString_);
		return;
	}

	if (portNumber >= nDevices) {
		std::ostringstream ost;
		ost << "MidiInWinMM::openPort: the 'portNumber' argument (" << portNumber << ") is invalid.";
		errorString_ = ost.str();
		error(RtMidiError::INVALID_PARAMETER, errorString_);
		return;
	}

	WinMidiData *data = static_cast<WinMidiData *>(apiData_);
	MMRESULT result = midiInOpen(&data->inHandle, portNumber, (DWORD_PTR)&midiInputCallback, (DWORD_PTR)&inputData_, CALLBACK_FUNCTION);
	if (result != MMSYSERR_NOERROR) {
		errorString_ = "MidiInWinMM::openPort: error creating Windows MM MIDI input port.";
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}

	// Allocate and init the sysex buffers.
	for (int i = 0; i < RT_SYSEX_BUFFER_COUNT; ++i) {
		data->sysexBuffer[i] = (MIDIHDR *)new char[sizeof(MIDIHDR)];
		data->sysexBuffer[i]->lpData = new char[RT_SYSEX_BUFFER_SIZE];
		data->sysexBuffer[i]->dwBufferLength = RT_SYSEX_BUFFER_SIZE;
		data->sysexBuffer[i]->dwUser = i; // We use the dwUser parameter as buffer indicator
		data->sysexBuffer[i]->dwFlags = 0;

		result = midiInPrepareHeader(data->inHandle, data->sysexBuffer[i], sizeof(MIDIHDR));
		if (result != MMSYSERR_NOERROR) {
			midiInClose(data->inHandle);
			data->inHandle = 0;
			errorString_ = "MidiInWinMM::openPort: error starting Windows MM MIDI input port (PrepareHeader).";
			error(RtMidiError::DRIVER_ERROR, errorString_);
			return;
		}

		// Register the buffer.
		result = midiInAddBuffer(data->inHandle, data->sysexBuffer[i], sizeof(MIDIHDR));
		if (result != MMSYSERR_NOERROR) {
			midiInClose(data->inHandle);
			data->inHandle = 0;
			errorString_ = "MidiInWinMM::openPort: error starting Windows MM MIDI input port (AddBuffer).";
			error(RtMidiError::DRIVER_ERROR, errorString_);
			return;
		}
	}

	result = midiInStart(data->inHandle);
	if (result != MMSYSERR_NOERROR) {
		midiInClose(data->inHandle);
		data->inHandle = 0;
		errorString_ = "MidiInWinMM::openPort: error starting Windows MM MIDI input port.";
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}

	connected_ = true;
}

void MidiInWinMM::openVirtualPort(const std::string & /*portName*/) {
	// This function cannot be implemented for the Windows MM MIDI API.
	errorString_ = "MidiInWinMM::openVirtualPort: cannot be implemented in Windows MM MIDI API!";
	error(RtMidiError::WARNING, errorString_);
}

void MidiInWinMM::closePort(void) {
	if (connected_) {
		WinMidiData *data = static_cast<WinMidiData *>(apiData_);
		EnterCriticalSection(&(data->_mutex));
		midiInReset(data->inHandle);
		midiInStop(data->inHandle);

		for (int i = 0; i < RT_SYSEX_BUFFER_COUNT; ++i) {
			int result = midiInUnprepareHeader(data->inHandle, data->sysexBuffer[i], sizeof(MIDIHDR));
			delete[] data->sysexBuffer[i]->lpData;
			delete[] data->sysexBuffer[i];
			if (result != MMSYSERR_NOERROR) {
				midiInClose(data->inHandle);
				data->inHandle = 0;
				errorString_ = "MidiInWinMM::openPort: error closing Windows MM MIDI input port (midiInUnprepareHeader).";
				error(RtMidiError::DRIVER_ERROR, errorString_);
				return;
			}
		}

		midiInClose(data->inHandle);
		data->inHandle = 0;
		connected_ = false;
		LeaveCriticalSection(&(data->_mutex));
	}
}

void MidiInWinMM::setClientName(const std::string &) {
	errorString_ = "MidiInWinMM::setClientName: this function is not implemented for the WINDOWS_MM API!";
	error(RtMidiError::WARNING, errorString_);
}

void MidiInWinMM::setPortName(const std::string &) {
	errorString_ = "MidiInWinMM::setPortName: this function is not implemented for the WINDOWS_MM API!";
	error(RtMidiError::WARNING, errorString_);
}

unsigned int MidiInWinMM::getPortCount() {
	return midiInGetNumDevs();
}

std::string MidiInWinMM::getPortName(unsigned int portNumber) {
	std::string stringName;
	unsigned int nDevices = midiInGetNumDevs();
	if (portNumber >= nDevices) {
		std::ostringstream ost;
		ost << "MidiInWinMM::getPortName: the 'portNumber' argument (" << portNumber << ") is invalid.";
		errorString_ = ost.str();
		error(RtMidiError::WARNING, errorString_);
		return stringName;
	}

	MIDIINCAPS deviceCaps;
	midiInGetDevCaps(portNumber, &deviceCaps, sizeof(MIDIINCAPS));
	stringName = ConvertToUTF8(deviceCaps.szPname);

	// Next lines added to add the portNumber to the name so that
	// the device's names are sure to be listed with individual names
	// even when they have the same brand name
#ifndef RTMIDI_DO_NOT_ENSURE_UNIQUE_PORTNAMES
	std::ostringstream os;
	os << " ";
	os << portNumber;
	stringName += os.str();
#endif

	return stringName;
}

//*********************************************************************//
//  API: Windows MM
//  Class Definitions: MidiOutWinMM
//*********************************************************************//

MidiOutWinMM::MidiOutWinMM(const std::string &clientName) :
		MidiOutApi() {
	MidiOutWinMM::initialize(clientName);
}

MidiOutWinMM::~MidiOutWinMM() {
	// Close a connection if it exists.
	MidiOutWinMM::closePort();

	// Cleanup.
	WinMidiData *data = static_cast<WinMidiData *>(apiData_);
	delete data;
}

void MidiOutWinMM::initialize(const std::string & /*clientName*/) {
	// We'll issue a warning here if no devices are available but not
	// throw an error since the user can plug something in later.
	unsigned int nDevices = midiOutGetNumDevs();
	if (nDevices == 0) {
		errorString_ = "MidiOutWinMM::initialize: no MIDI output devices currently available.";
		error(RtMidiError::WARNING, errorString_);
	}

	// Save our api-specific connection information.
	WinMidiData *data = (WinMidiData *)new WinMidiData;
	apiData_ = (void *)data;
}

unsigned int MidiOutWinMM::getPortCount() {
	return midiOutGetNumDevs();
}

std::string MidiOutWinMM::getPortName(unsigned int portNumber) {
	std::string stringName;
	unsigned int nDevices = midiOutGetNumDevs();
	if (portNumber >= nDevices) {
		std::ostringstream ost;
		ost << "MidiOutWinMM::getPortName: the 'portNumber' argument (" << portNumber << ") is invalid.";
		errorString_ = ost.str();
		error(RtMidiError::WARNING, errorString_);
		return stringName;
	}

	MIDIOUTCAPS deviceCaps;
	midiOutGetDevCaps(portNumber, &deviceCaps, sizeof(MIDIOUTCAPS));
	stringName = ConvertToUTF8(deviceCaps.szPname);

	// Next lines added to add the portNumber to the name so that
	// the device's names are sure to be listed with individual names
	// even when they have the same brand name
	std::ostringstream os;
#ifndef RTMIDI_DO_NOT_ENSURE_UNIQUE_PORTNAMES
	os << " ";
	os << portNumber;
	stringName += os.str();
#endif

	return stringName;
}

void MidiOutWinMM::openPort(unsigned int portNumber, const std::string & /*portName*/) {
	if (connected_) {
		errorString_ = "MidiOutWinMM::openPort: a valid connection already exists!";
		error(RtMidiError::WARNING, errorString_);
		return;
	}

	unsigned int nDevices = midiOutGetNumDevs();
	if (nDevices < 1) {
		errorString_ = "MidiOutWinMM::openPort: no MIDI output destinations found!";
		error(RtMidiError::NO_DEVICES_FOUND, errorString_);
		return;
	}

	if (portNumber >= nDevices) {
		std::ostringstream ost;
		ost << "MidiOutWinMM::openPort: the 'portNumber' argument (" << portNumber << ") is invalid.";
		errorString_ = ost.str();
		error(RtMidiError::INVALID_PARAMETER, errorString_);
		return;
	}

	WinMidiData *data = static_cast<WinMidiData *>(apiData_);
	MMRESULT result = midiOutOpen(&data->outHandle, portNumber, (DWORD)NULL, (DWORD)NULL, CALLBACK_NULL);
	if (result != MMSYSERR_NOERROR) {
		errorString_ = "MidiOutWinMM::openPort: error creating Windows MM MIDI output port.";
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}
	connected_ = true;
}

void MidiOutWinMM::closePort(void) {
	if (connected_) {
		WinMidiData *data = static_cast<WinMidiData *>(apiData_);
		midiOutReset(data->outHandle);
		midiOutClose(data->outHandle);
		data->outHandle = 0;
		connected_ = false;
	}
}

void MidiOutWinMM::setClientName(const std::string &) {
	errorString_ = "MidiOutWinMM::setClientName: this function is not implemented for the WINDOWS_MM API!";
	error(RtMidiError::WARNING, errorString_);
}

void MidiOutWinMM::setPortName(const std::string &) {
	errorString_ = "MidiOutWinMM::setPortName: this function is not implemented for the WINDOWS_MM API!";
	error(RtMidiError::WARNING, errorString_);
}

void MidiOutWinMM::openVirtualPort(const std::string & /*portName*/) {
	// This function cannot be implemented for the Windows MM MIDI API.
	errorString_ = "MidiOutWinMM::openVirtualPort: cannot be implemented in Windows MM MIDI API!";
	error(RtMidiError::WARNING, errorString_);
}

void MidiOutWinMM::sendMessage(const unsigned char *message, size_t size) {
	if (!connected_) return;

	unsigned int nBytes = static_cast<unsigned int>(size);
	if (nBytes == 0) {
		errorString_ = "MidiOutWinMM::sendMessage: message argument is empty!";
		error(RtMidiError::WARNING, errorString_);
		return;
	}

	MMRESULT result;
	WinMidiData *data = static_cast<WinMidiData *>(apiData_);
	if (message[0] == 0xF0) { // Sysex message

		// Allocate buffer for sysex data.
		char *buffer = (char *)malloc(nBytes);
		if (buffer == NULL) {
			errorString_ = "MidiOutWinMM::sendMessage: error allocating sysex message memory!";
			error(RtMidiError::MEMORY_ERROR, errorString_);
			return;
		}

		// Copy data to buffer.
		for (unsigned int i = 0; i < nBytes; ++i)
			buffer[i] = message[i];

		// Create and prepare MIDIHDR structure.
		MIDIHDR sysex;
		sysex.lpData = (LPSTR)buffer;
		sysex.dwBufferLength = nBytes;
		sysex.dwFlags = 0;
		result = midiOutPrepareHeader(data->outHandle, &sysex, sizeof(MIDIHDR));
		if (result != MMSYSERR_NOERROR) {
			free(buffer);
			errorString_ = "MidiOutWinMM::sendMessage: error preparing sysex header.";
			error(RtMidiError::DRIVER_ERROR, errorString_);
			return;
		}

		// Send the message.
		result = midiOutLongMsg(data->outHandle, &sysex, sizeof(MIDIHDR));
		if (result != MMSYSERR_NOERROR) {
			free(buffer);
			errorString_ = "MidiOutWinMM::sendMessage: error sending sysex message.";
			error(RtMidiError::DRIVER_ERROR, errorString_);
			return;
		}

		// Unprepare the buffer and MIDIHDR.
		while (MIDIERR_STILLPLAYING == midiOutUnprepareHeader(data->outHandle, &sysex, sizeof(MIDIHDR)))
			Sleep(1);
		free(buffer);
	} else { // Channel or system message.

		// Make sure the message size isn't too big.
		if (nBytes > 3) {
			errorString_ = "MidiOutWinMM::sendMessage: message size is greater than 3 bytes (and not sysex)!";
			error(RtMidiError::WARNING, errorString_);
			return;
		}

		// Pack MIDI bytes into double word.
		DWORD packet;
		unsigned char *ptr = (unsigned char *)&packet;
		for (unsigned int i = 0; i < nBytes; ++i) {
			*ptr = message[i];
			++ptr;
		}

		// Send the message immediately.
		result = midiOutShortMsg(data->outHandle, packet);
		if (result != MMSYSERR_NOERROR) {
			errorString_ = "MidiOutWinMM::sendMessage: error sending MIDI message.";
			error(RtMidiError::DRIVER_ERROR, errorString_);
		}
	}
}

#endif // __WINDOWS_MM__

//*********************************************************************//
//  API: UNIX JACK
//
//  Written primarily by Alexander Svetalkin, with updates for delta
//  time by Gary Scavone, April 2011.
//
//  *********************************************************************//

#if defined(__UNIX_JACK__)

// JACK header files
#include <jack/jack.h>
#include <jack/midiport.h>
#include <jack/ringbuffer.h>
#ifdef HAVE_SEMAPHORE
#include <semaphore.h>
#endif

#define JACK_RINGBUFFER_SIZE 16384 // Default size for ringbuffer

struct JackMidiData {
	jack_client_t *client;
	jack_port_t *port;
	jack_ringbuffer_t *buffSize;
	jack_ringbuffer_t *buffMessage;
	jack_time_t lastTime;
#ifdef HAVE_SEMAPHORE
	sem_t sem_cleanup;
	sem_t sem_needpost;
#endif
	MidiInApi::RtMidiInData *rtMidiIn;
};

//*********************************************************************//
//  API: JACK
//  Class Definitions: MidiInJack
//*********************************************************************//

static int jackProcessIn(jack_nframes_t nframes, void *arg) {
	JackMidiData *jData = (JackMidiData *)arg;
	MidiInApi::RtMidiInData *rtData = jData->rtMidiIn;
	jack_midi_event_t event;
	jack_time_t time;

	// Is port created?
	if (jData->port == NULL) return 0;

	void *buff = jack_port_get_buffer(jData->port, nframes);
	bool &continueSysex = rtData->continueSysex;
	unsigned char &ignoreFlags = rtData->ignoreFlags;

	// We have midi events in buffer
	int evCount = jack_midi_get_event_count(buff);
	for (int j = 0; j < evCount; j++) {
		MidiInApi::MidiMessage &message = rtData->message;
		jack_midi_event_get(&event, buff, j);

		// Compute the delta time.
		time = jack_get_time();
		if (rtData->firstMessage == true) {
			message.timeStamp = 0.0;
			rtData->firstMessage = false;
		} else
			message.timeStamp = (time - jData->lastTime) * 0.000001;

		jData->lastTime = time;

		if (!continueSysex)
			message.bytes.clear();

		if (!((continueSysex || event.buffer[0] == 0xF0) && (ignoreFlags & 0x01))) {
			// Unless this is a (possibly continued) SysEx message and we're ignoring SysEx,
			// copy the event buffer into the MIDI message struct.
			for (unsigned int i = 0; i < event.size; i++)
				message.bytes.push_back(event.buffer[i]);
		}

		switch (event.buffer[0]) {
			case 0xF0:
				// Start of a SysEx message
				continueSysex = event.buffer[event.size - 1] != 0xF7;
				if (ignoreFlags & 0x01) continue;
				break;
			case 0xF1:
			case 0xF8:
				// MIDI Time Code or Timing Clock message
				if (ignoreFlags & 0x02) continue;
				break;
			case 0xFE:
				// Active Sensing message
				if (ignoreFlags & 0x04) continue;
				break;
			default:
				if (continueSysex) {
					// Continuation of a SysEx message
					continueSysex = event.buffer[event.size - 1] != 0xF7;
					if (ignoreFlags & 0x01) continue;
				}
				// All other MIDI messages
		}

		if (!continueSysex) {
			// If not a continuation of a SysEx message,
			// invoke the user callback function or queue the message.
			if (rtData->usingCallback) {
				RtMidiIn::RtMidiCallback callback = (RtMidiIn::RtMidiCallback)rtData->userCallback;
				callback(message.timeStamp, &message.bytes, rtData->userData);
			} else {
				// As long as we haven't reached our queue size limit, push the message.
				if (!rtData->queue.push(message))
					std::cerr << "\nMidiInJack: message queue limit reached!!\n\n";
			}
		}
	}

	return 0;
}

MidiInJack::MidiInJack(const std::string &clientName, unsigned int queueSizeLimit) :
		MidiInApi(queueSizeLimit) {
	MidiInJack::initialize(clientName);
}

void MidiInJack::initialize(const std::string &clientName) {
	JackMidiData *data = new JackMidiData;
	apiData_ = (void *)data;

	data->rtMidiIn = &inputData_;
	data->port = NULL;
	data->client = NULL;
	this->clientName = clientName;

	connect();
}

void MidiInJack::connect() {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
	if (data->client)
		return;

	// Initialize JACK client
	if ((data->client = jack_client_open(clientName.c_str(), JackNoStartServer, NULL)) == 0) {
		errorString_ = "MidiInJack::initialize: JACK server not running?";
		error(RtMidiError::WARNING, errorString_);
		return;
	}

	jack_set_process_callback(data->client, jackProcessIn, data);
	jack_activate(data->client);
}

MidiInJack::~MidiInJack() {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
	MidiInJack::closePort();

	if (data->client)
		jack_client_close(data->client);

	delete data;
}

void MidiInJack::openPort(unsigned int portNumber, const std::string &portName) {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);

	connect();

	// Creating new port
	if (data->port == NULL)
		data->port = jack_port_register(data->client, portName.c_str(), JACK_DEFAULT_MIDI_TYPE, JackPortIsInput, 0);

	if (data->port == NULL) {
		errorString_ = "MidiInJack::openPort: JACK error creating port";
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}

	// Connecting to the output
	std::string name = getPortName(portNumber);
	jack_connect(data->client, name.c_str(), jack_port_name(data->port));

	connected_ = true;
}

void MidiInJack::openVirtualPort(const std::string &portName) {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);

	connect();
	if (data->port == NULL)
		data->port = jack_port_register(data->client, portName.c_str(), JACK_DEFAULT_MIDI_TYPE, JackPortIsInput, 0);

	if (data->port == NULL) {
		errorString_ =
				"MidiInJack::openVirtualPort: JACK error creating virtual port";
		error(RtMidiError::DRIVER_ERROR, errorString_);
	}
}

unsigned int MidiInJack::getPortCount() {
	int count = 0;
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
	connect();
	if (!data->client)
		return 0;

	// List of available ports
	const char **ports = jack_get_ports(data->client, NULL, JACK_DEFAULT_MIDI_TYPE, JackPortIsOutput);

	if (ports == NULL) return 0;
	while (ports[count] != NULL)
		count++;

	free(ports);

	return count;
}

std::string MidiInJack::getPortName(unsigned int portNumber) {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
	std::string retStr("");

	connect();

	// List of available ports
	const char **ports = jack_get_ports(data->client, NULL, JACK_DEFAULT_MIDI_TYPE, JackPortIsOutput);

	// Check port validity
	if (ports == NULL) {
		errorString_ = "MidiInJack::getPortName: no ports available!";
		error(RtMidiError::WARNING, errorString_);
		return retStr;
	}

	unsigned int i;
	for (i = 0; i < portNumber && ports[i]; i++) {
	}
	if (i < portNumber || !ports[portNumber]) {
		std::ostringstream ost;
		ost << "MidiInJack::getPortName: the 'portNumber' argument (" << portNumber << ") is invalid.";
		errorString_ = ost.str();
		error(RtMidiError::WARNING, errorString_);
	} else
		retStr.assign(ports[portNumber]);

	jack_free(ports);
	return retStr;
}

void MidiInJack::closePort() {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);

	if (data->port == NULL) return;
	jack_port_unregister(data->client, data->port);
	data->port = NULL;

	connected_ = false;
}

void MidiInJack::setClientName(const std::string &) {
	errorString_ = "MidiInJack::setClientName: this function is not implemented for the UNIX_JACK API!";
	error(RtMidiError::WARNING, errorString_);
}

void MidiInJack::setPortName(const std::string &portName) {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
#ifdef JACK_HAS_PORT_RENAME
	jack_port_rename(data->client, data->port, portName.c_str());
#else
	jack_port_set_name(data->port, portName.c_str());
#endif
}
//*********************************************************************//
//  API: JACK
//  Class Definitions: MidiOutJack
//*********************************************************************//

// Jack process callback
static int jackProcessOut(jack_nframes_t nframes, void *arg) {
	JackMidiData *data = (JackMidiData *)arg;
	jack_midi_data_t *midiData;
	int space;

	// Is port created?
	if (data->port == NULL) return 0;

	void *buff = jack_port_get_buffer(data->port, nframes);
	jack_midi_clear_buffer(buff);

	while (jack_ringbuffer_read_space(data->buffSize) > 0) {
		jack_ringbuffer_read(data->buffSize, (char *)&space, (size_t)sizeof(space));
		midiData = jack_midi_event_reserve(buff, 0, space);

		jack_ringbuffer_read(data->buffMessage, (char *)midiData, (size_t)space);
	}

#ifdef HAVE_SEMAPHORE
	if (!sem_trywait(&data->sem_needpost))
		sem_post(&data->sem_cleanup);
#endif

	return 0;
}

MidiOutJack::MidiOutJack(const std::string &clientName) :
		MidiOutApi() {
	MidiOutJack::initialize(clientName);
}

void MidiOutJack::initialize(const std::string &clientName) {
	JackMidiData *data = new JackMidiData;
	apiData_ = (void *)data;

	data->port = NULL;
	data->client = NULL;
#ifdef HAVE_SEMAPHORE
	sem_init(&data->sem_cleanup, 0, 0);
	sem_init(&data->sem_needpost, 0, 0);
#endif
	this->clientName = clientName;

	connect();
}

void MidiOutJack::connect() {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
	if (data->client)
		return;

	// Initialize output ringbuffers
	data->buffSize = jack_ringbuffer_create(JACK_RINGBUFFER_SIZE);
	data->buffMessage = jack_ringbuffer_create(JACK_RINGBUFFER_SIZE);

	// Initialize JACK client
	if ((data->client = jack_client_open(clientName.c_str(), JackNoStartServer, NULL)) == 0) {
		errorString_ = "MidiOutJack::initialize: JACK server not running?";
		error(RtMidiError::WARNING, errorString_);
		return;
	}

	jack_set_process_callback(data->client, jackProcessOut, data);
	jack_activate(data->client);
}

MidiOutJack::~MidiOutJack() {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
	MidiOutJack::closePort();
	// Cleanup
	jack_ringbuffer_free(data->buffSize);
	jack_ringbuffer_free(data->buffMessage);
	if (data->client) {
		jack_client_close(data->client);
	}
#ifdef HAVE_SEMAPHORE
	sem_destroy(&data->sem_cleanup);
	sem_destroy(&data->sem_needpost);
#endif

	delete data;
}

void MidiOutJack::openPort(unsigned int portNumber, const std::string &portName) {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);

	connect();

	// Creating new port
	if (data->port == NULL)
		data->port = jack_port_register(data->client, portName.c_str(), JACK_DEFAULT_MIDI_TYPE, JackPortIsOutput, 0);

	if (data->port == NULL) {
		errorString_ = "MidiOutJack::openPort: JACK error creating port";
		error(RtMidiError::DRIVER_ERROR, errorString_);
		return;
	}

	// Connecting to the output
	std::string name = getPortName(portNumber);
	jack_connect(data->client, jack_port_name(data->port), name.c_str());

	connected_ = true;
}

void MidiOutJack::openVirtualPort(const std::string &portName) {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);

	connect();
	if (data->port == NULL)
		data->port = jack_port_register(data->client, portName.c_str(), JACK_DEFAULT_MIDI_TYPE, JackPortIsOutput, 0);

	if (data->port == NULL) {
		errorString_ = "MidiOutJack::openVirtualPort: JACK error creating virtual port";
		error(RtMidiError::DRIVER_ERROR, errorString_);
	}
}

unsigned int MidiOutJack::getPortCount() {
	int count = 0;
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
	connect();
	if (!data->client)
		return 0;

	// List of available ports
	const char **ports = jack_get_ports(data->client, NULL, JACK_DEFAULT_MIDI_TYPE, JackPortIsInput);

	if (ports == NULL) return 0;
	while (ports[count] != NULL)
		count++;

	free(ports);

	return count;
}

std::string MidiOutJack::getPortName(unsigned int portNumber) {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
	std::string retStr("");

	connect();

	// List of available ports
	const char **ports = jack_get_ports(data->client, NULL, JACK_DEFAULT_MIDI_TYPE, JackPortIsInput);

	// Check port validity
	if (ports == NULL) {
		errorString_ = "MidiOutJack::getPortName: no ports available!";
		error(RtMidiError::WARNING, errorString_);
		return retStr;
	}

	if (ports[portNumber] == NULL) {
		std::ostringstream ost;
		ost << "MidiOutJack::getPortName: the 'portNumber' argument (" << portNumber << ") is invalid.";
		errorString_ = ost.str();
		error(RtMidiError::WARNING, errorString_);
	} else
		retStr.assign(ports[portNumber]);

	free(ports);
	return retStr;
}

void MidiOutJack::closePort() {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);

	if (data->port == NULL) return;

#ifdef HAVE_SEMAPHORE
	struct timespec ts;
	if (clock_gettime(CLOCK_REALTIME, &ts) != -1) {
		ts.tv_sec += 1; // wait max one second
		sem_post(&data->sem_needpost);
		sem_timedwait(&data->sem_cleanup, &ts);
	}
#endif

	jack_port_unregister(data->client, data->port);
	data->port = NULL;

	connected_ = false;
}

void MidiOutJack::setClientName(const std::string &) {
	errorString_ = "MidiOutJack::setClientName: this function is not implemented for the UNIX_JACK API!";
	error(RtMidiError::WARNING, errorString_);
}

void MidiOutJack::setPortName(const std::string &portName) {
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);
#ifdef JACK_HAS_PORT_RENAME
	jack_port_rename(data->client, data->port, portName.c_str());
#else
	jack_port_set_name(data->port, portName.c_str());
#endif
}

void MidiOutJack::sendMessage(const unsigned char *message, size_t size) {
	int nBytes = static_cast<int>(size);
	JackMidiData *data = static_cast<JackMidiData *>(apiData_);

	// Write full message to buffer
	jack_ringbuffer_write(data->buffMessage, (const char *)message, nBytes);
	jack_ringbuffer_write(data->buffSize, (char *)&nBytes, sizeof(nBytes));
}

#endif // __UNIX_JACK__

zytrax-master/drivers/rtmidi/rtmidi/RtMidi.h

/**********************************************************************/
/*! \class RtMidi
	\brief An abstract base class for realtime MIDI input/output.

	This class implements some common functionality for the realtime
	MIDI input/output subclasses RtMidiIn and RtMidiOut.
	RtMidi GitHub site: https://github.com/thestk/rtmidi
	RtMidi WWW site: http://www.music.mcgill.ca/~gary/rtmidi/

	RtMidi: realtime MIDI i/o C++ classes
	Copyright (c) 2003-2019 Gary P. Scavone

	Permission is hereby granted, free of charge, to any person
	obtaining a copy of this software and associated documentation files
	(the "Software"), to deal in the Software without restriction,
	including without limitation the rights to use, copy, modify, merge,
	publish, distribute, sublicense, and/or sell copies of the Software,
	and to permit persons to whom the Software is furnished to do so,
	subject to the following conditions:

	The above copyright notice and this permission notice shall be
	included in all copies or substantial portions of the Software.

	Any person wishing to distribute modifications to the Software is
	asked to send the modifications to the original developer so that
	they can be incorporated into the canonical version. This is,
	however, not a binding provision of this license.

	THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
	EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
	MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
	IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
	CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
	TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
	SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
/**********************************************************************/

/*!
	\file RtMidi.h
*/

#ifndef RTMIDI_H
#define RTMIDI_H

#if defined _WIN32 || defined __CYGWIN__
#if defined(RTMIDI_EXPORT)
#define RTMIDI_DLL_PUBLIC __declspec(dllexport)
#else
#define RTMIDI_DLL_PUBLIC
#endif
#else
#if __GNUC__ >= 4
#define RTMIDI_DLL_PUBLIC __attribute__((visibility("default")))
#else
#define RTMIDI_DLL_PUBLIC
#endif
#endif

#define RTMIDI_VERSION "4.0.0"

#include <exception>
#include <iostream>
#include <string>
#include <vector>

/************************************************************************/
/*! \class RtMidiError
	\brief Exception handling class for RtMidi.

	The RtMidiError class is quite simple but it does allow errors to be
	"caught" by RtMidiError::Type. See the RtMidi documentation to know
	which methods can throw an RtMidiError.
*/
/************************************************************************/

class RTMIDI_DLL_PUBLIC RtMidiError : public std::exception {
public:
	//! Defined RtMidiError types.
	enum Type {
		WARNING, /*!< A non-critical error. */
		DEBUG_WARNING, /*!< A non-critical error which might be useful for debugging. */
		UNSPECIFIED, /*!< The default, unspecified error type. */
		NO_DEVICES_FOUND, /*!< No devices found on system. */
		INVALID_DEVICE, /*!< An invalid device ID was specified. */
		MEMORY_ERROR, /*!< An error occurred during memory allocation. */
		INVALID_PARAMETER, /*!< An invalid parameter was specified to a function. */
		INVALID_USE, /*!< The function was called incorrectly. */
		DRIVER_ERROR, /*!< A system driver error occurred. */
		SYSTEM_ERROR, /*!< A system error occurred. */
		THREAD_ERROR /*!< A thread error occurred. */
	};

	//! The constructor.
	RtMidiError(const std::string &message, Type type = RtMidiError::UNSPECIFIED) throw() :
			message_(message), type_(type) {}

	//! The destructor.
	virtual ~RtMidiError(void) throw() {}

	//! Prints thrown error message to stderr.
	virtual void printMessage(void) const throw() { std::cerr << '\n' << message_ << "\n\n"; }

	//! Returns the thrown error message type.
	virtual const Type &getType(void) const throw() { return type_; }

	//! Returns the thrown error message string.
	virtual const std::string &getMessage(void) const throw() { return message_; }

	//! Returns the thrown error message as a c-style string.
	virtual const char *what(void) const throw() { return message_.c_str(); }

protected:
	std::string message_;
	Type type_;
};

//! RtMidi error callback function prototype.
/*!
	\param type Type of error.
	\param errorText Error description.

	Note that class behaviour is undefined after a critical error (not
	a warning) is reported.
*/
typedef void (*RtMidiErrorCallback)(RtMidiError::Type type, const std::string &errorText, void *userData);

class MidiApi;

class RTMIDI_DLL_PUBLIC RtMidi {
public:
	//! MIDI API specifier arguments.
	enum Api {
		UNSPECIFIED, /*!< Search for a working compiled API. */
		MACOSX_CORE, /*!< Macintosh OS-X CoreMIDI API. */
		LINUX_ALSA, /*!< The Advanced Linux Sound Architecture API. */
		UNIX_JACK, /*!< The JACK Low-Latency MIDI Server API. */
		WINDOWS_MM, /*!< The Microsoft Multimedia MIDI API. */
		RTMIDI_DUMMY, /*!< A compilable but non-functional API. */
		NUM_APIS /*!< Number of values in this enum. */
	};

	//! A static function to determine the current RtMidi version.
	static std::string getVersion(void) throw();

	//! A static function to determine the available compiled MIDI APIs.
	/*!
		The values returned in the std::vector can be compared against
		the enumerated list values. Note that there can be more than one
		API compiled for certain operating systems.
	*/
	static void getCompiledApi(std::vector<RtMidi::Api> &apis) throw();

	//! Return the name of a specified compiled MIDI API.
	/*!
		This obtains a short lower-case name used for identification purposes.
		This value is guaranteed to remain identical across library versions.
		If the API is unknown, this function will return the empty string.
	*/
	static std::string getApiName(RtMidi::Api api);

	//! Return the display name of a specified compiled MIDI API.
	/*!
		This obtains a long name used for display purposes.
		If the API is unknown, this function will return the empty string.
	*/
	static std::string getApiDisplayName(RtMidi::Api api);

	//! Return the compiled MIDI API having the given name.
	/*!
		A case insensitive comparison will check the specified name
		against the list of compiled APIs, and return the one which
		matches. On failure, the function returns UNSPECIFIED.
	*/
	static RtMidi::Api getCompiledApiByName(const std::string &name);

	//! Pure virtual openPort() function.
	virtual void openPort(unsigned int portNumber = 0, const std::string &portName = std::string("RtMidi")) = 0;

	//! Pure virtual openVirtualPort() function.
	virtual void openVirtualPort(const std::string &portName = std::string("RtMidi")) = 0;

	//! Pure virtual getPortCount() function.
	virtual unsigned int getPortCount() = 0;

	//! Pure virtual getPortName() function.
	virtual std::string getPortName(unsigned int portNumber = 0) = 0;

	//! Pure virtual closePort() function.
	virtual void closePort(void) = 0;

	void setClientName(const std::string &clientName);
	void setPortName(const std::string &portName);

	//! Returns true if a port is open and false if not.
	/*!
		Note that this only applies to connections made with the openPort()
		function, not to virtual ports.
	*/
	virtual bool isPortOpen(void) const = 0;

	//! Set an error callback function to be invoked when an error has occurred.
	/*!
		The callback function will be called whenever an error has occurred. It
		is best to set the error callback function before opening a port.
	*/
	virtual void setErrorCallback(RtMidiErrorCallback errorCallback = NULL, void *userData = 0) = 0;

protected:
	RtMidi();
	virtual ~RtMidi();
	MidiApi *rtapi_;
};

/**********************************************************************/
/*! \class RtMidiIn
	\brief A realtime MIDI input class.

	This class provides a common, platform-independent API for
	realtime MIDI input. It allows access to a single MIDI input
	port.
    Incoming MIDI messages are either saved to a queue for retrieval
    using the getMessage() function or immediately passed to a
    user-specified callback function.  Create multiple instances of
    this class to connect to more than one MIDI device at the same
    time.  With the OS-X, Linux ALSA, and JACK MIDI APIs, it is also
    possible to open a virtual input port to which other MIDI
    software clients can connect.
*/
/**********************************************************************/

// **************************************************************** //
//
// RtMidiIn and RtMidiOut class declarations.
//
// RtMidiIn / RtMidiOut are "controllers" used to select an available
// MIDI input or output interface.  They present common APIs for the
// user to call but all functionality is implemented by the classes
// MidiInApi, MidiOutApi and their subclasses.  RtMidiIn and RtMidiOut
// each create an instance of a MidiInApi or MidiOutApi subclass based
// on the user's API choice.  If no choice is made, they attempt to
// make a "logical" API selection.
//
// **************************************************************** //

class RTMIDI_DLL_PUBLIC RtMidiIn : public RtMidi {
public:
  //! User callback function type definition.
  typedef void (*RtMidiCallback)(double timeStamp, std::vector<unsigned char> *message, void *userData);

  //! Default constructor that allows an optional api, client name and queue size.
  /*!
    An exception will be thrown if a MIDI system initialization
    error occurs.  The queue size defines the maximum number of
    messages that can be held in the MIDI queue (when not using a
    callback function).  If the queue size limit is reached,
    incoming messages will be ignored.

    If no API argument is specified and multiple API support has been
    compiled, the default order of use is ALSA, JACK (Linux) and CORE,
    JACK (OS-X).

    \param api        An optional API id can be specified.
    \param clientName An optional client name can be specified. This
                      will be used to group the ports that are created
                      by the application.
    \param queueSizeLimit An optional size of the MIDI input queue can be specified.
  */
  RtMidiIn(RtMidi::Api api = UNSPECIFIED, const std::string &clientName = "RtMidi Input Client", unsigned int queueSizeLimit = 100);

  //! If a MIDI connection is still open, it will be closed by the destructor.
  ~RtMidiIn(void) throw();

  //! Returns the MIDI API specifier for the current instance of RtMidiIn.
  RtMidi::Api getCurrentApi(void) throw();

  //! Open a MIDI input connection given by enumeration number.
  /*!
    \param portNumber An optional port number greater than 0 can be specified.
                      Otherwise, the default or first port found is opened.
    \param portName   An optional name for the application port that is used to
                      connect to portId can be specified.
  */
  void openPort(unsigned int portNumber = 0, const std::string &portName = std::string("RtMidi Input"));

  //! Create a virtual input port, with optional name, to allow software connections (OS X, JACK and ALSA only).
  /*!
    This function creates a virtual MIDI input port to which other
    software applications can connect.  This type of functionality
    is currently only supported by the Macintosh OS-X, any JACK,
    and Linux ALSA APIs (the function returns an error for the other APIs).

    \param portName An optional name for the application port that is
                    used to connect to portId can be specified.
  */
  void openVirtualPort(const std::string &portName = std::string("RtMidi Input"));

  //! Set a callback function to be invoked for incoming MIDI messages.
  /*!
    The callback function will be called whenever an incoming MIDI
    message is received.  While not absolutely necessary, it is best
    to set the callback function before opening a MIDI port to avoid
    leaving some messages in the queue.

    \param callback A callback function must be given.
    \param userData Optionally, a pointer to additional data can be
                    passed to the callback function whenever it is called.
  */
  void setCallback(RtMidiCallback callback, void *userData = 0);
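  // Example (editor's sketch, not part of the original header): a typical
  // input setup pairs the constructor above with setCallback().  The names
  // other than the RtMidi API itself are illustrative:
  //
  //   void on_midi(double dt, std::vector<unsigned char> *msg, void *) {
  //     // msg holds one complete MIDI message; dt is the delta-time in seconds
  //   }
  //
  //   RtMidiIn midi_in;
  //   midi_in.setCallback(&on_midi); // set before opening to avoid queued messages
  //   midi_in.openPort(0);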
  //! Cancel use of the current callback function (if one exists).
  /*!
    Subsequent incoming MIDI messages will be written to the queue
    and can be retrieved with the \e getMessage function.
  */
  void cancelCallback();

  //! Close an open MIDI connection (if one exists).
  void closePort(void);

  //! Returns true if a port is open and false if not.
  /*!
    Note that this only applies to connections made with the openPort()
    function, not to virtual ports.
  */
  virtual bool isPortOpen() const;

  //! Return the number of available MIDI input ports.
  /*!
    \return This function returns the number of MIDI ports of the selected API.
  */
  unsigned int getPortCount();

  //! Return a string identifier for the specified MIDI input port number.
  /*!
    \return The name of the port with the given Id is returned.
    \retval An empty string is returned if an invalid port specifier
            is provided. User code should assume a UTF-8 encoding.
  */
  std::string getPortName(unsigned int portNumber = 0);

  //! Specify whether certain MIDI message types should be queued or ignored during input.
  /*!
    By default, MIDI timing and active sensing messages are ignored
    during message input because of their relative high data rates.
    MIDI sysex messages are ignored by default as well.  Variable
    values of "true" imply that the respective message type will be
    ignored.
  */
  void ignoreTypes(bool midiSysex = true, bool midiTime = true, bool midiSense = true);

  //! Fill the user-provided vector with the data bytes for the next available MIDI message in the input queue and return the event delta-time in seconds.
  /*!
    This function returns immediately whether a new message is
    available or not.  A valid message is indicated by a non-zero
    vector size.  An exception is thrown if an error occurs during
    message retrieval or an input connection was not previously
    established.
  */
  double getMessage(std::vector<unsigned char> *message);

  //! Set an error callback function to be invoked when an error has occurred.
  /*!
    The callback function will be called whenever an error has occurred. It is
    best to set the error callback function before opening a port.
  */
  virtual void setErrorCallback(RtMidiErrorCallback errorCallback = NULL, void *userData = 0);

protected:
  void openMidiApi(RtMidi::Api api, const std::string &clientName, unsigned int queueSizeLimit);
};

/**********************************************************************/
/*! \class RtMidiOut
    \brief A realtime MIDI output class.

    This class provides a common, platform-independent API for MIDI
    output.  It allows one to probe available MIDI output ports, to
    connect to one such port, and to send MIDI bytes immediately over
    the connection.  Create multiple instances of this class to
    connect to more than one MIDI device at the same time.  With the
    OS-X, Linux ALSA and JACK MIDI APIs, it is also possible to open
    a virtual port to which other MIDI software clients can connect.
*/
/**********************************************************************/

class RTMIDI_DLL_PUBLIC RtMidiOut : public RtMidi {
public:
  //! Default constructor that allows an optional client name.
  /*!
    An exception will be thrown if a MIDI system initialization error occurs.

    If no API argument is specified and multiple API support has been
    compiled, the default order of use is ALSA, JACK (Linux) and CORE,
    JACK (OS-X).
  */
  RtMidiOut(RtMidi::Api api = UNSPECIFIED, const std::string &clientName = "RtMidi Output Client");

  //! The destructor closes any open MIDI connections.
  ~RtMidiOut(void) throw();

  //! Returns the MIDI API specifier for the current instance of RtMidiOut.
  RtMidi::Api getCurrentApi(void) throw();

  //! Open a MIDI output connection.
  /*!
    An optional port number greater than 0 can be specified.
    Otherwise, the default or first port found is opened.  An
    exception is thrown if an error occurs while attempting to make
    the port connection.
  */
  void openPort(unsigned int portNumber = 0, const std::string &portName = std::string("RtMidi Output"));
  //! Close an open MIDI connection (if one exists).
  void closePort(void);

  //! Returns true if a port is open and false if not.
  /*!
    Note that this only applies to connections made with the openPort()
    function, not to virtual ports.
  */
  virtual bool isPortOpen() const;

  //! Create a virtual output port, with optional name, to allow software connections (OS X, JACK and ALSA only).
  /*!
    This function creates a virtual MIDI output port to which other
    software applications can connect.  This type of functionality is
    currently only supported by the Macintosh OS-X, Linux ALSA and
    JACK APIs (the function does nothing with the other APIs).  An
    exception is thrown if an error occurs while attempting to create
    the virtual port.
  */
  void openVirtualPort(const std::string &portName = std::string("RtMidi Output"));

  //! Return the number of available MIDI output ports.
  unsigned int getPortCount(void);

  //! Return a string identifier for the specified MIDI port type and number.
  /*!
    \return The name of the port with the given Id is returned.
    \retval An empty string is returned if an invalid port specifier
            is provided. User code should assume a UTF-8 encoding.
  */
  std::string getPortName(unsigned int portNumber = 0);

  //! Immediately send a single message out an open MIDI output port.
  /*!
    An exception is thrown if an error occurs during output or an
    output connection was not previously established.
  */
  void sendMessage(const std::vector<unsigned char> *message);

  //! Immediately send a single message out an open MIDI output port.
  /*!
    An exception is thrown if an error occurs during output or an
    output connection was not previously established.

    \param message A pointer to the MIDI message as raw bytes
    \param size    Length of the MIDI message in bytes
  */
  void sendMessage(const unsigned char *message, size_t size);

  //! Set an error callback function to be invoked when an error has occurred.
  /*!
    The callback function will be called whenever an error has occurred.
    It is best to set the error callback function before opening a port.
  */
  virtual void setErrorCallback(RtMidiErrorCallback errorCallback = NULL, void *userData = 0);

protected:
  void openMidiApi(RtMidi::Api api, const std::string &clientName);
};

// **************************************************************** //
//
// MidiInApi / MidiOutApi class declarations.
//
// Subclasses of MidiInApi and MidiOutApi contain all API- and
// OS-specific code necessary to fully implement the RtMidi API.
//
// Note that MidiInApi and MidiOutApi are abstract base classes and
// cannot be explicitly instantiated.  RtMidiIn and RtMidiOut will
// create instances of a MidiInApi or MidiOutApi subclass.
//
// **************************************************************** //

class RTMIDI_DLL_PUBLIC MidiApi {
public:
  MidiApi();
  virtual ~MidiApi();

  virtual RtMidi::Api getCurrentApi(void) = 0;
  virtual void openPort(unsigned int portNumber, const std::string &portName) = 0;
  virtual void openVirtualPort(const std::string &portName) = 0;
  virtual void closePort(void) = 0;
  virtual void setClientName(const std::string &clientName) = 0;
  virtual void setPortName(const std::string &portName) = 0;

  virtual unsigned int getPortCount(void) = 0;
  virtual std::string getPortName(unsigned int portNumber) = 0;

  inline bool isPortOpen() const { return connected_; }
  void setErrorCallback(RtMidiErrorCallback errorCallback, void *userData);

  //! A basic error reporting function for RtMidi classes.
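  // Example (editor's sketch, not part of the original header): installing an
  // error callback on an RtMidi object.  The handler name is illustrative:
  //
  //   void on_error(RtMidiError::Type type, const std::string &text, void *) {
  //     // behaviour is undefined after a critical (non-warning) error,
  //     // so log the message and stop using the object here
  //   }
  //
  //   RtMidiOut midi_out;
  //   midi_out.setErrorCallback(&on_error); // set before opening a port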
  void error(RtMidiError::Type type, std::string errorString);

protected:
  virtual void initialize(const std::string &clientName) = 0;

  void *apiData_;
  bool connected_;
  std::string errorString_;
  RtMidiErrorCallback errorCallback_;
  bool firstErrorOccurred_;
  void *errorCallbackUserData_;
};

class RTMIDI_DLL_PUBLIC MidiInApi : public MidiApi {
public:
  MidiInApi(unsigned int queueSizeLimit);
  virtual ~MidiInApi(void);

  void setCallback(RtMidiIn::RtMidiCallback callback, void *userData);
  void cancelCallback(void);
  virtual void ignoreTypes(bool midiSysex, bool midiTime, bool midiSense);
  double getMessage(std::vector<unsigned char> *message);

  // A MIDI structure used internally by the class to store incoming
  // messages.  Each message represents one and only one MIDI message.
  struct MidiMessage {
    std::vector<unsigned char> bytes;

    //! Time in seconds elapsed since the previous message
    double timeStamp;

    // Default constructor.
    MidiMessage() :
        bytes(0), timeStamp(0.0) {}
  };

  struct MidiQueue {
    unsigned int front;
    unsigned int back;
    unsigned int ringSize;
    MidiMessage *ring;

    // Default constructor.
    MidiQueue() :
        front(0), back(0), ringSize(0), ring(0) {}
    bool push(const MidiMessage &);
    bool pop(std::vector<unsigned char> *, double *);
    unsigned int size(unsigned int *back = 0, unsigned int *front = 0);
  };

  // The RtMidiInData structure is used to pass private class data to
  // the MIDI input handling function or thread.
  struct RtMidiInData {
    MidiQueue queue;
    MidiMessage message;
    unsigned char ignoreFlags;
    bool doInput;
    bool firstMessage;
    void *apiData;
    bool usingCallback;
    RtMidiIn::RtMidiCallback userCallback;
    void *userData;
    bool continueSysex;

    // Default constructor.
    RtMidiInData() :
        ignoreFlags(7), doInput(false), firstMessage(true), apiData(0), usingCallback(false), userCallback(0), userData(0), continueSysex(false) {}
  };

protected:
  RtMidiInData inputData_;
};

class RTMIDI_DLL_PUBLIC MidiOutApi : public MidiApi {
public:
  MidiOutApi(void);
  virtual ~MidiOutApi(void);

  virtual void sendMessage(const unsigned char *message, size_t size) = 0;
};

// **************************************************************** //
//
// Inline RtMidiIn and RtMidiOut definitions.
//
// **************************************************************** //

inline RtMidi::Api RtMidiIn ::getCurrentApi(void) throw() { return rtapi_->getCurrentApi(); }
inline void RtMidiIn ::openPort(unsigned int portNumber, const std::string &portName) { rtapi_->openPort(portNumber, portName); }
inline void RtMidiIn ::openVirtualPort(const std::string &portName) { rtapi_->openVirtualPort(portName); }
inline void RtMidiIn ::closePort(void) { rtapi_->closePort(); }
inline bool RtMidiIn ::isPortOpen() const { return rtapi_->isPortOpen(); }
inline void RtMidiIn ::setCallback(RtMidiCallback callback, void *userData) { static_cast<MidiInApi *>(rtapi_)->setCallback(callback, userData); }
inline void RtMidiIn ::cancelCallback(void) { static_cast<MidiInApi *>(rtapi_)->cancelCallback(); }
inline unsigned int RtMidiIn ::getPortCount(void) { return rtapi_->getPortCount(); }
inline std::string RtMidiIn ::getPortName(unsigned int portNumber) { return rtapi_->getPortName(portNumber); }
inline void RtMidiIn ::ignoreTypes(bool midiSysex, bool midiTime, bool midiSense) { static_cast<MidiInApi *>(rtapi_)->ignoreTypes(midiSysex, midiTime, midiSense); }
inline double RtMidiIn ::getMessage(std::vector<unsigned char> *message) { return static_cast<MidiInApi *>(rtapi_)->getMessage(message); }
inline void RtMidiIn ::setErrorCallback(RtMidiErrorCallback errorCallback, void *userData) { rtapi_->setErrorCallback(errorCallback, userData); }

inline RtMidi::Api RtMidiOut ::getCurrentApi(void) throw() { return rtapi_->getCurrentApi(); }
inline void RtMidiOut ::openPort(unsigned int portNumber, const std::string &portName) { rtapi_->openPort(portNumber, portName); }
inline void RtMidiOut ::openVirtualPort(const std::string &portName) { rtapi_->openVirtualPort(portName); }
inline void RtMidiOut ::closePort(void) { rtapi_->closePort(); }
inline bool RtMidiOut ::isPortOpen() const { return rtapi_->isPortOpen(); }
inline unsigned int RtMidiOut ::getPortCount(void) { return rtapi_->getPortCount(); }
inline std::string RtMidiOut ::getPortName(unsigned int portNumber) { return rtapi_->getPortName(portNumber); }
inline void RtMidiOut ::sendMessage(const std::vector<unsigned char> *message) { static_cast<MidiOutApi *>(rtapi_)->sendMessage(&message->at(0), message->size()); }
inline void RtMidiOut ::sendMessage(const unsigned char *message, size_t size) { static_cast<MidiOutApi *>(rtapi_)->sendMessage(message, size); }
inline void RtMidiOut ::setErrorCallback(RtMidiErrorCallback errorCallback, void *userData) { rtapi_->setErrorCallback(errorCallback, userData); }

#endif

// File: zytrax-master/drivers/vst2/audio_effect_provider_vst2.cpp

#include "audio_effect_provider_vst2.h"
#include "vestige.h"
#include

/////////////////////////////////////////////

#ifdef WINDOWS_ENABLED

AEffect *AudioEffectProviderVST2::open_vst_from_lib_handle(HINSTANCE libhandle, audioMasterCallback p_master_callback) {

	if (libhandle == NULL) {
		//printf("invalid file: %s\n",lib_name);
		return NULL;
	}

	AEffect *(__cdecl * getNewPlugInstance)(audioMasterCallback);
	getNewPlugInstance = (AEffect * (__cdecl *)(audioMasterCallback)) GetProcAddress(libhandle, "VSTPluginMain");
	if (!getNewPlugInstance) {
		getNewPlugInstance = (AEffect * (__cdecl *)(audioMasterCallback)) GetProcAddress(libhandle, "main");
	}

	if (getNewPlugInstance == NULL) {
		FreeLibrary(libhandle);
		//WARN_PRINT("Can't find symbol 'main'");
		return NULL;
	}

	return
	getNewPlugInstance(p_master_callback);
}

#else

AEffect *AudioEffectProviderVST2::open_vst_from_lib_handle(void *libhandle, audioMasterCallback p_master_callback) {

	if (libhandle == NULL) {
		//printf("invalid file: %s\n",lib_name);
		return NULL;
	}

	AEffect *(*getNewPlugInstance)(audioMasterCallback);
	getNewPlugInstance = (AEffect * (*)(audioMasterCallback)) dlsym(libhandle, "VSTPluginMain");
	if (!getNewPlugInstance) {
		getNewPlugInstance = (AEffect * (*)(audioMasterCallback)) dlsym(libhandle, "main");
	}

	if (getNewPlugInstance == NULL) {
		dlclose(libhandle);
		//WARN_PRINT("Can't find symbol 'main'");
		return NULL;
	}

	return getNewPlugInstance(p_master_callback);
}

#endif

String AudioEffectProviderVST2::get_id() const {
	return "VST2";
}

AudioEffect *AudioEffectProviderVST2::instantiate_effect(const AudioEffectInfo *p_info) {
	AudioEffectVST2 *fx_vst2 = new AudioEffectVST2;
	if (fx_vst2->open(p_info->path, p_info->unique_ID, p_info->caption, p_info->provider_id) != OK) {
		delete fx_vst2; //don't leak the effect if open() fails
		return NULL;
	}
	return fx_vst2;
}

void AudioEffectProviderVST2::scan_effects(AudioEffectFactory *p_factory, ScanCallback p_callback, void *p_userdata) {

	for (int i = 0; i < MAX_SCAN_PATHS; i++) {
		String p = get_scan_path(i).strip_edges();
		if (p == String()) {
			continue;
		}
		printf("scanning path: %s\n", p.utf8().get_data());
#ifdef WINDOWS_ENABLED
		_WDIR *dir;
		struct _wdirent *dirent;
		dir = _wopendir(p.c_str());
#else
		DIR *dir;
		struct dirent *dirent;
		dir = opendir(p.utf8().get_data());
#endif
		if (dir == NULL) {
			printf("failed?\n");
			return;
		}
		printf("opened dir\n");
#ifdef WINDOWS_ENABLED
		while ((dirent = _wreaddir(dir))) {
#else
		while ((dirent = readdir(dir))) {
#endif
			String lib_name = p + "/" + String(dirent->d_name);
#ifdef WINDOWS_ENABLED
			if (lib_name.get_extension() != "dll") {
#elif defined(OSX_ENABLED)
			if (lib_name.get_extension() != "dylib") {
#else
			if (lib_name.get_extension() != "so") {
#endif
				continue;
			}
			printf("opening plugin: %s\n", lib_name.utf8().get_data());
#ifdef WINDOWS_ENABLED
			HINSTANCE libhandle
					= LoadLibraryW(lib_name.c_str());
			AEffect *ptrPlug = open_vst_from_lib_handle(libhandle, &host);
#else
			void *libhandle = dlopen(lib_name.utf8().get_data(), RTLD_LOCAL | RTLD_LAZY);
			if (!libhandle) {
				continue;
			}
			AEffect *ptrPlug = open_vst_from_lib_handle(libhandle, &host);
#endif
			if (ptrPlug == NULL) {
#ifdef WINDOWS_ENABLED
				FreeLibrary(libhandle);
#else
				dlclose(libhandle);
#endif
				continue;
			}

			if (ptrPlug->magic != kEffectMagic) {
#ifdef WINDOWS_ENABLED
				FreeLibrary(libhandle);
#else
				dlclose(libhandle);
#endif
				continue;
			}

			ptrPlug->dispatcher(ptrPlug, effOpen, 0, 0, NULL, 0.0f);

			if (ptrPlug->numOutputs >= 2) {
				//needs to have at least 2 outputs
				AudioEffectInfo info;

				String name = dirent->d_name;
				name = name.substr(0, name.find("."));
				info.caption = name;
				printf("plugin name: %s\n", info.caption.utf8().get_data());
				info.description = "VST Info:\n Name: " + info.caption + "\n ID: " + String::num(ptrPlug->uniqueID) + "\n Version: " + String(ptrPlug->version);
				info.unique_ID = "VST_" + String::num(ptrPlug->uniqueID);
				info.synth = /*(ptrPlug->dispatcher(ptrPlug,effGetVstVersion,0,0,NULL,0.0f)==2 */ ptrPlug->flags & effFlagsIsSynth;
				info.category = info.synth ? "VST Instruments" : "VST Effects";
				info.has_ui = (ptrPlug->flags & effFlagsHasEditor);
				info.provider_caption = "VST2";
				info.version = String::num(ptrPlug->version);
				info.provider_id = get_id();
				info.path = lib_name;

				if (ptrPlug->flags & effFlagsProgramChunks) {
					info.description += " (CS)";
				}

				/* Perform the "write only" test */
				//plugin_data->write_only=true; //i cant really be certain of anything with VST plugins, so this is always true
				/*
				if (ptrPlug->numParams) {
					ptrPlug->setParameter(ptrPlug,0,1.0); //set 1.0
					float res=ptrPlug->getParameter(ptrPlug,0);
					if (res<0.8) {
						//try if it's not near 1.0, with some threshold, then no reading (far most of the ones that dont support this will just return 0)
					}
				}
				*/

				p_factory->add_audio_effect(info);
				if (p_callback) {
					p_callback(info.caption, p_userdata);
				}
			}

			ptrPlug->dispatcher(ptrPlug, effClose, 0, 0, NULL, 0.0f);

#ifdef WINDOWS_ENABLED
			FreeLibrary(libhandle);
#else
			dlclose(libhandle);
#endif
		}
#ifdef WINDOWS_ENABLED
		_wclosedir(dir); //close the directory handle, which was previously leaked
#else
		closedir(dir);
#endif
	}
}

intptr_t VESTIGECALLBACK AudioEffectProviderVST2::host(AEffect *effect, int32_t opcode, int32_t index, intptr_t value, void *ptr, float opt) {

	long retval = 0;

	//simple host for exploring plugin
	switch (opcode) {
		//VST 1.0 opcodes
		case audioMasterVersion:
			//Input values:
			//none
			//Return Value:
			//0 or 1 for old version
			//2 or higher for VST2.0 host?
			retval = 2;
			break;
		case audioMasterGetSampleRate:
			effect->dispatcher(effect, effSetSampleRate, 0, 0, NULL, 44100); //just crap
			break;
		case audioMasterGetBlockSize:
			//Input Values:
			//None
			//Return Value:
			//not tested, always return 0
			//NB - Host must despatch effSetBlockSize to the plug in response
			//to this call
			//Check despatcher notes for any return codes from effSetBlockSize
			effect->dispatcher(effect, effSetBlockSize, 0, 1024, NULL, 0.0f);
			break;
		case audioMasterCanDo:
			//Input Values:
			// predefined "canDo" string
			//Return Value:
			//0 = Not Supported
			//non-zero value if host supports that feature
			//NB - Possible Can Do strings are:
			//"sendVstEvents",
			//"sendVstMidiEvent",
			//"sendVstTimeInfo",
			//"receiveVstEvents",
			//"receiveVstMidiEvent",
			//"receiveVstTimeInfo",
			//"reportConnectionChanges",
			//"acceptIOChanges",
			//"sizeWindow",
			//"asyncProcessing",
			//"offline",
			//"supplyIdle",
			//"supportShell"
			if (strcmp((char *)ptr, "supplyIdle") == 0 || strcmp((char *)ptr, "sendVstTimeInfo") == 0 || strcmp((char *)ptr, "sendVstEvents") == 0 || strcmp((char *)ptr, "sendVstMidiEvent") == 0 || strcmp((char *)ptr, "sizeWindow") == 0) {
				retval = 1;
			} else {
				retval = 0;
			}
			break;
		case audioMasterGetLanguage:
			//Input Values:
			//None
			//Return Value:
			//kVstLangEnglish
			//kVstLangGerman
			//kVstLangFrench
			//kVstLangItalian
			//kVstLangSpanish
			//kVstLangJapanese
			retval = kVstLangEnglish;
			break;
	}

	return retval;
}

String AudioEffectProviderVST2::get_name() const {
	return "VST2";
}

AudioEffectProviderVST2 *AudioEffectProviderVST2::singleton = NULL;

AudioEffectProviderVST2::AudioEffectProviderVST2() {
	//paths = "C:\\Program Files\\Synister64";
	//paths = "C:\\Program Files\\Common Files\\VST2\\SonicCat";
	singleton = this;
}

AudioEffectProviderVST2::~AudioEffectProviderVST2() {
}

// File: zytrax-master/drivers/vst2/audio_effect_provider_vst2.h

#ifndef AUDIOEFFECTFACTORYVST_H
#define
AUDIOEFFECTFACTORYVST_H

#include "drivers/vst2/audio_effect_vst2.h"

class AudioEffectProviderVST2 : public AudioEffectProvider {

	static intptr_t VESTIGECALLBACK host(AEffect *effect, int32_t opcode, int32_t index, intptr_t value, void *ptr, float opt);
	friend class AudioEffectVST2;

#ifdef WINDOWS_ENABLED
	static AEffect *open_vst_from_lib_handle(HINSTANCE libhandle, audioMasterCallback p_master_callback);
#else
	static AEffect *open_vst_from_lib_handle(void *libhandle, audioMasterCallback p_master_callback);
#endif

public:
	static AudioEffectProviderVST2 *singleton;

	virtual String get_name() const;
	virtual String get_id() const;

	virtual AudioEffect *instantiate_effect(const AudioEffectInfo *p_info);
	virtual void scan_effects(AudioEffectFactory *p_factory, ScanCallback p_callback, void *p_userdata);

	AudioEffectProviderVST2();
	~AudioEffectProviderVST2();
};

#endif // AUDIOEFFECTFACTORYVST_H

// File: zytrax-master/drivers/vst2/audio_effect_vst2.cpp

#include "audio_effect_vst2.h"
#include "audio_effect_provider_vst2.h"
#include "base64.h"

int AudioEffectVST2::_get_internal_control_port_count() const {
	return control_ports.size();
}

ControlPort *AudioEffectVST2::_get_internal_control_port(int p_index) {
	return &control_ports[p_index];
}

bool AudioEffectVST2::has_secondary_input() const {
	return has_side_input;
}

void AudioEffectVST2::_process(const Event *p_events, int p_event_count) {

	if (effect->flags & effFlagsIsSynth) {
		//pass time to midi
		float time = float(process_block_size) / sampling_rate;
		//convert input events to actual MIDI events
		int midi_event_count;
		const MIDIEventStamped *midi_events = _process_midi_events(p_events, p_event_count, time, midi_event_count);

		event_pointers->numEvents = 0;

		if (stop_all_notes) {
			for (int i = 0; i < 128; i++) { //all 128 notes
				//send noteoff for every channel
				VstMidiEvent &vstem = event_array[event_pointers->numEvents];
				vstem.deltaFrames = 0;
				vstem.midiData[0] = 0x80 | last_midi_channel;
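				//Note (editor's sketch): a raw MIDI status byte combines a 4-bit
				//message type with a 4-bit channel number, e.g.:
				//  0x80 | ch = note off    0x90 | ch = note on
				//  0xB0 | ch = controller  0xE0 | ch = pitch bend
				//so the 0x80 | last_midi_channel above targets the channel the
				//last note-on went out on.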
				vstem.midiData[1] = i;
				vstem.midiData[2] = 127;
				vstem.midiData[3] = 0;
				event_pointers->numEvents++;
			}

			{
				//send all notes off
				VstMidiEvent &vstem = event_array[event_pointers->numEvents];
				vstem.midiData[0] = 0xB0 | last_midi_channel;
				vstem.midiData[1] = 0x7B; //all notes off
				vstem.midiData[2] = 0;
				vstem.midiData[3] = 0;
				vstem.deltaFrames = 0;
				event_pointers->numEvents++;
			}

			{
				//send reset controllers
				VstMidiEvent &vstem = event_array[event_pointers->numEvents];
				vstem.midiData[0] = 0xB0 | last_midi_channel;
				vstem.midiData[1] = 0x79; //reset all controllers
				vstem.midiData[2] = 0;
				vstem.midiData[3] = 0;
				vstem.deltaFrames = 0;
				event_pointers->numEvents++;
			}

			stop_all_notes = false;
		}

		for (int i = 0; i < midi_event_count; i++) {

			if (event_pointers->numEvents == MAX_INPUT_EVENTS) {
				break;
			}

			const MIDIEvent &ev = midi_events[i].event;
			int frame_offset = midi_events[i].frame;

			VstMidiEvent &vstem = event_array[event_pointers->numEvents];
			vstem.deltaFrames = frame_offset;

			switch (ev.type) {
				case MIDIEvent::MIDI_NOTE_ON: {
					vstem.midiData[0] = 0x90 | ev.channel; //channel 0?
					vstem.midiData[1] = ev.note.note;
					vstem.midiData[2] = ev.note.velocity;
					vstem.midiData[3] = 0;
					last_midi_channel = ev.channel; //remember for note off?
				} break;
				case MIDIEvent::MIDI_NOTE_OFF: {
					vstem.midiData[0] = 0x80 | ev.channel; //channel 0?
					vstem.midiData[1] = ev.note.note;
					vstem.midiData[2] = ev.note.velocity;
					vstem.midiData[3] = 0;
				} break;
				case MIDIEvent::MIDI_CONTROLLER: {
					vstem.midiData[0] = 0xB0 | ev.channel; //channel 0?
					vstem.midiData[1] = ev.control.index;
					vstem.midiData[2] = ev.control.parameter;
					vstem.midiData[3] = 0;
				} break;
				case MIDIEvent::MIDI_PITCH_BEND: {
					vstem.midiData[0] = 0xE0 | ev.channel; //channel 0?
					vstem.midiData[1] = ev.pitch_bend.bend & 0x7F;
					vstem.midiData[2] = ev.pitch_bend.bend >> 7;
					vstem.midiData[3] = 0;
				} break;
				case MIDIEvent::MIDI_AFTERTOUCH: {
					vstem.midiData[0] = 0xD0 | ev.channel; //channel 0?
					vstem.midiData[1] = ev.aftertouch.pressure;
					vstem.midiData[2] = 0;
					vstem.midiData[3] = 0;
				} break;
				case MIDIEvent::MIDI_NOTE_PRESSURE: {
					vstem.midiData[0] = 0xA0 | ev.channel; //channel 0?
					vstem.midiData[1] = ev.note.note;
					vstem.midiData[2] = ev.note.velocity;
					vstem.midiData[3] = 0;
				} break;
				case MIDIEvent::MIDI_PATCH_SELECT: {
					vstem.midiData[0] = 0xC0 | ev.channel; //channel 0?
					vstem.midiData[1] = ev.note.note;
					vstem.midiData[2] = ev.note.velocity;
					vstem.midiData[3] = 0;
				} break;
				default: {
					//unhandled, dont add
					continue;
				}
			}

			event_pointers->numEvents++;
		}

		effect->dispatcher(effect, effProcessEvents, 0, 0, event_pointers, 0.0f);
	}

	float **in_buffer_ptrs = in_buffers.size() ? &in_buffers[0] : 0;
	float **out_buffer_ptrs = out_buffers.size() ? &out_buffers[0] : 0;

	effect->processReplacing(effect, in_buffer_ptrs, out_buffer_ptrs, process_block_size);
}

void AudioEffectVST2::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) {

	if (in_buffers.size() >= 2) {
		float *in_l = in_buffers[0];
		float *in_r = in_buffers[1];
		for (int i = 0; i < process_block_size; i++) {
			in_l[i] = p_in[i].l;
			in_r[i] = p_in[i].r;
		}
	}

	_process(p_events, p_event_count);

	float *out_l = out_buffers[0];
	float *out_r = out_buffers[1];

	for (int i = 0; i < process_block_size; i++) {
		p_out[i].l = out_l[i];
		p_out[i].r = out_r[i];
	}
}

void AudioEffectVST2::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) {

	if (in_buffers.size() >= 4) {
		float *in_l = in_buffers[0];
		float *in_r = in_buffers[1];
		float *sec_l = in_buffers[2];
		float *sec_r = in_buffers[3];
		for (int i = 0; i < process_block_size; i++) {
			in_l[i] = p_in[i].l;
			in_r[i] = p_in[i].r;
			sec_l[i] = p_secondary[i].l;
			sec_r[i] = p_secondary[i].r;
		}
	}

	_process(p_events, p_event_count);

	float *out_l = out_buffers[0];
	float *out_r = out_buffers[1];

	for (int i = 0; i < process_block_size; i++) {
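		//Note (editor's sketch): VST2 works on planar audio (one float buffer
		//per channel), while the host passes interleaved AudioFrame pairs;
		//these copy loops convert between the two layouts, with the secondary
		//(sidechain) input occupying channels 2 and 3 of in_buffers.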
		p_out[i].l = out_l[i];
		p_out[i].r = out_r[i];
	}
}

void AudioEffectVST2::_clear_buffers() {

	for (int i = 0; i < in_buffers.size(); i++) {
		delete[] in_buffers[i];
	}
	in_buffers.clear();

	for (int i = 0; i < out_buffers.size(); i++) {
		delete[] out_buffers[i];
	}
	out_buffers.clear();
}

void AudioEffectVST2::_update_buffers() {

	_clear_buffers();

	in_buffers.resize(effect->numInputs);
	for (int i = 0; i < effect->numInputs; i++) {
		in_buffers[i] = new float[process_block_size];
	}

	out_buffers.resize(effect->numOutputs);
	for (int i = 0; i < effect->numOutputs; i++) {
		out_buffers[i] = new float[process_block_size];
	}

	has_side_input = in_buffers.size() == 4 && out_buffers.size() >= 2;
}

void AudioEffectVST2::set_process_block_size(int p_size) {

	if (process_block_size == p_size) {
		return;
	}

	process_block_size = p_size;
	effect->dispatcher(effect, effMainsChanged, 0, 0, NULL, 0.0f);
	effect->dispatcher(effect, effSetBlockSize, 0, process_block_size, NULL, 0.0f);
	effect->dispatcher(effect, effMainsChanged, 0, 1, NULL, 0.0f);
	_update_buffers();
}

void AudioEffectVST2::set_sampling_rate(int p_hz) {

	if (sampling_rate == p_hz) {
		return;
	}
	sampling_rate = p_hz;

	effect->dispatcher(effect, effMainsChanged, 0, 0, NULL, 0.0f);
	effect->dispatcher(effect, effSetSampleRate, 0, 0, NULL, sampling_rate);
	effect->dispatcher(effect, effMainsChanged, 0, 1, NULL, 0.0f);
}

String AudioEffectVST2::get_name() const {
	return name;
}
String AudioEffectVST2::get_unique_id() const {
	return unique_id;
}
String AudioEffectVST2::get_provider_id() const {
	return provider_id;
}

void AudioEffectVST2::reset() {

	//attempt to reset the plugin in any way possible, since it's not clear in VST how to do it
	effect->dispatcher(effect, effMainsChanged, 0, 0, NULL, 0.0f);
	//reset preset.. this maybe helps..
	int current_preset = effect->dispatcher(effect, effGetProgram, 0, 0, 0, 0.0f);
	effect->dispatcher(effect, effSetProgram, 0, current_preset, NULL, 0.0f);
	//switch the plugin back on (calls Resume)
	effect->dispatcher(effect, effMainsChanged, 0, 1, NULL, 0.0f);

	if (effect->flags & effFlagsIsSynth) {
		stop_all_notes = true;
		_reset_midi();
	}
}

bool AudioEffectVST2::has_user_interface() const {
	return (effect->flags & effFlagsHasEditor);
}

void AudioEffectVST2::get_user_interface_size(int &r_width, int &r_height) {

	ERect *rect = NULL;
	effect->dispatcher(effect, effEditGetRect, 0, 0, &rect, 0);
	ERR_FAIL_COND(!rect);
	r_width = rect->right - rect->left;
	r_height = rect->bottom - rect->top;
}

void AudioEffectVST2::process_user_interface() {
	effect->dispatcher(effect, effEditIdle, 0, 0, NULL, 0);
}

#ifdef WINDOWS_ENABLED
void AudioEffectVST2::open_user_interface(void *p_window_ptr) {
	effect->dispatcher(effect, effEditOpen, 0, 0, p_window_ptr, 0);
}
#else
void AudioEffectVST2::open_user_interface(long p_longint, void *p_window_ptr) {
	effect->dispatcher(effect, effEditOpen, 0, p_longint, p_window_ptr, 0);
}
#endif

void AudioEffectVST2::resize_user_interface(int p_width, int p_height) {
	//fst->amc (fst->plugin, 15 /*audioMasterSizeWindow */, width, height, NULL, 0);
	effect->dispatcher(effect, audioMasterSizeWindow, p_width, p_height, NULL, 0);
}

void AudioEffectVST2::close_user_interface() {
	effect->dispatcher(effect, effEditClose, 0, 0, NULL, 0);
}

#if 1
#define DEBUG_CALLBACK(m_text)
#else
#define DEBUG_CALLBACK(m_text) printf("VST Callback: %s\n", m_text)
#endif

intptr_t VESTIGECALLBACK AudioEffectVST2::host(AEffect *effect, int32_t opcode, int32_t index, intptr_t value, void *ptr, float opt) {

	//simple host for exploring plugin
	AudioEffectVST2 *vst_effect = effect ?
(AudioEffectVST2 *)effect->resvd1 : 0; switch (opcode) { //VST 1.0 opcodes case audioMasterAutomate: DEBUG_CALLBACK("audioMasterAutomate"); // index, value, returns 0 if (vst_effect) { if (index >= 0 && index < vst_effect->control_ports.size() && !vst_effect->control_ports[index].setting) { vst_effect->control_ports[index].ui_changed_notify(); } //update automation on host //plug->parameter_changed_externally (index, opt); } return 0; case audioMasterVersion: DEBUG_CALLBACK("audioMasterVersion"); // vst version, currently 2 (0 for older) return 2400; case audioMasterCurrentId: DEBUG_CALLBACK("audioMasterCurrentId"); // returns the unique id of a plug that's currently loading return 0; case audioMasterIdle: DEBUG_CALLBACK("audioMasterIdle"); #ifdef WINDOWS_VST_SUPPORT fst_audio_master_idle(); #endif if (vst_effect) { vst_effect->process_user_interface(); } return 0; case audioMasterWantMidi: DEBUG_CALLBACK("audioMasterWantMidi"); return 1; case audioMasterGetTime: return (intptr_t)NULL; //no time #if 0 DEBUG_CALLBACK ("audioMasterGetTime"); newflags = kVstNanosValid | kVstAutomationWriting | kVstAutomationReading; timeinfo->nanoSeconds = g_get_monotonic_time () * 1000; if (plug && session) { samplepos_t now = plug->transport_sample(); timeinfo->samplePos = now; timeinfo->sampleRate = session->sample_rate(); if (value & (kVstTempoValid)) { const Tempo& t (session->tempo_map().tempo_at_sample (now)); timeinfo->tempo = t.quarter_notes_per_minute (); newflags |= (kVstTempoValid); } if (value & (kVstTimeSigValid)) { const MeterSection& ms (session->tempo_map().meter_section_at_sample (now)); timeinfo->timeSigNumerator = ms.divisions_per_bar (); timeinfo->timeSigDenominator = ms.note_divisor (); newflags |= (kVstTimeSigValid); } if ((value & (kVstPpqPosValid)) || (value & (kVstBarsValid))) { Timecode::BBT_Time bbt; try { bbt = session->tempo_map().bbt_at_sample_rt (now); bbt.beats = 1; bbt.ticks = 0; /* exact quarter note */ double ppqBar = 
session->tempo_map().quarter_note_at_bbt_rt (bbt); /* quarter note at sample position (not rounded to note subdivision) */ double ppqPos = session->tempo_map().quarter_note_at_sample_rt (now); if (value & (kVstPpqPosValid)) { timeinfo->ppqPos = ppqPos; newflags |= kVstPpqPosValid; } if (value & (kVstBarsValid)) { timeinfo->barStartPos = ppqBar; newflags |= kVstBarsValid; } } catch (...) { /* relax */ } } if (value & (kVstSmpteValid)) { Timecode::Time t; session->timecode_time (now, t); timeinfo->smpteOffset = (t.hours * t.rate * 60.0 * 60.0) + (t.minutes * t.rate * 60.0) + (t.seconds * t.rate) + (t.frames) + (t.subframes); timeinfo->smpteOffset *= 80.0; /* VST spec is 1/80th samples */ if (session->timecode_drop_frames()) { if (session->timecode_frames_per_second() == 30.0) { timeinfo->smpteFrameRate = 5; } else { timeinfo->smpteFrameRate = 4; /* 29.97 assumed, thanks VST */ } } else { if (session->timecode_frames_per_second() == 24.0) { timeinfo->smpteFrameRate = 0; } else if (session->timecode_frames_per_second() == 24.975) { timeinfo->smpteFrameRate = 2; } else if (session->timecode_frames_per_second() == 25.0) { timeinfo->smpteFrameRate = 1; } else { timeinfo->smpteFrameRate = 3; /* 30 fps */ } } newflags |= (kVstSmpteValid); } if (session->actively_recording ()) { newflags |= kVstTransportRecording; } if (plug->transport_speed () != 0.0f) { newflags |= kVstTransportPlaying; } if (session->get_play_loop ()) { newflags |= kVstTransportCycleActive; Location * looploc = session->locations ()->auto_loop_location (); if (looploc) try { timeinfo->cycleStartPos = session->tempo_map ().quarter_note_at_sample_rt (looploc->start ()); timeinfo->cycleEndPos = session->tempo_map ().quarter_note_at_sample_rt (looploc->end ()); newflags |= kVstCyclePosValid; } catch (...) 
{ } } } else { timeinfo->samplePos = 0; timeinfo->sampleRate = AudioEngine::instance()->sample_rate(); } if ((timeinfo->flags & (kVstTransportPlaying | kVstTransportRecording | kVstTransportCycleActive)) != (newflags & (kVstTransportPlaying | kVstTransportRecording | kVstTransportCycleActive))) { newflags |= kVstTransportChanged; } timeinfo->flags = newflags; return (intptr_t) timeinfo; #endif case audioMasterProcessEvents: DEBUG_CALLBACK("audioMasterProcessEvents"); #if 0 // VstEvents* in if (plug && plug->midi_buffer()) { VstEvents* v = (VstEvents*)ptr; for (int n = 0 ; n < v->numEvents; ++n) { VstMidiEvent *vme = (VstMidiEvent*) (v->events[n]->dump); if (vme->type == kVstMidiType) { plug->midi_buffer()->push_back(vme->deltaSamples, 3, (uint8_t*)vme->midiData); } } } #endif return 0; #if 0 case audioMasterSetTime: DEBUG_CALLBACK ("audioMasterSetTime"); // VstTimenfo* in , filter in , not supported return 0; case audioMasterTempoAt: DEBUG_CALLBACK ("audioMasterTempoAt"); // returns tempo (in bpm * 10000) at sample sample location passed in if (session) { const Tempo& t (session->tempo_map().tempo_at_sample (value)); return t.quarter_notes_per_minute() * 1000; } else { return 0; } break; case audioMasterGetNumAutomatableParameters: DEBUG_CALLBACK ("audioMasterGetNumAutomatableParameters"); return 0; case audioMasterGetParameterQuantization: DEBUG_CALLBACK ("audioMasterGetParameterQuantization"); // returns the integer value for +1.0 representation, // or 1 if full single float precision is maintained // in automation. 
// parameter index in (-1: all, any)
	return 0;
#endif
	case audioMasterIOChanged:
		DEBUG_CALLBACK("audioMasterIOChanged");
		// numInputs and/or numOutputs has changed
		return 0;
#if 0
	case audioMasterNeedIdle:
		DEBUG_CALLBACK ("audioMasterNeedIdle");
		// plug needs idle calls (outside its editor window)
		if (plug) {
			plug->state()->wantIdle = 1;
		}
		return 0;
#endif
	case audioMasterSizeWindow:
		DEBUG_CALLBACK("audioMasterSizeWindow");
		if (vst_effect) {
			int w = index;
			int h = value;
			if (vst_effect->resize_callback) {
				vst_effect->resize_callback(vst_effect->resize_userdata, w, h);
			}
			return 1;
		}
		return 0; //no host effect, not handled
	case audioMasterGetSampleRate:
		DEBUG_CALLBACK("audioMasterGetSampleRate");
		if (vst_effect) {
			return vst_effect->sampling_rate;
		}
		return 0;
	case audioMasterGetBlockSize:
		DEBUG_CALLBACK("audioMasterGetBlockSize");
		if (vst_effect) {
			return vst_effect->process_block_size;
		}
		return 0;
	case audioMasterGetInputLatency:
		DEBUG_CALLBACK("audioMasterGetInputLatency");
		return 0;
	case audioMasterGetOutputLatency:
		DEBUG_CALLBACK("audioMasterGetOutputLatency");
		return 0;
#if 0
	case audioMasterGetPreviousPlug:
		DEBUG_CALLBACK ("audioMasterGetPreviousPlug");
		// input pin in (-1: first to come), returns cEffect*
		return 0;
	case audioMasterGetNextPlug:
		DEBUG_CALLBACK ("audioMasterGetNextPlug");
		// output pin in (-1: first to come), returns cEffect*
		return 0;
	case audioMasterWillReplaceOrAccumulate:
		DEBUG_CALLBACK ("audioMasterWillReplaceOrAccumulate");
		// returns: 0: not supported, 1: replace, 2: accumulate
		return 0;
#endif
	case audioMasterGetCurrentProcessLevel:
		DEBUG_CALLBACK("audioMasterGetCurrentProcessLevel");
		// returns: 0: not supported,
		// 1: currently in user thread (gui)
		// 2: currently in audio thread (where process is called)
		// 3: currently in 'sequencer' thread (midi, timer etc)
		// 4: currently offline processing and thus in user thread
		// other: not defined, but probably pre-empting user thread.
return 0; case audioMasterGetAutomationState: DEBUG_CALLBACK("audioMasterGetAutomationState"); // returns 0: not supported, 1: off, 2:read, 3:write, 4:read/write // offline return 0; case audioMasterOfflineStart: DEBUG_CALLBACK("audioMasterOfflineStart"); return 0; case audioMasterOfflineRead: DEBUG_CALLBACK("audioMasterOfflineRead"); // ptr points to offline structure, see below. return 0: error, 1 ok return 0; case audioMasterOfflineWrite: DEBUG_CALLBACK("audioMasterOfflineWrite"); // same as read return 0; case audioMasterOfflineGetCurrentPass: DEBUG_CALLBACK("audioMasterOfflineGetCurrentPass"); return 0; case audioMasterOfflineGetCurrentMetaPass: DEBUG_CALLBACK("audioMasterOfflineGetCurrentMetaPass"); return 0; #if 0 case audioMasterSetOutputSampleRate: DEBUG_CALLBACK ("audioMasterSetOutputSampleRate"); // for variable i/o, sample rate in return 0; case audioMasterGetSpeakerArrangement: DEBUG_CALLBACK ("audioMasterGetSpeakerArrangement"); // (long)input in , output in return 0; #endif case audioMasterGetVendorString: DEBUG_CALLBACK("audioMasterGetVendorString"); // fills with a string identifying the vendor (max 64 char) strcpy((char *)ptr, "ZyTrax"); return 1; case audioMasterGetProductString: DEBUG_CALLBACK("audioMasterGetProductString"); // fills with a string with product name (max 64 char) strcpy((char *)ptr, "ZyTrax"); return 1; case audioMasterGetVendorVersion: DEBUG_CALLBACK("audioMasterGetVendorVersion"); // returns vendor-specific version return 900; case audioMasterVendorSpecific: DEBUG_CALLBACK("audioMasterVendorSpecific"); // no definition, vendor specific handling return 0; #if 0 case audioMasterSetIcon: DEBUG_CALLBACK ("audioMasterSetIcon"); // void* in , format not defined yet return 0; #endif case audioMasterCanDo: DEBUG_CALLBACK("audioMasterCanDo"); if (strcmp((char *)ptr, "supplyIdle") == 0 || strcmp((char *)ptr, "sendVstTimeInfo") == 0 || strcmp((char *)ptr, "sendVstEvents") == 0 || strcmp((char *)ptr, "sendVstMidiEvent") == 0 || 
strcmp((char *)ptr, "sizeWindow") == 0) {
			return 1;
		} else {
			return 0;
		}
	case audioMasterGetLanguage:
		DEBUG_CALLBACK("audioMasterGetLanguage");
		// see enum
		return kVstLangEnglish;
#if 0
	case audioMasterOpenWindow:
		DEBUG_CALLBACK ("audioMasterOpenWindow");
		// returns platform specific ptr
		return 0;
	case audioMasterCloseWindow:
		DEBUG_CALLBACK ("audioMasterCloseWindow");
		// close window, platform specific handle in
		return 0;
#endif
	case audioMasterGetDirectory:
		DEBUG_CALLBACK("audioMasterGetDirectory");
		// get plug directory, FSSpec on MAC, else char*
		return 0;
	case audioMasterUpdateDisplay:
		DEBUG_CALLBACK("audioMasterUpdateDisplay");
		// something has changed, update 'multi-fx' display
		if (vst_effect) {
			//
		}
		return 0;
	case audioMasterBeginEdit:
		DEBUG_CALLBACK("audioMasterBeginEdit");
		if (vst_effect && index >= 0 && index < vst_effect->control_ports.size()) {
			vst_effect->control_ports[index].editing = true;
		}
		// begin of automation session (when mouse down), parameter index in
		return 0;
	case audioMasterEndEdit:
		DEBUG_CALLBACK("audioMasterEndEdit");
		// end of automation session (when mouse up), parameter index in
		if (vst_effect && index >= 0 && index < vst_effect->control_ports.size()) {
			vst_effect->control_ports[index].editing = false;
			vst_effect->control_ports[index].ui_changed_notify();
		}
		return 0;
	case audioMasterOpenFileSelector:
		DEBUG_CALLBACK("audioMasterOpenFileSelector");
		// open a fileselector window with VstFileSelect* in
		return 0;
	default:
		printf("Unhandled in dispatcher: %i\n", opcode);
		break;
	}

	return 0;
}

/* Load/Save */

JSON::Node AudioEffectVST2::_internal_to_json() const {

	JSON::Node node = JSON::object();
	node.add("type", "vst2");
	//save the program first, if programs exist
	if (effect->numPrograms > 0) {
		int program_number = effect->dispatcher(effect, effGetProgram, 0, 0, NULL, 0.0f);
		node.add("program", program_number);
	}
	//check whether to use chunks or params
	if (effect->flags & effFlagsProgramChunks) {
		unsigned char *data;
		int data_size = effect->dispatcher(effect,
				effGetChunk, 1, 0, &data, 0);
		if (data_size) {
			Vector<uint8_t> datav; //byte buffer (element type assumed; lost in extraction)
			datav.resize(data_size);
			memcpy(&datav[0], data, data_size);
			node.add("chunk", base64_encode(datav));
		}
	} else {
		//I guess VST warrants that these keep their indices
		JSON::Node states = JSON::array();
		for (int i = 0; i < control_ports.size(); i++) {
			states.append(control_ports[i].get());
		}
		node.add("param_states", states);
	}

	return node;
}

Error AudioEffectVST2::_internal_from_json(const JSON::Node &node) {

	ERR_FAIL_COND_V(!node.has("type"), ERR_FILE_CORRUPT);
	ERR_FAIL_COND_V(node.get("type").toString() != "vst2", ERR_FILE_CORRUPT);

	if (node.has("program")) {
		int program = node.get("program").toInt();
		effect->dispatcher(effect, effSetProgram, 0, program, NULL, 0.0f);
	}

	if (node.has("chunk")) {
		std::string chunk_str = node.get("chunk").toString();
		Vector<uint8_t> data = base64_decode(chunk_str);
		effect->dispatcher(effect, effSetChunk, 1, data.size(), &data[0], 0);
	} else if (node.has("param_states")) {
		JSON::Node states = node.get("param_states");
		for (int i = 0; i < states.getCount(); i++) {
			if (i >= control_ports.size()) {
				break;
			}
			float value = states.get(i).toFloat();
			control_ports[i].set(value);
		}
	}

	return OK;
}

String AudioEffectVST2::get_path() const {
	return path;
}

float AudioEffectVST2::ControlPortVST2::get() const {
	return effect->getParameter(effect, index);
}

void AudioEffectVST2::ControlPortVST2::set(float p_val) {
	if (editing) {
		//being edited
		return;
	}
	setting = true; //lock used to avoid rogue effects emitting automated when being set.
effect->setParameter(effect, index, p_val);
	setting = false;
}

String AudioEffectVST2::ControlPortVST2::get_value_as_text() const {
	//use a local name that does not shadow the member "label" (the unit suffix)
	char display[kVstMaxLabelLen + 1];
	effect->dispatcher(effect, effGetParamDisplay, index, 0, display, 0);
	display[kVstMaxLabelLen] = 0;
	String s;
	s.parse_utf8(display);
	return s + label; //value text followed by the unit label
}

Error AudioEffectVST2::open(const String &p_path, const String &p_unique_id, const String &p_name, const String &p_provider_id) {

	name = p_name;
	unique_id = p_unique_id;
	path = p_path;
	provider_id = p_provider_id;

#ifdef WINDOWS_ENABLED
	libhandle = LoadLibraryW(p_path.c_str());
#else
	libhandle = dlopen(p_path.utf8().get_data(), RTLD_LOCAL | RTLD_LAZY);
#endif
	effect = AudioEffectProviderVST2::open_vst_from_lib_handle(libhandle, host);
	if (!effect) {
		return ERR_CANT_OPEN;
	}

	effect->resvd1 = this;
	effect->dispatcher(effect, effOpen, 0, 0, NULL, 0.0f);
	vst_version = effect->dispatcher(effect, effGetVstVersion, 0, 0, NULL, 0.0f);
	effect->dispatcher(effect, effMainsChanged, 0, 0, NULL, 0.0f);

	control_ports.resize(effect->numParams);
	for (int i = 0; i < effect->numParams; i++) {
		ControlPortVST2 *cp = &control_ports[i];
		cp->visible = true; //bleh just allow all - i < 100; //the first 50 are visible, the rest are not
		cp->index = i;
		cp->effect = effect;
		cp->setting = false;
		cp->editing = false;
		char label[kVstMaxLabelLen + 1];
		effect->dispatcher(effect, effGetParamLabel, i, 0, label, 0); //just crap
		label[kVstMaxLabelLen] = 0;
		cp->label.parse_utf8(label);
		cp->label = " " + cp->label;
		effect->dispatcher(effect, effGetParamName, i, 0, label, 0); //just crap
		label[kVstMaxLabelLen] = 0;
		cp->name.parse_utf8(label);
		cp->identifier = cp->name.to_lower();
		cp->identifier.replace(" ", "_");
		cp->identifier = "vst_param_" + cp->identifier;
		float value = effect->getParameter(effect, i);
		cp->value = value;
	}

	effect->dispatcher(effect, effSetSampleRate, 0, 0, NULL, sampling_rate);
	effect->dispatcher(effect, effSetBlockSize, 0, process_block_size, NULL, 0.0f);
	_update_buffers();
effect->dispatcher(effect, effSetProcessPrecision, 0, 0, NULL, 0.0f);
	effect->dispatcher(effect, effMainsChanged, 0, 1, NULL, 0.0f);
	effect->dispatcher(effect, effStartProcess, 0, 0, NULL, 0.0f);

	return OK;
}

void AudioEffectVST2::set_resize_callback(ResizeCallback p_callback, void *p_userdata) {
	resize_callback = p_callback;
	resize_userdata = p_userdata;
}

AudioEffectVST2::AudioEffectVST2() {

	libhandle = NULL;
	effect = NULL;
	resize_callback = NULL;
	resize_userdata = NULL;
	process_block_size = 128;
	sampling_rate = 44100;
	has_side_input = false;
	stop_all_notes = false;

	event_pointer_data = new unsigned char[sizeof(int32_t) + sizeof(intptr_t) + sizeof(VstEvent *) * MAX_INPUT_EVENTS];
	event_pointers = (VstEvents *)event_pointer_data;
	last_midi_channel = 0;

	event_pointers->numEvents = 0;
	event_pointers->reserved = 0;
	for (int i = 0; i < MAX_INPUT_EVENTS; i++) {
		event_array[i].type = kVstMidiType;
		event_array[i].byteSize = 24;
		event_array[i].deltaFrames = 0;
		event_array[i].flags = 0; ///< @see VstMidiEventFlags
		event_array[i].noteLength = 0; ///< (in sample frames) of entire note, if available
		event_array[i].noteOffset = 0; ///< offset (in sample frames) into note from note start, if available
		event_array[i].midiData[0] = 0;
		event_array[i].midiData[1] = 0;
		event_array[i].midiData[2] = 0;
		event_array[i].midiData[3] = 0;
		event_array[i].detune = 0;
		event_array[i].noteOffVelocity = 0;
		event_array[i].reserved1 = 0;
		event_pointers->events[i] = (VstEvent *)&event_array[i];
	}
}

AudioEffectVST2::~AudioEffectVST2() {
	_clear_buffers();
	delete[] event_pointer_data;
	if (effect) {
		effect->dispatcher(effect, effClose, 0, 0, NULL, 0.0f);
	}
	if (libhandle) {
#ifdef WINDOWS_ENABLED
		FreeLibrary(libhandle);
#else
		dlclose(libhandle);
#endif
	}
}

/* ======== zytrax-master/drivers/vst2/audio_effect_vst2.h ======== */

#ifndef AUDIO_EFFECT_VST2_H
#define AUDIO_EFFECT_VST2_H

#include "engine/audio_effect_midi.h"
#include "globals/map.h"
#include "vestige.h"
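The constructor above sizes the `VstEvents` block by summing the header fields (`sizeof(int32_t) + sizeof(intptr_t)`), which ignores struct padding before the pointer array. A self-contained sketch of an `offsetof`-based allocation — using local mock structs whose layout mirrors vestige.h, not the real declarations:

```cpp
#include <cstddef>
#include <cstdint>

// Local stand-ins for the VST2 event structures (illustrative only).
struct VstEvent {
	int32_t type;
};
struct VstEvents {
	int32_t numEvents;
	intptr_t reserved;
	VstEvent *events[1]; // variable-length in practice
};

// Allocate a VstEvents block with room for max_events pointers.
// offsetof() accounts for the padding inserted before `events`, which the
// sum-of-field-sizes computation above does not.
inline VstEvents *alloc_vst_events(int max_events) {
	size_t size = offsetof(VstEvents, events) + sizeof(VstEvent *) * (size_t)max_events;
	unsigned char *data = new unsigned char[size];
	VstEvents *ev = (VstEvents *)data;
	ev->numEvents = 0;
	ev->reserved = 0;
	return ev;
}
```

On a typical LP64 target `offsetof(VstEvents, events)` is 16 while `sizeof(int32_t) + sizeof(intptr_t)` is 12, so the sum-of-fields size can under-allocate the pointer array by one padding word.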
#ifdef WINDOWS_ENABLED
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#else
#include <dlfcn.h>
#endif

class AudioEffectVST2 : public AudioEffectMIDI {
public:
	typedef void (*ResizeCallback)(void *, int, int);

private:
	enum {
		MAX_INPUT_EVENTS = 8192
	};

	VstMidiEvent event_array[MAX_INPUT_EVENTS];
	unsigned char *event_pointer_data;
	VstEvents *event_pointers;

	Vector<float *> in_buffers;
	Vector<float *> out_buffers;
	bool has_side_input;

	String path;
	int vst_version;
	String unique_id;
	String name;
	String provider_id;
	AEffect *effect;
#ifdef WINDOWS_ENABLED
	HINSTANCE libhandle;
#else
	void *libhandle;
#endif
	static intptr_t VESTIGECALLBACK host(AEffect *effect, int32_t opcode, int32_t index, intptr_t value, void *ptr, float opt);

	ResizeCallback resize_callback;
	void *resize_userdata;

	struct ControlPortVST2 : public ControlPort {
		AEffect *effect;
		int index;
		String name;
		String identifier;
		String label;
		float value;
		bool visible;
		bool setting;
		bool editing;
		virtual Hint get_hint() const { return HINT_RANGE_NORMALIZED; }
		virtual String get_name() const { return name; }
		virtual String get_identifier() const { return identifier; }
		virtual float get_min() const { return 0; }
		virtual float get_max() const { return 1; }
		virtual float get_step() const { return 0.0001; }
		virtual bool is_visible() const { return visible; }
		virtual float get() const;
		virtual void set(float p_val);
		virtual String get_value_as_text() const;
	};

	Vector<ControlPortVST2> control_ports;

	int process_block_size;
	int sampling_rate;

	void _update_buffers();
	void _process(const Event *p_events, int p_event_count);
	bool stop_all_notes;
	uint8_t last_midi_channel;
	void _clear_buffers();

public:
	virtual int _get_internal_control_port_count() const;
	virtual ControlPort *_get_internal_control_port(int p_index);

	virtual bool has_secondary_input() const;
	virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active);
	virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame
*p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active);

	virtual void set_process_block_size(int p_size);
	virtual void set_sampling_rate(int p_hz);

	virtual void reset();

	bool has_user_interface() const;
	void get_user_interface_size(int &r_width, int &r_height);
	void resize_user_interface(int p_width, int p_height);
#ifdef WINDOWS_ENABLED
	void open_user_interface(void *p_window_ptr);
#else
	void open_user_interface(long p_longint, void *p_window_ptr);
#endif
	void process_user_interface();
	void close_user_interface();

	/* info */
	virtual String get_name() const;
	virtual String get_unique_id() const;
	virtual String get_provider_id() const;

	/* Load/Save */
	virtual JSON::Node _internal_to_json() const;
	virtual Error _internal_from_json(const JSON::Node &node);

	void set_resize_callback(ResizeCallback p_callback, void *p_userdata);

	String get_path() const;

	Error open(const String &p_path, const String &p_unique_id, const String &p_name, const String &p_provider_id);
	AudioEffectVST2();
	~AudioEffectVST2();
};

#endif // AUDIO_EFFECT_VST2_H

/* ======== zytrax-master/drivers/vst2/effect_editor_vst2.cpp ======== */

#include "effect_editor_vst2.h" //keep this for order
#include "audio_effect_provider_vst2.h"
#include "effect_editor_x11.h"

#ifdef WINDOWS_ENABLED

void EffectPlaceholderVST2Win32::_vst_resize(void *self, int w, int h) {
	EffectPlaceholderVST2Win32 *ph = (EffectPlaceholderVST2Win32 *)self;
	return;
}

void EffectPlaceholderVST2Win32::resize_editor(int left, int top, int right, int bottom) {
	if (vst_window) {
		RECT rc;
		rc.left = left;
		rc.right = right;
		rc.bottom = bottom;
		rc.top = top;

		const auto style = GetWindowLongPtr(vst_window, GWL_STYLE);
		const auto exStyle = GetWindowLongPtr(vst_window, GWL_EXSTYLE);
		const BOOL fMenu = GetMenu(vst_window) != nullptr;

		AdjustWindowRectEx(&rc, style, fMenu, exStyle);
		MoveWindow(vst_window, rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top, TRUE);
	}
}

void
EffectPlaceholderVST2Win32::on_size_allocate(Gtk::Allocation &allocation) { // Do something with the space that we have actually been given: //(We will not be given heights or widths less than we have requested, though // we might get more) // Use the offered allocation for this container: set_allocation(allocation); if (m_refGdkWindow) { m_refGdkWindow->move_resize(allocation.get_x(), allocation.get_y(), allocation.get_width(), allocation.get_height()); } } bool EffectPlaceholderVST2Win32::_update_window_position() { bool visible = is_visible(); if (visible) { //make sure it really is.. GtkWidget *p = gobj(); GtkWidget *w = gtk_widget_get_parent(p); while (w) { if (GTK_IS_NOTEBOOK(w)) { GtkNotebook *notebook = GTK_NOTEBOOK(w); int cpage = gtk_notebook_get_current_page(notebook); if (p != gtk_notebook_get_nth_page(notebook, cpage)) { visible = false; break; } } p = w; w = gtk_widget_get_parent(p); } } GtkWidget *toplevel = gtk_widget_get_toplevel(gobj()); ERR_FAIL_COND_V(!GTK_IS_WINDOW(toplevel), false); int root_x, root_y; //gtk_window_get_position(GTK_WINDOW(toplevel), &root_x, &root_y); HWND hwnd = gdk_win32_window_get_impl_hwnd(gtk_widget_get_window(gobj())); /* RECT r; GetWindowRect(hwnd, &r); root_x = r.left; root_y = r.top; */ POINT p; p.x = 0; p.y = 0; ClientToScreen(hwnd, &p); root_x = p.x; root_y = p.y; //method below is not compatible with multiple monitors { int widget_x, widget_y; gdk_window_get_origin(gtk_widget_get_window(gobj()), &widget_x, &widget_y); int toplevel_x, toplevel_y; gdk_window_get_origin(gtk_widget_get_window(toplevel), &toplevel_x, &toplevel_y); root_x += widget_x - toplevel_x; root_y += widget_y - toplevel_y; } //gdk_window_get_origin(gtk_widget_get_window(gobj()), &root_x, &root_y); int tlx = 0, tly = 0; /*gtk_widget_translate_coordinates(gobj(), gtk_widget_get_toplevel(gobj()), 0, 0, &tlx, &tly); root_x += tlx; root_y += tlx;*/ if (root_x != prev_x || root_y != prev_y || prev_w != vst_w || prev_h != vst_h) { resize_editor(root_x, 
root_y, root_x + vst_w, root_y + vst_h); prev_x = root_x; prev_y = root_y; prev_w = vst_w; prev_h = vst_h; } if (prev_visible != visible) { ShowWindow(vst_window, visible ? SW_SHOW : SW_HIDE); prev_visible = visible; } if (visible) { vst_effect->process_user_interface(); } return true; } void EffectPlaceholderVST2Win32::on_realize() { // Do not call base class Gtk::Widget::on_realize(). // It's intended only for widgets that set_has_window(false). set_realized(); if (!m_refGdkWindow) { // Create the GdkWindow: GdkWindowAttr attributes; memset(&attributes, 0, sizeof(attributes)); Gtk::Allocation allocation = get_allocation(); // Set initial position and size of the Gdk::Window: attributes.x = allocation.get_x(); attributes.y = allocation.get_y(); attributes.width = allocation.get_width(); attributes.height = allocation.get_height(); attributes.event_mask = get_events() | /* Gdk::EXPOSURE_MASK |*/ Gdk::BUTTON_PRESS_MASK | Gdk::BUTTON_RELEASE_MASK | Gdk::BUTTON1_MOTION_MASK | Gdk::KEY_PRESS_MASK | Gdk::KEY_RELEASE_MASK; attributes.window_type = GDK_WINDOW_CHILD; attributes.wclass = GDK_INPUT_OUTPUT; m_refGdkWindow = Gdk::Window::create(get_parent_window(), &attributes, GDK_WA_X | GDK_WA_Y); set_window(m_refGdkWindow); // make the widget receive expose events m_refGdkWindow->set_user_data(gobj()); /* Set the Window Parentship and update timer */ } if (m_refGdkWindow) { //hwnd for gtk window HWND hwnd = gdk_win32_window_get_impl_hwnd(m_refGdkWindow->gobj()); //set as parent. This works, while SetParent DOES NOT. SetWindowLongPtr(vst_window, GWLP_HWNDPARENT, (LONG_PTR)hwnd); /*SetParent((HWND)vst_window, (HWND)hwnd);*/ //turn on update timer to reposition the Window //sorry, this is the only way I found.. 
update_timer = Glib::signal_timeout().connect(sigc::mem_fun(*this, &EffectPlaceholderVST2Win32::_update_window_position), 50, Glib::PRIORITY_DEFAULT); //Show the Window ShowWindow(vst_window, SW_SHOW); prev_visible = true; } } void EffectPlaceholderVST2Win32::on_unrealize() { //clear the window if (m_refGdkWindow) { //clear parenthood SetWindowLongPtr(vst_window, GWLP_HWNDPARENT, (LONG_PTR)NULL); //disconnect timer update_timer.disconnect(); //Hide the Window ShowWindow(vst_window, SW_HIDE); prev_visible = false; } m_refGdkWindow.reset(); // Call base class: Gtk::Widget::on_unrealize(); } bool EffectPlaceholderVST2Win32::on_visibility_notify_event(GdkEventVisibility *visibility_event) { /* if (m_refGdkWindow) { if (visibility_event->state == GDK_VISIBILITY_FULLY_OBSCURED) { ShowWindow(vst_window, SW_HIDE); } else { ShowWindow(vst_window, SW_SHOW); } } */ return false; } bool EffectPlaceholderVST2Win32::on_draw(const Cairo::RefPtr &cr) { const Gtk::Allocation allocation = get_allocation(); return false; Gdk::RGBA rgba; rgba.set_red(0); rgba.set_green(0); rgba.set_blue(0); rgba.set_alpha(1); Gdk::Cairo::set_source_rgba(cr, rgba); cr->rectangle(0, 0, allocation.get_width(), allocation.get_height()); cr->fill(); } EffectPlaceholderVST2Win32::EffectPlaceholderVST2Win32(AudioEffectVST2 *p_vst_effect) : // The GType name will actually be gtkmm__CustomObject_mywidget Glib::ObjectBase("filler"), Gtk::Widget() { vst_effect = p_vst_effect; vst_w = 1; vst_h = 1; vst_window = NULL; //create the window, but don't use it. const auto style = WS_POPUP; vst_window = CreateWindowExW(0, L"VST_HOST", vst_effect->get_path().c_str(), style, 0, 0, 0, 0, NULL, 0, 0, 0); //open the user interface (it won't be visible though. 
vst_effect->open_user_interface(vst_window); //allocate size for this VST here vst_effect->get_user_interface_size(vst_w, vst_h); set_size_request(vst_w, vst_h); prev_x = prev_y = prev_w = prev_h = -1; prev_visible = false; } EffectPlaceholderVST2Win32::~EffectPlaceholderVST2Win32() { vst_effect->close_user_interface(); DestroyWindow(vst_window); } #endif void initialize_vst2_editor() { #ifdef WINDOWS_ENABLED HMODULE hInst = GetModuleHandleA(NULL); ERR_FAIL_COND(!hInst); WNDCLASSEXW wcex{ sizeof(wcex) }; wcex.style = CS_HREDRAW | CS_VREDRAW; wcex.cbClsExtra = 0; wcex.cbWndExtra = 0; wcex.hCursor = LoadCursor(NULL, IDC_ARROW); wcex.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH); wcex.lpfnWndProc = DefWindowProc; wcex.hInstance = GetModuleHandle(0); wcex.lpszClassName = L"VST_HOST"; if (!RegisterClassExW(&wcex)) { ERR_PRINT("Error in initialize_vst2_editor(): (class registration failed"); return; } #endif #ifdef FREEDESKTOP_ENABLED vstfx_init(); #endif } void finalize_vst2_editor() { #ifdef FREEDESKTOP_ENABLED vstfx_exit(); #endif } bool EffectEditorVST2::initialize() { #ifdef FREEDESKTOP_ENABLED socket.add_id(xid); int w, h; vstfx_get_window_size(vst_effect, &w, &h); socket.set_size_request(w, h); #endif return false; } EffectEditorVST2::EffectEditorVST2(AudioEffectVST2 *p_vst, EffectEditor *p_editor) : effect_editor_midi(p_vst, p_editor) #ifdef WINDOWS_ENABLED , vst_placeholder(p_vst) #endif { vst_effect = p_vst; #ifdef WINDOWS_ENABLED effect_editor_midi.prepend_page(vst_placeholder, "VST2 Plugin"); #endif pack_start(effect_editor_midi, Gtk::PACK_EXPAND_WIDGET); #ifdef FREEDESKTOP_ENABLED effect_editor_midi.prepend_page(socket, "VST2 Plugin"); xid = vstfx_run_editor(p_vst, this); #endif //need window to be mapped, so wait init_timer = Glib::signal_timeout().connect(sigc::mem_fun(*this, &EffectEditorVST2::initialize), 50, Glib::PRIORITY_DEFAULT); show_all_children(); } EffectEditorVST2::~EffectEditorVST2() { #ifdef FREEDESKTOP_ENABLED 
vstfx_destroy_editor(vst_effect);
#endif
}

/* ======== zytrax-master/drivers/vst2/effect_editor_vst2.h ======== */

#ifndef EFFECT_EDITOR_VST2_H
#define EFFECT_EDITOR_VST2_H

#include "gui/effect_editor_midi.h"

#ifdef WINDOWS_ENABLED
#include <windows.h>
#else
#endif

#ifdef FREEDESKTOP_ENABLED
#include <gtkmm/socket.h>
#endif

class AudioEffectVST2;

#ifdef WINDOWS_ENABLED

class EffectPlaceholderVST2Win32 : public Gtk::Widget {

	AudioEffectVST2 *vst_effect;
	Glib::RefPtr<Gdk::Window> m_refGdkWindow;

	int vst_w;
	int vst_h;
	HWND vst_window;
	sigc::connection update_timer;

	static void _vst_resize(void *self, int w, int h);
	bool _update_window_position();
	void resize_editor(int left, int top, int right, int bottom);
	int prev_x, prev_y, prev_w, prev_h;
	bool prev_visible;

public:
	void on_size_allocate(Gtk::Allocation &allocation) override;
	void on_realize() override;
	void on_unrealize() override;
	bool on_draw(const Cairo::RefPtr<Cairo::Context> &cr) override;
	bool on_visibility_notify_event(GdkEventVisibility *visibility_event) override;

	EffectPlaceholderVST2Win32(AudioEffectVST2 *p_vst_effect);
	~EffectPlaceholderVST2Win32();
};

#endif

class EffectEditorVST2 : public Gtk::VBox {

	AudioEffectVST2 *vst_effect;
	EffectEditorMIDI effect_editor_midi;
#ifdef WINDOWS_ENABLED
	EffectPlaceholderVST2Win32 vst_placeholder;
#endif
#ifdef FREEDESKTOP_ENABLED
	int xid;
	Gtk::Socket socket;
#endif
	sigc::connection init_timer;
	bool initialize();

public:
	EffectEditorVST2(AudioEffectVST2 *p_vst, EffectEditor *p_editor);
	~EffectEditorVST2();
};

void initialize_vst2_editor();
void finalize_vst2_editor();

#endif // EFFECT_EDITOR_VST2_H

/* ======== zytrax-master/drivers/vst2/effect_editor_x11.cpp ======== */

#include "effect_editor_x11.h"
#include "audio_effect_vst2.h"
#include "effect_editor_vst2.h"
#include "list.h"

//This code was adapted from Ardour
/*
	Copyright (C) 2012 Paul Davis
	Based on code by Paul Davis, Torben Hohn as part of FST

	This program
is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ #ifdef FREEDESKTOP_ENABLED #include #include #include #include #include #include #include #include #include static pthread_mutex_t plugin_mutex; struct VSTState { pthread_mutex_t lock; pthread_cond_t window_status_change; VSTState *next; Window linux_window; Window linux_plugin_ui_window; AudioEffectVST2 *effect_vst; EffectEditorVST2 *editor_vst; int width, height; void (*eventProc)(void *event); ///< X11 UI _XEventProc int destroy; int been_activated; int xid; }; VSTState *vstfx_first = NULL; const char magic[] = "VSTFX Plugin State v002"; static volatile int gui_quit = 0; /*This will be our connection to X*/ static Display *LXVST_XDisplay = NULL; /*The thread handle for the GUI event loop*/ static pthread_t LXVST_gui_event_thread; /*Util functions to get the value of a property attached to an XWindow*/ static bool LXVST_xerror; int TempErrorHandler(Display *display, XErrorEvent *e) { LXVST_xerror = true; return 0; } #if __x86_64__ /********************************************************************/ /* This is untested - have no 64Bit plugins which use this */ /* system of passing an eventProc address */ /********************************************************************/ long getXWindowProperty(Window window, Atom atom) { long result = 0; int userSize; unsigned long bytes; unsigned long userCount; unsigned char *data; Atom 
userType; LXVST_xerror = false; /*Use our own Xerror handler while we're in here - in an * attempt to stop the brain dead default Xerror behaviour of * qutting the entire application because of e.g. an invalid * window ID */ XErrorHandler olderrorhandler = XSetErrorHandler(TempErrorHandler); XGetWindowProperty(LXVST_XDisplay, window, atom, 0, 2, false, AnyPropertyType, &userType, &userSize, &userCount, &bytes, &data); if (LXVST_xerror == false && userCount == 1) { result = *(long *)data; } XSetErrorHandler(olderrorhandler); /*Hopefully this will return zero if the property is not set*/ return result; } #else int getXWindowProperty(Window window, Atom atom) { int result = 0; int userSize; unsigned long bytes; unsigned long userCount; unsigned char *data; Atom userType; LXVST_xerror = false; /*Use our own Xerror handler while we're in here - in an attempt to stop the brain dead default Xerror behaviour of qutting the entire application because of e.g. an invalid window ID*/ XErrorHandler olderrorhandler = XSetErrorHandler(TempErrorHandler); XGetWindowProperty(LXVST_XDisplay, // The display window, // The Window atom, // The property 0, // Offset into the data 1, // Number of 32Bit chunks of data false, // false = don't delete the property AnyPropertyType, // Required property type mask &userType, // Actual type returned &userSize, // Actual format returned &userCount, // Actual number of items stored in the returned data &bytes, // Number of bytes remaining if a partial read &data); // The actual data read if (LXVST_xerror == false && userCount == 1) { result = *(int *)data; } XSetErrorHandler(olderrorhandler); /*Hopefully this will return zero if the property is not set*/ return result; } #endif int vstfx_launch_editor(VSTState *vstfx); /*The event handler - called from within the main GUI thread to dispatch events to any VST UIs which have callbacks stuck to them*/ static void dispatch_x_events(XEvent *event, VSTState *vstfx) { /*Handle some of the Events we might 
be interested in*/ switch (event->type) { /*Configure event - when the window is resized or first drawn*/ case ConfigureNotify: { Window window = event->xconfigure.event; int width = event->xconfigure.width; int height = event->xconfigure.height; /* If we get a config notify on the parent window XID then we need to see * if the size has been changed - some plugins re-size their UI window e.g. * when opening a preset manager. * * if the size has changed, we flag this so that in lxvst_pluginui.cc * we can make the change to the GTK parent window in ardour, from its UI thread */ if (window == (Window)(vstfx->linux_window)) { #ifndef NDEBUG printf("dispatch_x_events: ConfigureNotify cfg: (%d %d) plugin: (%d %d)\n", width, height, vstfx->width, vstfx->height); #endif if (width != vstfx->width || height != vstfx->height) { vstfx->width = width; vstfx->height = height; //vstfx->window_size_changed_or_something(); } /* QUIRK : Loomer plugins not only resize the UI but throw it into some random * position at the same time. We need to re-position the window at the origin of * the parent window*/ if (vstfx->linux_plugin_ui_window) { XMoveWindow(LXVST_XDisplay, vstfx->linux_plugin_ui_window, 0, 0); } } break; } /*Reparent Notify - when the plugin UI is reparented into our Host Window we will get an event here... probably... 
*/ case ReparentNotify: { Window ParentWindow = event->xreparent.parent; /*If the ParentWindow matches the window for the vstfx instance then the Child window must be the XID of the pluginUI window created by the plugin, so we need to see if it has a callback stuck to it, and if so set that up in the vstfx */ /***********************************************************/ /* 64Bit --- This mechanism is not 64Bit compatible at the */ /* present time */ /***********************************************************/ if (ParentWindow == (Window)(vstfx->linux_window)) { Window PluginUIWindowID = event->xreparent.window; vstfx->linux_plugin_ui_window = PluginUIWindowID; #ifdef __x86_64__ long result = getXWindowProperty(PluginUIWindowID, XInternAtom(LXVST_XDisplay, "_XEventProc", false)); if (result == 0) { vstfx->eventProc = NULL; } else { vstfx->eventProc = (void (*)(void *event))result; } #else int result = getXWindowProperty(PluginUIWindowID, XInternAtom(LXVST_XDisplay, "_XEventProc", false)); if (result == 0) { vstfx->eventProc = NULL; } else { vstfx->eventProc = (void (*)(void *event))result; } #endif } break; } case ClientMessage: { Window window = event->xany.window; Atom message_type = event->xclient.message_type; /*The only client message we are interested in is to signal that the plugin parent window is now valid and can be passed to effEditOpen when the editor is launched*/ if (window == (Window)(vstfx->linux_window)) { char *message = XGetAtomName(LXVST_XDisplay, message_type); if (strcmp(message, "LaunchEditor") == 0) { if (event->xclient.data.l[0] == 0x0FEEDBAC) { vstfx_launch_editor(vstfx); } } XFree(message); } break; } default: break; } /* Some VSTs built with toolkits e.g. JUCE will manager their own UI autonomously in the plugin, running the UI in its own thread, so once we have created a parent window for the plugin, its UI takes care of itself.*/ /*Other types register a callback as an Xwindow property on the plugin UI window after they create it. 
If that is the case, we need to call it here, passing the XEvent into it*/ if (vstfx->eventProc == NULL) { return; } vstfx->eventProc((void *)event); } void vstfx_event_loop_remove_plugin(VSTState *vstfx); int vstfx_create_editor(VSTState *vstfx); /** This is the main gui event loop for the plugin, we also need to pass any Xevents to all the UI callbacks plugins 'may' have registered on their windows, that is if they don't manage their own UIs **/ void *gui_event_loop(void *ptr) { VSTState *vstfx; int LXVST_sched_timer_interval = 40; //ms, 25fps XEvent event; uint64_t clock1, clock2; clock1 = g_get_monotonic_time(); /*The 'Forever' loop - runs the plugin UIs etc - based on the FST gui event loop*/ while (!gui_quit) { /* handle window creation requests, destroy requests, and run idle callbacks */ /*Look at the XEvent queue - if there are any XEvents we need to handle them, including passing them to all the plugin (eventProcs) we are currently managing*/ bool may_sleep = true; if (LXVST_XDisplay) { /*See if there are any events in the queue*/ int num_events = XPending(LXVST_XDisplay); if (num_events > 0) { // keep dispatching events as fast as possible may_sleep = false; } /*process them if there are any*/ while (num_events) { XNextEvent(LXVST_XDisplay, &event); /*Call dispatch events, with the event, for each plugin in the linked list*/ for (vstfx = vstfx_first; vstfx; vstfx = vstfx->next) { pthread_mutex_lock(&vstfx->lock); dispatch_x_events(&event, vstfx); pthread_mutex_unlock(&vstfx->lock); } num_events--; } } /*We don't want to use all the CPU.. 
*/ Glib::usleep(1000); /*See if its time for us to do a scheduled event pass on all the plugins*/ clock2 = g_get_monotonic_time(); const int64_t elapsed_time_ms = (clock2 - clock1) / 1000; if ((LXVST_sched_timer_interval != 0) && elapsed_time_ms >= LXVST_sched_timer_interval) { //printf ("elapsed %d ms ^= %.2f Hz\n", elapsed_time_ms, 1000.0/ (double)elapsed_time_ms); // DEBUG pthread_mutex_lock(&plugin_mutex); again: /*Parse through the linked list of plugins*/ for (vstfx = vstfx_first; vstfx; vstfx = vstfx->next) { pthread_mutex_lock(&vstfx->lock); /*Window scheduled for destruction*/ if (vstfx->destroy) { if (vstfx->linux_window) { vstfx->effect_vst->close_user_interface(); XDestroyWindow(LXVST_XDisplay, vstfx->linux_window); /* FIXME - probably safe to assume we never have an XID of 0 but not explicitly true */ vstfx->linux_window = 0; vstfx->destroy = FALSE; } vstfx_event_loop_remove_plugin(vstfx); vstfx->been_activated = FALSE; pthread_cond_signal(&vstfx->window_status_change); pthread_mutex_unlock(&vstfx->lock); goto again; } /*Window does not yet exist - scheduled for creation*/ /* FIXME - probably safe to assume 0 is not a valid XID but not explicitly true */ if (vstfx->linux_window == 0) { if (vstfx_create_editor(vstfx)) { printf("** ERROR ** VSTFX : Cannot create editor for plugin %s\n", vstfx->effect_vst->get_name().utf8().get_data()); vstfx_event_loop_remove_plugin(vstfx); pthread_cond_signal(&vstfx->window_status_change); pthread_mutex_unlock(&vstfx->lock); goto again; } else { /* condition/unlock: it was signalled & unlocked in fst_create_editor() */ } } #if 0 vststate_maybe_set_program(vstfx); vstfx->want_program = -1; vstfx->want_chunk = 0; /*scheduled call to dispatcher*/ if (vstfx->dispatcher_wantcall) { vstfx->dispatcher_retval = vstfx->plugin->dispatcher( vstfx->plugin, vstfx->dispatcher_opcode, vstfx->dispatcher_index, vstfx->dispatcher_val, vstfx->dispatcher_ptr, vstfx->dispatcher_opt); vstfx->dispatcher_wantcall = 0; 
pthread_cond_signal(&vstfx->plugin_dispatcher_called); } #endif /*Call the editor Idle function in the plugin*/ vstfx->effect_vst->process_user_interface(); //->dispatcher(vstfx->plugin, effEditIdle, 0, 0, NULL, 0); /*if (vstfx->wantIdle) { vstfx->plugin->dispatcher(vstfx->plugin, 53, 0, 0, NULL, 0); }*/ pthread_mutex_unlock(&vstfx->lock); } pthread_mutex_unlock(&plugin_mutex); clock1 = g_get_monotonic_time(); } if (!gui_quit && may_sleep && elapsed_time_ms + 1 < LXVST_sched_timer_interval) { Glib::usleep(1000 * (LXVST_sched_timer_interval - elapsed_time_ms - 1)); } } if (LXVST_XDisplay) { XCloseDisplay(LXVST_XDisplay); LXVST_XDisplay = 0; } /* some plugin UIs (looking at you, u-he^abique), do set thread-keys * and free, but not unset them. * * This leads to a double-free in __nptl_deallocate_tsd * nptl/pthread_create.c:175 __pthread_keys[idx].destr (data); * when the event-loop thread is joined. * * This workaround is dedicated to all the plugin-UI-devs * who think their UI owns the complete process memory-space. * * NB. ardour itself does not use thread-keys for the * VST event-loop thread, and anyway, this thread is joined * only when ardour exit()s. If this would result in a leak, * nobody will care. 
*/ if (!getenv("ARDOUR_RUNNING_UNDER_VALGRIND")) { for (pthread_key_t i = 0; i < PTHREAD_KEYS_MAX; ++i) { if (pthread_getspecific(i)) { pthread_setspecific(i, NULL); } } } return NULL; } /*The VSTFX Init function - this needs to be called before the VSTFX engine can be accessed, it gets the UI thread running, opens a connection to X etc normally started in globals.cc*/ int vstfx_init() { assert(gui_quit == 0); pthread_mutex_init(&plugin_mutex, NULL); int thread_create_result; pthread_attr_t thread_attributes; /*Init the attribs to defaults*/ pthread_attr_init(&thread_attributes); /*Make sure the thread is joinable - this should be the default anyway - so we can join to it on vstfx_exit*/ pthread_attr_setdetachstate(&thread_attributes, PTHREAD_CREATE_JOINABLE); /*This is where we need to open a connection to X, and start the GUI thread*/ /*Open our connection to X - all linuxVST plugin UIs handled by the LXVST engine will talk to X down this connection - X cannot handle multi-threaded access via the same Display* */ if (LXVST_XDisplay == NULL) { LXVST_XDisplay = XOpenDisplay(NULL); //We might be able to make this open a specific screen etc } /*Drop out and report the error if we fail to connect to X */ if (LXVST_XDisplay == NULL) { printf("** ERROR ** VSTFX: Failed opening connection to X\n"); return -1; } /*We have a connection to X - so start the gui event loop*/ /*Create the thread - use default attrs for now, don't think we need anything special*/ thread_create_result = pthread_create(&LXVST_gui_event_thread, &thread_attributes, gui_event_loop, NULL); if (thread_create_result != 0) { /*There was a problem starting the GUI event thread*/ printf("** ERROR ** VSTFX: Failed starting GUI event thread\n"); XCloseDisplay(LXVST_XDisplay); LXVST_XDisplay = 0; gui_quit = 1; return -1; } return 0; } /*The vstfx Quit function*/ void vstfx_exit() { if (gui_quit) { return; } gui_quit = 1; /*We need to pthread_join the gui_thread here so we know when it has stopped*/ 
pthread_join(LXVST_gui_event_thread, NULL); pthread_mutex_destroy(&plugin_mutex); } /*Adds a new plugin (VSTFX) instance to the linked list*/ void vstfx_get_window_size(AudioEffectVST2 *p_effect, int *w, int *h) { *w = 0; *h = 0; pthread_mutex_lock(&plugin_mutex); VSTState *p = vstfx_first; while (p) { if (p->effect_vst == p_effect) { *w = p->width; *h = p->height; } p = p->next; } pthread_mutex_unlock(&plugin_mutex); } int vstfx_run_editor(AudioEffectVST2 *p_effect, EffectEditorVST2 *p_editor) { VSTState *vstfx = (VSTState *)calloc(1, sizeof(VSTState)); vstfx->effect_vst = p_effect; vstfx->editor_vst = p_editor; pthread_mutex_init(&vstfx->lock, 0); //pthread_mutex_init (&vstfx->vstfx_lock, 0); pthread_cond_init(&vstfx->window_status_change, 0); //pthread_cond_init (&vstfx->plugin_dispatcher_called, 0); //pthread_cond_init (&vstfx->window_created, 0); pthread_mutex_lock(&plugin_mutex); /* Add the new VSTFX instance to the linked list */ if (vstfx_first == NULL) { vstfx_first = vstfx; } else { VSTState *p = vstfx_first; while (p->next) { p = p->next; } p->next = vstfx; /* Mark the new end of the list */ vstfx->next = NULL; } pthread_mutex_unlock(&plugin_mutex); /* wait for the plugin editor window to be created (or not) */ pthread_mutex_lock(&vstfx->lock); if (!vstfx->linux_window) { pthread_cond_wait(&vstfx->window_status_change, &vstfx->lock); } pthread_mutex_unlock(&vstfx->lock); if (!vstfx->linux_window) { return -1; } return vstfx->xid; } /*Creates an editor for the plugin - normally called from within the gui event loop after run_editor has added the plugin (editor) to the linked list*/ int vstfx_create_editor(VSTState *vstfx) { Window parent_window; int x_size = 1; int y_size = 1; /* Note: vstfx->lock is held while this function is called */ if (!(vstfx->effect_vst->has_user_interface())) { printf("** ERROR ** VSTFX: Plugin \"%s\" has no editor\n", vstfx->effect_vst->get_name().utf8().get_data()); return -1; } /*Create an XWindow for the plugin to inhabit*/ 
parent_window = XCreateSimpleWindow(
			LXVST_XDisplay, DefaultRootWindow(LXVST_XDisplay), 0, 0, x_size, y_size, 0, 0, 0);

	/*Select the events we are interested in receiving - we need Substructure notify
	so that if the plugin resizes its window - e.g. Loomer Manifold then we get a message*/

	XSelectInput(LXVST_XDisplay, parent_window, SubstructureNotifyMask | ButtonPressMask | ButtonReleaseMask | ButtonMotionMask | ExposureMask);

	vstfx->linux_window = parent_window;
	vstfx->xid = parent_window; //vstfx->xid will be referenced to connect to GTK UI in ardour later

	/*Because the plugin may be operating on a different Display* to us, and therefore
	the two event queues can be asynchronous, although we have created the window on
	our display, we can't guarantee it exists in the server yet, which will cause
	BadWindow crashes if the plugin tries to use it.

	It would be nice to use CreateNotify events here, but they don't get through on
	all window managers, so instead we pass a client message into our queue, after the XCreateWindow.
When this message pops out in our event handler, it will trigger the second stage of plugin Editor instantiation, and by then the Window should be valid...*/ XClientMessageEvent event; /*Create an atom to identify our message (only if it doesn't already exist)*/ Atom WindowActiveAtom = XInternAtom(LXVST_XDisplay, "LaunchEditor", false); event.type = ClientMessage; event.send_event = true; event.window = parent_window; event.message_type = WindowActiveAtom; event.format = 32; //Data format event.data.l[0] = 0x0FEEDBAC; //Something we can recognize later /*Push the event into the queue on our Display*/ XSendEvent(LXVST_XDisplay, parent_window, FALSE, NoEventMask, (XEvent *)&event); return 0; } int vstfx_launch_editor(VSTState *vstfx) { /*This is the second stage of launching the editor (see vstfx_create editor) we get called here in response to receiving the ClientMessage on our Window, therefore it's about as safe (as can be) to assume that the Window we created is now valid in the XServer and can be passed to the plugin in effEditOpen without generating BadWindow errors when the plugin reparents itself into our parent window*/ if (vstfx->been_activated) return 0; Window parent_window; struct ERect *er = NULL; int x_size = 1; int y_size = 1; parent_window = vstfx->linux_window; /*Open the editor - Bah! 
we have to pass the int windowID as a void pointer - yuck it gets cast back to an int as the parent window XID in the plugin - and we have to pass the Display* as a long */ /**************************************************************/ /* 64Bit --- parent window is an int passed as a void* so */ /* that should be ok for 64Bit machines */ /* */ /* Display is passed in as a long - ok on arch's where sizeof */ /* long = 8 */ /* */ /* Most linux VST plugins open a connection to X on their own */ /* Display anyway so it may not matter */ /* */ /* linuxDSP VSTs don't use the host Display* at all */ /**************************************************************/ vstfx->effect_vst->open_user_interface((long)LXVST_XDisplay, (void *)(parent_window)); /*QUIRK - some plugins need a slight delay after opening the editor before you can ask the window size or they might return zero - specifically discoDSP */ Glib::usleep(100000); /*Now we can find out how big the parent window should be (and try) to resize it*/ vstfx->effect_vst->get_user_interface_size(x_size, y_size); vstfx->width = x_size; vstfx->height = y_size; XResizeWindow(LXVST_XDisplay, parent_window, x_size, y_size); XFlush(LXVST_XDisplay); /*Not sure if we need to map the window or if the plugin will do it for us it should be ok because XReparentWindow generates a Map event*/ /*mark the editor as activated - mainly so that vstfx_get_XID will know it is valid*/ vstfx->been_activated = TRUE; pthread_cond_signal(&vstfx->window_status_change); return 0; } /** Destroy the editor window */ void vstfx_destroy_editor(AudioEffectVST2 *p_effect) { VSTState *vstfx = vstfx_first; while (vstfx) { if (vstfx->effect_vst == p_effect) { break; } vstfx = vstfx->next; } if (!vstfx) { printf("destroyed invalid effect..\n"); return; } pthread_mutex_lock(&vstfx->lock); if (vstfx->linux_window) { vstfx->destroy = TRUE; pthread_cond_wait(&vstfx->window_status_change, &vstfx->lock); } pthread_mutex_unlock(&vstfx->lock); } /** Remove a vstfx 
instance from the linked list parsed by the event loop */
void vstfx_event_loop_remove_plugin(VSTState *vstfx) {
	/* This only ever gets called from within our GUI thread
	so we don't need to lock here - if we did there would be a deadlock anyway */

	VSTState *p;
	VSTState *prev;

	for (p = vstfx_first, prev = NULL; p; prev = p, p = p->next) {
		if (p == vstfx) {
			if (prev) {
				prev->next = p->next;
				break;
			}
		}
	}

	// if this function is called, there must be
	// at least one plugin in the linked list
	assert(vstfx_first);

	if (vstfx_first == vstfx) {
		vstfx_first = vstfx_first->next;
	}
}

#endif
zytrax-master/drivers/vst2/effect_editor_x11.h000066400000000000000000000006311347722000700217320ustar00rootroot00000000000000#ifndef EFFECT_EDITOR_X11_H
#define EFFECT_EDITOR_X11_H

#ifdef FREEDESKTOP_ENABLED

#include "drivers/vst2/effect_editor_vst2.h"

int vstfx_init();
void vstfx_exit();
int vstfx_run_editor(AudioEffectVST2 *p_effect, EffectEditorVST2 *p_editor);
void vstfx_get_window_size(AudioEffectVST2 *p_effect, int *w, int *h);
void vstfx_destroy_editor(AudioEffectVST2 *p_effect);

#endif

#endif // EFFECT_EDITOR_X11_H
zytrax-master/drivers/vst2/factory_wrapper_vst2.cpp000066400000000000000000000011521347722000700231560ustar00rootroot00000000000000
#include "effect_editor_vst2.h" //for include order, do not delete this comment

#include "audio_effect_provider_vst2.h"
#include "factory_wrapper_vst2.h"

AudioEffectProvider *create_vst2_provider() {
	initialize_vst2_editor();
	return new AudioEffectProviderVST2;
}

static Gtk::Widget *create_vst2_editor(AudioEffect *p_vst, EffectEditor *p_editor) {
	if (p_vst->get_provider_id() != AudioEffectProviderVST2::singleton->get_id()) {
		return NULL;
	}
	return new EffectEditorVST2(static_cast<AudioEffectVST2 *>(p_vst), p_editor);
}

EffectEditorPluginFunc get_vst2_editor_function() {
	return &create_vst2_editor;
}
zytrax-master/drivers/vst2/factory_wrapper_vst2.h000066400000000000000000000003721347722000700226260ustar00rootroot00000000000000#ifndef FACTORY_WRAPPER_VST2_H
#define FACTORY_WRAPPER_VST2_H #include "engine/song.h" #include "gui/effect_editor.h" AudioEffectProvider *create_vst2_provider(); EffectEditorPluginFunc get_vst2_editor_function(); #endif // FACTORY_WRAPPER_VST2_H zytrax-master/drivers/vst2/vestige.h000066400000000000000000000221261347722000700201100ustar00rootroot00000000000000/* * aeffectx.h - simple header to allow VeSTige compilation and eventually work * * Copyright (c) 2006 Javier Serrano Polo * * This file is part of Linux MultiMedia Studio - http://lmms.sourceforge.net * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public * License as published by the Free Software Foundation; either * version 2 of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public * License along with this program (see COPYING); if not, write to the * Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, * Boston, MA 02110-1301 USA. * */ #ifndef _AEFFECTX_H #define _AEFFECTX_H #include #include #include #ifdef WINDOWS_ENABLED #define VESTIGECALLBACK __cdecl #else #define VESTIGECALLBACK #endif #define CCONST(a, b, c, d) ((((int)a) << 24) | \ (((int)b) << 16) | \ (((int)c) << 8) | \ (((int)d) << 0)) const int audioMasterAutomate = 0; const int audioMasterVersion = 1; const int audioMasterCurrentId = 2; const int audioMasterIdle = 3; const int audioMasterPinConnected = 4; // unsupported? 
5 const int audioMasterWantMidi = 6; const int audioMasterGetTime = 7; const int audioMasterProcessEvents = 8; const int audioMasterSetTime = 9; const int audioMasterTempoAt = 10; const int audioMasterGetNumAutomatableParameters = 11; const int audioMasterGetParameterQuantization = 12; const int audioMasterIOChanged = 13; const int audioMasterNeedIdle = 14; const int audioMasterSizeWindow = 15; const int audioMasterGetSampleRate = 16; const int audioMasterGetBlockSize = 17; const int audioMasterGetInputLatency = 18; const int audioMasterGetOutputLatency = 19; const int audioMasterGetPreviousPlug = 20; const int audioMasterGetNextPlug = 21; const int audioMasterWillReplaceOrAccumulate = 22; const int audioMasterGetCurrentProcessLevel = 23; const int audioMasterGetAutomationState = 24; const int audioMasterOfflineStart = 25; const int audioMasterOfflineRead = 26; const int audioMasterOfflineWrite = 27; const int audioMasterOfflineGetCurrentPass = 28; const int audioMasterOfflineGetCurrentMetaPass = 29; const int audioMasterSetOutputSampleRate = 30; // unsupported? 31 const int audioMasterGetSpeakerArrangement = 31; // deprecated in 2.4? 
const int audioMasterGetVendorString = 32; const int audioMasterGetProductString = 33; const int audioMasterGetVendorVersion = 34; const int audioMasterVendorSpecific = 35; const int audioMasterSetIcon = 36; const int audioMasterCanDo = 37; const int audioMasterGetLanguage = 38; const int audioMasterOpenWindow = 39; const int audioMasterCloseWindow = 40; const int audioMasterGetDirectory = 41; const int audioMasterUpdateDisplay = 42; const int audioMasterBeginEdit = 43; const int audioMasterEndEdit = 44; const int audioMasterOpenFileSelector = 45; const int audioMasterCloseFileSelector = 46; // currently unused const int audioMasterEditFile = 47; // currently unused const int audioMasterGetChunkFile = 48; // currently unused const int audioMasterGetInputSpeakerArrangement = 49; // currently unused const int effFlagsHasEditor = 1; const int effFlagsCanReplacing = 1 << 4; // very likely const int effFlagsProgramChunks = 1 << 5; // from Ardour const int effFlagsIsSynth = 1 << 8; // currently unused const int effFlagsCanDoubleReplacing = 1 << 12; const int effOpen = 0; const int effClose = 1; // currently unused const int effSetProgram = 2; // currently unused const int effGetProgram = 3; // currently unused // The next one was gleaned from http://www.kvraudio.com/forum/viewtopic.php?p=1905347 const int effSetProgramName = 4; const int effGetProgramName = 5; // currently unused // The next two were gleaned from http://www.kvraudio.com/forum/viewtopic.php?p=1905347 const int effGetParamLabel = 6; const int effGetParamDisplay = 7; const int effGetParamName = 8; // currently unused const int effSetSampleRate = 10; const int effSetBlockSize = 11; const int effMainsChanged = 12; const int effEditGetRect = 13; const int effEditOpen = 14; const int effEditClose = 15; const int effEditIdle = 19; const int effEditTop = 20; const int effIdentify = 22; // from http://www.asseca.org/vst-24-specs/efIdentify.html const int effGetChunk = 23; // from Ardour const int effSetChunk = 24; 
// from Ardour const int effProcessEvents = 25; // The next one was gleaned from http://www.asseca.org/vst-24-specs/efCanBeAutomated.html const int effCanBeAutomated = 26; // The next one was gleaned from http://www.kvraudio.com/forum/viewtopic.php?p=1905347 const int effGetProgramNameIndexed = 29; // The next one was gleaned from http://www.asseca.org/vst-24-specs/efGetPlugCategory.html const int effGetPlugCategory = 35; const int effGetEffectName = 45; const int effGetParameterProperties = 56; // missing const int effGetVendorString = 47; const int effGetProductString = 48; const int effGetVendorVersion = 49; const int effCanDo = 51; // currently unused // The next one was gleaned from http://www.asseca.org/vst-24-specs/efIdle.html const int effIdle = 53; const int effGetVstVersion = 58; // currently unused // The next one was gleaned from http://www.asseca.org/vst-24-specs/efBeginSetProgram.html const int effBeginSetProgram = 67; // The next one was gleaned from http://www.asseca.org/vst-24-specs/efEndSetProgram.html const int effEndSetProgram = 68; // The next one was gleaned from http://www.asseca.org/vst-24-specs/efShellGetNextPlugin.html const int effShellGetNextPlugin = 70; // The next two were gleaned from http://www.kvraudio.com/forum/printview.php?t=143587&start=0 const int effStartProcess = 71; const int effStopProcess = 72; // The next one was gleaned from http://www.asseca.org/vst-24-specs/efBeginLoadBank.html const int effBeginLoadBank = 75; // The next one was gleaned from http://www.asseca.org/vst-24-specs/efBeginLoadProgram.html const int effBeginLoadProgram = 76; const int effSetProcessPrecision = 77; const int kEffectMagic = CCONST('V', 's', 't', 'P'); const int kVstLangEnglish = 1; const int kVstMidiType = 1; const int kVstNanosValid = 1 << 8; const int kVstPpqPosValid = 1 << 9; const int kVstTempoValid = 1 << 10; const int kVstBarsValid = 1 << 11; const int kVstCyclePosValid = 1 << 12; const int kVstTimeSigValid = 1 << 13; const int 
kVstSmpteValid = 1 << 14; // from Ardour const int kVstClockValid = 1 << 15; // from Ardour const int kVstTransportPlaying = 1 << 1; const int kVstTransportCycleActive = 1 << 2; const int kVstTransportChanged = 1; const int kVstSysExType = 6; const int kVstMaxVendorStrLen = 64; const int kVstMaxEffectNameLen = 32; const int kVstMaxProgNameLen = 24; const int kVstVersion = 2400; const int kVstMaxLabelLen = 64; class VstMidiEvent { public: // 00 int type; // 04 int byteSize; // 08 int deltaFrames; // 0c? int flags; // 10? int noteLength; // 14? int noteOffset; // 18 char midiData[4]; // 1c? char detune; // 1d? char noteOffVelocity; // 1e? char reserved1; // 1f? char reserved2; }; class VstEvent { public: // 00 int type; // 04 int byteSize; // 08 int deltaFrames; // 0c int flags; // 10 char empty5[16]; }; class VstEvents { public: // 00 int numEvents; // 04 void *reserved; // 08 VstEvent *events[2]; }; class AEffect { public: // Never use virtual functions!!! // 00-03 int magic; // dispatcher 04-07 intptr_t VESTIGECALLBACK (*dispatcher)(AEffect *, int, int, intptr_t, void *, float); // process, quite sure 08-0b void VESTIGECALLBACK (*process)(AEffect *, float **, float **, int); // setParameter 0c-0f void VESTIGECALLBACK (*setParameter)(AEffect *, int, float); // getParameter 10-13 float VESTIGECALLBACK (*getParameter)(AEffect *, int); // programs 14-17 int numPrograms; // Params 18-1b int numParams; // Input 1c-1f int numInputs; // Output 20-23 int numOutputs; // flags 24-27 int flags; // Fill somewhere 28-2b void *resvd1; void *resvd2; int initialDelay; // Zeroes 34-37 38-3b int empty3a; int empty3b; // 1.0f 3c-3f float unkown_float; // An object? 
pointer 40-43 void *object; // Zeroes 44-47 void *user; // Id 48-4b int32_t uniqueID; int32_t version; // processReplacing 50-53 void VESTIGECALLBACK (*processReplacing)(AEffect *, float **, float **, int); void VESTIGECALLBACK (*processDoubleReplacing)(AEffect *, double **, double **, int); char empty6[56]; }; class VstTimeInfo { public: // 00 double samplePos; // 08 double sampleRate; // 10 double nanoSeconds; // 18 double ppqPos; // 20? double tempo; // 28 double barStartPos; // 30? double cycleStartPos; // 38? double cycleEndPos; // 40? int timeSigNumerator; // 44? int timeSigDenominator; // unconfirmed 48 4c 50 char empty3[4 + 4 + 4]; // 54 int flags; }; typedef intptr_t VESTIGECALLBACK (*audioMasterCallback)(AEffect *, int32_t, int32_t, intptr_t, void *, float); class ERect { public: short top; short left; short bottom; short right; }; #endif zytrax-master/dsp/000077500000000000000000000000001347722000700145005ustar00rootroot00000000000000zytrax-master/dsp/SCsub000066400000000000000000000001641347722000700154430ustar00rootroot00000000000000Import('env'); Export('env'); targets=[] env.add_sources(targets,"*.cpp") env.libs+=env.Library('dsp', targets); zytrax-master/dsp/db.h000066400000000000000000000004671347722000700152450ustar00rootroot00000000000000#ifndef DB_H #define DB_H #include "typedefs.h" #include static _FORCE_INLINE_ float linear2db(float p_linear) { return log(p_linear) * 8.6858896380650365530225783783321; } static _FORCE_INLINE_ float db2linear(float p_db) { return exp(p_db * 0.11512925464970228420089957273422); } #endif // DB_H zytrax-master/dsp/filter.cpp000066400000000000000000000142101347722000700164670ustar00rootroot00000000000000#include "filter.h" #include void Filter::set_mode(Mode p_mode) { mode = p_mode; } void Filter::set_cutoff(float p_cutoff) { cutoff = p_cutoff; } void Filter::set_resonance(float p_resonance) { resonance = p_resonance; } void Filter::set_gain(float p_gain) { gain = p_gain; } void Filter::set_sampling_rate(float 
p_srate) { sampling_rate = p_srate; } void Filter::prepare_coefficients(Coeffs *p_coeffs) { int sr_limit = (sampling_rate / 2) + 512; double final_cutoff = (cutoff > sr_limit) ? sr_limit : cutoff; if (final_cutoff < 1) final_cutoff = 1; //don't allow less than this double omega = 2.0 * M_PI * final_cutoff / sampling_rate; double sin_v = sin(omega); double cos_v = cos(omega); double Q = resonance; if (Q <= 0.0) { Q = 0.0001; } if (mode == BANDPASS) Q *= 2.0; else if (mode == PEAK) Q *= 3.0; double tmpgain = gain; if (tmpgain < 0.001) tmpgain = 0.001; if (stages > 1) { Q = (Q > 1.0 ? pow(Q, 1.0 / stages) : Q); tmpgain = pow(tmpgain, 1.0 / (stages + 1)); } double alpha = sin_v / (2 * Q); double a0 = 1.0 + alpha; switch (mode) { case LOWPASS: { p_coeffs->b0 = (1.0 - cos_v) / 2.0; p_coeffs->b1 = 1.0 - cos_v; p_coeffs->b2 = (1.0 - cos_v) / 2.0; p_coeffs->a1 = -2.0 * cos_v; p_coeffs->a2 = 1.0 - alpha; } break; case HIGHPASS: { p_coeffs->b0 = (1.0 + cos_v) / 2.0; p_coeffs->b1 = -(1.0 + cos_v); p_coeffs->b2 = (1.0 + cos_v) / 2.0; p_coeffs->a1 = -2.0 * cos_v; p_coeffs->a2 = 1.0 - alpha; } break; case BANDPASS: { p_coeffs->b0 = alpha * sqrt(Q + 1); p_coeffs->b1 = 0.0; p_coeffs->b2 = -alpha * sqrt(Q + 1); p_coeffs->a1 = -2.0 * cos_v; p_coeffs->a2 = 1.0 - alpha; } break; case NOTCH: { p_coeffs->b0 = 1.0; p_coeffs->b1 = -2.0 * cos_v; p_coeffs->b2 = 1.0; p_coeffs->a1 = -2.0 * cos_v; p_coeffs->a2 = 1.0 - alpha; } break; case PEAK: { p_coeffs->b0 = (1.0 + alpha * tmpgain); p_coeffs->b1 = (-2.0 * cos_v); p_coeffs->b2 = (1.0 - alpha * tmpgain); p_coeffs->a1 = -2 * cos_v; p_coeffs->a2 = (1 - alpha / tmpgain); } break; case BANDLIMIT: { //this one is extra tricky double hicutoff = resonance; double centercutoff = (cutoff + resonance) / 2.0; double bandwidth = (log(centercutoff) - log(hicutoff)) / log((double)2); omega = 2.0 * M_PI * centercutoff / sampling_rate; alpha = sin(omega) * sinh(log((double)2) / 2 * bandwidth * omega / sin(omega)); a0 = 1 + alpha; p_coeffs->b0 = alpha; 
p_coeffs->b1 = 0; p_coeffs->b2 = -alpha; p_coeffs->a1 = -2 * cos(omega); p_coeffs->a2 = 1 - alpha; } break; case LOWSHELF: { double tmpq = sqrt(Q); if (tmpq <= 0) tmpq = 0.001; alpha = sin_v / (2 * tmpq); double beta = sqrt(tmpgain) / tmpq; a0 = (tmpgain + 1.0) + (tmpgain - 1.0) * cos_v + beta * sin_v; p_coeffs->b0 = tmpgain * ((tmpgain + 1.0) - (tmpgain - 1.0) * cos_v + beta * sin_v); p_coeffs->b1 = 2.0 * tmpgain * ((tmpgain - 1.0) - (tmpgain + 1.0) * cos_v); p_coeffs->b2 = tmpgain * ((tmpgain + 1.0) - (tmpgain - 1.0) * cos_v - beta * sin_v); p_coeffs->a1 = -2.0 * ((tmpgain - 1.0) + (tmpgain + 1.0) * cos_v); p_coeffs->a2 = ((tmpgain + 1.0) + (tmpgain - 1.0) * cos_v - beta * sin_v); } break; case HIGHSHELF: { double tmpq = sqrt(Q); if (tmpq <= 0) tmpq = 0.001; alpha = sin_v / (2 * tmpq); double beta = sqrt(tmpgain) / tmpq; a0 = (tmpgain + 1.0) - (tmpgain - 1.0) * cos_v + beta * sin_v; p_coeffs->b0 = tmpgain * ((tmpgain + 1.0) + (tmpgain - 1.0) * cos_v + beta * sin_v); p_coeffs->b1 = -2.0 * tmpgain * ((tmpgain - 1.0) + (tmpgain + 1.0) * cos_v); p_coeffs->b2 = tmpgain * ((tmpgain + 1.0) + (tmpgain - 1.0) * cos_v - beta * sin_v); p_coeffs->a1 = 2.0 * ((tmpgain - 1.0) - (tmpgain + 1.0) * cos_v); p_coeffs->a2 = ((tmpgain + 1.0) - (tmpgain - 1.0) * cos_v - beta * sin_v); } break; }; p_coeffs->b0 /= a0; p_coeffs->b1 /= a0; p_coeffs->b2 /= a0; p_coeffs->a1 /= 0.0 - a0; p_coeffs->a2 /= 0.0 - a0; //undenormalise /* p_coeffs->b0=undenormalise(p_coeffs->b0); p_coeffs->b1=undenormalise(p_coeffs->b1); p_coeffs->b2=undenormalise(p_coeffs->b2); p_coeffs->a1=undenormalise(p_coeffs->a1); p_coeffs->a2=undenormalise(p_coeffs->a2);*/ } void Filter::set_stages(int p_stages) { //adjust for multiple stages stages = p_stages; } /* Fouriertransform kernel to obtain response */ float Filter::get_response(float p_freq, Coeffs *p_coeffs) { float freq = p_freq / sampling_rate * M_PI * 2.0f; float cx = p_coeffs->b0, cy = 0.0; cx += cos(freq) * p_coeffs->b1; cy -= sin(freq) * p_coeffs->b1; cx += 
cos(2 * freq) * p_coeffs->b2; cy -= sin(2 * freq) * p_coeffs->b2; float H = cx * cx + cy * cy; cx = 1.0; cy = 0.0; cx -= cos(freq) * p_coeffs->a1; cy += sin(freq) * p_coeffs->a1; cx -= cos(2 * freq) * p_coeffs->a2; cy += sin(2 * freq) * p_coeffs->a2; H = H / (cx * cx + cy * cy); return H; } Filter::Filter() { sampling_rate = 44100; resonance = 0.5; cutoff = 5000; gain = 1.0; mode = LOWPASS; stages = 1; } Filter::Processor::Processor() { set_filter(NULL); } void Filter::Processor::set_filter(Filter *p_filter, bool p_clear_history) { if (p_clear_history) { ha1 = ha2 = hb1 = hb2 = 0; } filter = p_filter; } void Filter::Processor::update_coeffs(int p_interp_buffer_len) { if (!filter) return; if (p_interp_buffer_len) { //interpolate Coeffs old_coeffs = coeffs; filter->prepare_coefficients(&coeffs); incr_coeffs.a1 = (coeffs.a1 - old_coeffs.a1) / p_interp_buffer_len; incr_coeffs.a2 = (coeffs.a2 - old_coeffs.a2) / p_interp_buffer_len; incr_coeffs.b0 = (coeffs.b0 - old_coeffs.b0) / p_interp_buffer_len; incr_coeffs.b1 = (coeffs.b1 - old_coeffs.b1) / p_interp_buffer_len; incr_coeffs.b2 = (coeffs.b2 - old_coeffs.b2) / p_interp_buffer_len; coeffs = old_coeffs; } else { filter->prepare_coefficients(&coeffs); } } void Filter::Processor::process(float *p_samples, int p_amount, int p_stride, bool p_interpolate) { if (!filter) return; if (p_interpolate) { for (int i = 0; i < p_amount; i++) { process_one_interp(*p_samples); p_samples += p_stride; } } else { for (int i = 0; i < p_amount; i++) { process_one(*p_samples); p_samples += p_stride; } } } zytrax-master/dsp/filter.h000066400000000000000000000041741347722000700161440ustar00rootroot00000000000000#ifndef FILTER_H #define FILTER_H #include "globals/typedefs.h" class Filter { public: struct Coeffs { float a1, a2; float b0, b1, b2; //bool operator==(const Coeffs &p_rv) { return (FLOATS_EQ(a1,p_rv.a1) && FLOATS_EQ(a2,p_rv.a2) && FLOATS_EQ(b1,p_rv.b1) && FLOATS_EQ(b2,p_rv.b2) && FLOATS_EQ(b0,p_rv.b0) ); } Coeffs() { a1 = a2 = b0 = b1 
= b2 = 0.0; } }; enum Mode { BANDPASS, HIGHPASS, LOWPASS, NOTCH, PEAK, BANDLIMIT, LOWSHELF, HIGHSHELF }; class Processor { // simple filter processor Filter *filter; Coeffs coeffs; float ha1, ha2, hb1, hb2; //history Coeffs incr_coeffs; public: void set_filter(Filter *p_filter, bool p_clear_history = true); void process(float *p_samples, int p_amount, int p_stride = 1, bool p_interpolate = false); void update_coeffs(int p_interp_buffer_len = 0); _FORCE_INLINE_ void process_one(float &p_sample); _FORCE_INLINE_ void process_one_interp(float &p_sample); void clear() { ha1 = ha2 = hb1 = hb2 = 0; } Processor(); }; private: float cutoff; float resonance; float gain; float sampling_rate; int stages; Mode mode; public: float get_response(float p_freq, Coeffs *p_coeffs); void set_mode(Mode p_mode); void set_cutoff(float p_cutoff); void set_resonance(float p_resonance); void set_gain(float p_gain); void set_sampling_rate(float p_srate); void set_stages(int p_stages); //adjust for multiple stages void prepare_coefficients(Coeffs *p_coeffs); Filter(); }; /* inline methods */ void Filter::Processor::process_one(float &p_sample) { float pre = p_sample; p_sample = (p_sample * coeffs.b0 + hb1 * coeffs.b1 + hb2 * coeffs.b2 + ha1 * coeffs.a1 + ha2 * coeffs.a2); ha2 = ha1; hb2 = hb1; hb1 = pre; ha1 = p_sample; } void Filter::Processor::process_one_interp(float &p_sample) { float pre = p_sample; p_sample = (p_sample * coeffs.b0 + hb1 * coeffs.b1 + hb2 * coeffs.b2 + ha1 * coeffs.a1 + ha2 * coeffs.a2); ha2 = ha1; hb2 = hb1; hb1 = pre; ha1 = p_sample; coeffs.b0 += incr_coeffs.b0; coeffs.b1 += incr_coeffs.b1; coeffs.b2 += incr_coeffs.b2; coeffs.a1 += incr_coeffs.a1; coeffs.a2 += incr_coeffs.a2; } #endif // FILTER_H zytrax-master/dsp/frame.h000066400000000000000000000046721347722000700157540ustar00rootroot00000000000000#ifndef AudioFrame_H #define AudioFrame_H #include "typedefs.h" static inline float undenormalise(volatile float f) { union { uint32_t i; float f; } v; v.f = f; // original: 
return (v.i & 0x7f800000) == 0 ? 0.0f : f; // version from Tim Blechmann: return (v.i & 0x7f800000) < 0x08000000 ? 0.0f : f; } struct AudioFrame { float l, r; _FORCE_INLINE_ const float &operator[](int idx) const { return idx == 0 ? l : r; } _FORCE_INLINE_ float &operator[](int idx) { return idx == 0 ? l : r; } _FORCE_INLINE_ AudioFrame operator+(const AudioFrame &p_frame) const { return AudioFrame(l + p_frame.l, r + p_frame.r); } _FORCE_INLINE_ AudioFrame operator-(const AudioFrame &p_frame) const { return AudioFrame(l - p_frame.l, r - p_frame.r); } _FORCE_INLINE_ AudioFrame operator*(const AudioFrame &p_frame) const { return AudioFrame(l * p_frame.l, r * p_frame.r); } _FORCE_INLINE_ AudioFrame operator/(const AudioFrame &p_frame) const { return AudioFrame(l / p_frame.l, r / p_frame.r); } _FORCE_INLINE_ void operator+=(const AudioFrame &p_frame) { l += p_frame.l; r += p_frame.r; } _FORCE_INLINE_ void operator-=(const AudioFrame &p_frame) { l -= p_frame.l; r -= p_frame.r; } _FORCE_INLINE_ void operator*=(const AudioFrame &p_frame) { l *= p_frame.l; r *= p_frame.r; } _FORCE_INLINE_ void operator/=(const AudioFrame &p_frame) { l /= p_frame.l; r /= p_frame.r; } _FORCE_INLINE_ AudioFrame operator+(float p_frame) const { return AudioFrame(l + p_frame, r + p_frame); } _FORCE_INLINE_ AudioFrame operator-(float p_frame) const { return AudioFrame(l - p_frame, r - p_frame); } _FORCE_INLINE_ AudioFrame operator*(float p_frame) const { return AudioFrame(l * p_frame, r * p_frame); } _FORCE_INLINE_ AudioFrame operator/(float p_frame) const { return AudioFrame(l / p_frame, r / p_frame); } _FORCE_INLINE_ void operator+=(float p_frame) { l += p_frame; r += p_frame; } _FORCE_INLINE_ void operator-=(float p_frame) { l -= p_frame; r -= p_frame; } _FORCE_INLINE_ void operator*=(float p_frame) { l *= p_frame; r *= p_frame; } _FORCE_INLINE_ void operator/=(float p_frame) { l /= p_frame; r /= p_frame; } _FORCE_INLINE_ AudioFrame(float p_l, float p_r) { l = p_l; r = p_r; } _FORCE_INLINE_ 
AudioFrame(const AudioFrame &p_frame) { l = p_frame.l; r = p_frame.r; } _FORCE_INLINE_ void undenormalise() { l = ::undenormalise(l); r = ::undenormalise(r); } _FORCE_INLINE_ AudioFrame() {} }; #endif // AudioFrame_H zytrax-master/dsp/midi_event.cpp000066400000000000000000000041511347722000700173300ustar00rootroot00000000000000#include "midi_event.h" const char *MIDIEvent::cc_names[CC_MAX]{ "BankSelectMSB", "Modulation", "Breath", "Foot", "PortamentoTime", "DataEntryMSB", "MainVolume", "Pan", "Expression", "BankSelectLSB", "DataEntryLSB", "DamperPedalToggle", "PortamentoToggle", "SostenutoToggle", "SoftPedalToggle", "FilterCutoff", "ReleaseTime", "AttackTime", "FilterResonance", "DecayTime", "VibratoDepth", "VibratoRate", "VibratoDelay", "PortamentoControl", "ReverbSend", "Fx2Send", "ChorusSend", "Fx4Send", "DataIncrement", "DataDecrement", "NRPN_LSB", "NRPN_MSB", "RPN_LSB", "RPN_MSB", "AllSoundsOffCmd", "ResetAllCmd", "LocalCtrlToggle", "AllNotesOff" }; const unsigned char MIDIEvent::cc_indices[CC_MAX]{ 0, 1, 2, 4, 5, 6, 7, 10, 11, 32, 38, 64, 65, 66, 67, 71, 72, 73, 74, 75, 76, 77, 78, 84, 91, 92, 93, 94, 96, 97, 98, 99, 120, 121, 122, 123, 128 }; Error MIDIEvent::parse(unsigned char *p_raw) { switch (p_raw[0] >> 4) { case 0x8: { type = MIDI_NOTE_OFF; note.note = p_raw[1]; note.velocity = p_raw[2]; } break; case 0x9: { type = MIDI_NOTE_ON; note.note = p_raw[1]; note.velocity = p_raw[2]; } break; case 0xA: { type = MIDI_NOTE_PRESSURE; note.note = p_raw[1]; note.velocity = p_raw[2]; } break; case 0xB: { type = MIDI_CONTROLLER; control.index = p_raw[1]; control.parameter = p_raw[2]; } break; case 0xC: { type = MIDI_PATCH_SELECT; patch.index = p_raw[1]; } break; case 0xD: { type = MIDI_AFTERTOUCH; aftertouch.pressure = p_raw[1]; } break; case 0xE: { type = MIDI_PITCH_BEND; pitch_bend.bend = (short(p_raw[1]) << 7) | short(p_raw[2]); } break; default: { return ERR_INVALID_PARAMETER; } } channel = p_raw[0] & 0xF; return OK; } MIDIEvent::MIDIEvent() { type = NONE; 
raw.param1 = 0; raw.param2 = 0; } MIDIEvent::MIDIEvent(Type p_type, unsigned char p_chan, unsigned char data1, unsigned char data2) { type = p_type; raw.param1 = data1; raw.param2 = data2; } MIDIEvent::MIDIEvent(Type p_type, unsigned char p_chan, unsigned short data) { type = p_type; raw2.param = data; } zytrax-master/dsp/midi_event.h000066400000000000000000000054061347722000700170010ustar00rootroot00000000000000#ifndef MIDI_EVENT_H #define MIDI_EVENT_H #include "globals/error_list.h" struct MIDIEvent { public: enum CC { //enum of those supported, use cc_indices to get actual number CC_BANK_SELECT_MSB, CC_MODULATION, CC_BREATH, CC_FOOT, CC_PORTAMENTO_TIME, CC_DATA_ENTRY_MSB, CC_MAIN_VOLUME, CC_PAN, CC_EXPRESSION, CC_BANK_SELECT_LSB, CC_DATA_ENTRY_LSB, CC_DAMPER_PEDAL_TOGGLE, CC_PORTAMENTO_TOGGLE, CC_SOSTENUTO_TOGGLE, CC_SOFT_PEDAL_TOGGLE, CC_FILTER_CUTOFF, CC_RELEASE_TIME, CC_ATTACK_TIME, CC_FILTER_RESONANCE, CC_DECAY_TIME, CC_VIBRATO_DEPTH, CC_VIBRATO_RATE, CC_VIBRATO_DELAY, CC_PORTAMENTO_CONTROL, CC_REVERB_SEND, CC_FX2_SEND, CC_CHORUS_SEND, CC_FX4_SEND, CC_DATA_INCREMENT, CC_DATA_DECREMENT, CC_NRPN_LSB, CC_NRPN_MSB, CC_RPN_LSB, CC_RPN_MSB, CC_ALL_SOUNDS_OFF_CMD, CC_RESET_ALL_CC_CMD, CC_LOCAL_CTRL_TOGGLE, CC_ALL_NOTES_OFF, CC_MAX }; static const char *cc_names[CC_MAX]; static const unsigned char cc_indices[CC_MAX]; enum Type { NONE = 0x0, SEQ_TEMPO = 0x1, SEQ_SIGNATURE = 0x2, SEQ_BAR = 0x3, SEQ_BEAT = 0x4, SEQ_SCALE = 0x5, STREAM_TAIL = 0x7, // end of stream, for stream buffers MIDI_NOTE_OFF = 0x8, MIDI_NOTE_ON = 0x9, MIDI_NOTE_PRESSURE = 0xA, MIDI_CONTROLLER = 0xB, MIDI_PATCH_SELECT = 0xC, MIDI_AFTERTOUCH = 0xD, //channel pressure MIDI_PITCH_BEND = 0xE, MIDI_SYSEX = 0xF, //this will not be used here for now anway }; unsigned char type; //see Type enum unsigned char channel; // 0 - 15 union { struct { unsigned char param1; unsigned char param2; } raw; struct { /* raw, 2 bytes */ unsigned short param; } raw2; struct { /* Note On / Note Off / Note Pressure */ 
unsigned char note; unsigned char velocity; } note; struct { /* Controller */ unsigned char index; //see cc_indices[CC] enum unsigned char parameter; } control; struct { /* Channel Pressure */ unsigned char pressure; } aftertouch; struct { /* Patch */ unsigned char index; } patch; struct { /* Pitch Bend */ unsigned short bend; /* 0 - 0x3999 */ } pitch_bend; struct { unsigned short tempo; } tempo; struct { unsigned char num; unsigned char denom; } signature; struct { unsigned short bar; // warning, max is 65535, It's a high number but may roll around } bar; struct { unsigned char beat; } beat; struct { enum ScaleType { SCALE_MAJOR, SCALE_MINOR, /* Will have add more later */ }; unsigned char scale_type; char key_note; /* 0 .. 11 */ } scale; }; Error parse(unsigned char *p_raw); MIDIEvent(); MIDIEvent(Type p_type, unsigned char p_chan, unsigned char data1, unsigned char data2); MIDIEvent(Type p_type, unsigned char p_chan, unsigned short data); }; #endif // EVENT_H zytrax-master/effects/000077500000000000000000000000001347722000700153315ustar00rootroot00000000000000zytrax-master/effects/SCsub000066400000000000000000000003141347722000700162710ustar00rootroot00000000000000Import('env'); Export('env'); targets=[] env.add_sources(targets,"internal/*.cpp") #env.add_sources(targets,"sampler/*.cpp") env.add_sources(targets,"*.cpp") env.libs+=env.Library('effects', targets); zytrax-master/effects/effects.cpp000066400000000000000000000307401347722000700174600ustar00rootroot00000000000000#include "effects.h" #include "effects/internal/effect_amplifier.h" #include "effects/internal/effect_chorus.h" #include "effects/internal/effect_compressor.h" #include "effects/internal/effect_delay.h" #include "effects/internal/effect_equalizer.h" #include "effects/internal/effect_filter.h" #include "effects/internal/effect_note_puncher.h" #include "effects/internal/effect_panner.h" #include "effects/internal/effect_phaser.h" #include "effects/internal/effect_reverb.h" #include 
"effects/internal/effect_stereo_enhancer.h" #include "gui/interface.h" class AudioEffectProviderInternal : public AudioEffectProvider { public: virtual AudioEffect *instantiate_effect(const AudioEffectInfo *p_info) { if (p_info->unique_ID == "reverb") { return new AudioEffectReverb; } if (p_info->unique_ID == "chorus") { return new AudioEffectChorus; } if (p_info->unique_ID == "compressor") { return new AudioEffectCompressor(false); } if (p_info->unique_ID == "sc_compressor") { return new AudioEffectCompressor(true); } if (p_info->unique_ID == "bpm_delay") { return new AudioEffectDelay(true); } if (p_info->unique_ID == "delay") { return new AudioEffectDelay(false); } if (p_info->unique_ID == "eq_6") { return new AudioEffectEqualizer(EQ::PRESET_6_BANDS); } if (p_info->unique_ID == "eq_10") { return new AudioEffectEqualizer(EQ::PRESET_10_BANDS); } if (p_info->unique_ID == "eq_21") { return new AudioEffectEqualizer(EQ::PRESET_21_BANDS); } if (p_info->unique_ID == "panner") { return new AudioEffectPanner; } if (p_info->unique_ID == "amplifier") { return new AudioEffectAmplifier; } if (p_info->unique_ID == "stereo_enhancer") { return new AudioEffectStereoEnhancer; } if (p_info->unique_ID == "phaser") { return new AudioEffectPhaser; } if (p_info->unique_ID == "filter_band_pass") { return new AudioEffectFilter(Filter::BANDPASS); } if (p_info->unique_ID == "filter_high_pass") { return new AudioEffectFilter(Filter::HIGHPASS); } if (p_info->unique_ID == "filter_low_pass") { return new AudioEffectFilter(Filter::LOWPASS); } if (p_info->unique_ID == "filter_notch") { return new AudioEffectFilter(Filter::NOTCH); } if (p_info->unique_ID == "filter_peak") { return new AudioEffectFilter(Filter::PEAK); } if (p_info->unique_ID == "filter_band_limit") { return new AudioEffectFilter(Filter::BANDLIMIT); } if (p_info->unique_ID == "filter_low_shelf") { return new AudioEffectFilter(Filter::LOWSHELF); } if (p_info->unique_ID == "filter_high_shelf") { return new 
AudioEffectFilter(Filter::HIGHSHELF); } if (p_info->unique_ID == "note_puncher") { return new AudioEffectNotePuncher; } return NULL; } virtual void scan_effects(AudioEffectFactory *p_factory, ScanCallback p_callback, void *p_userdata) { //these are not scanned } virtual String get_id() const { return "internal"; } String get_name() const { return "Internal"; } }; AudioEffectProviderInternal internal_provider; void register_effects(AudioEffectFactory *p_factory) { p_factory->add_provider(&internal_provider); { //Reverb AudioEffectInfo info; info.caption = "Reverb"; info.description = "Standard Comb/Allpass filter based reverb."; info.author = "Juan Linietsky"; info.unique_ID = "reverb"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Chorus AudioEffectInfo info; info.caption = "Chorus"; info.description = "Standard LFO-Based Multi-Voice chorus."; info.author = "Juan Linietsky"; info.unique_ID = "chorus"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Compressor AudioEffectInfo info; info.caption = "Compressor"; info.description = "Standard Threshold/Ratio envelope based compressor"; info.author = "Juan Linietsky"; info.unique_ID = "compressor"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Sidechain Compressor AudioEffectInfo info; info.caption = "Compressor (Sidechain)"; info.description = "Standard Threshold/Ratio envelope based sidechain compressor"; info.author = "Juan Linietsky"; info.unique_ID = "sc_compressor"; 
info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Delay AudioEffectInfo info; info.caption = "Delay"; info.description = "Standard delay with 4 taps"; info.author = "Juan Linietsky"; info.unique_ID = "delay"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Delay AudioEffectInfo info; info.caption = "Delay (BPM)"; info.description = "Standard BPM-synced delay with 4 taps"; info.author = "Juan Linietsky"; info.unique_ID = "bpm_delay"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Equalizer AudioEffectInfo info; info.caption = "Equalizer (6 Bands)"; info.description = "Standard Equalizer"; info.author = "Juan Linietsky"; info.unique_ID = "eq_6"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Equalizer AudioEffectInfo info; info.caption = "Equalizer (10 Bands)"; info.description = "Standard Equalizer"; info.author = "Juan Linietsky"; info.unique_ID = "eq_10"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Equalizer AudioEffectInfo info; info.caption = "Equalizer (21 Bands)"; info.description = "Standard Equalizer"; info.author = "Juan Linietsky"; info.unique_ID = "eq_21"; 
info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Panner AudioEffectInfo info; info.caption = "Panner"; info.description = "Change panning"; info.author = "Juan Linietsky"; info.unique_ID = "panner"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Amplifier AudioEffectInfo info; info.caption = "Amplifier"; info.description = "Standard amplifier"; info.author = "Juan Linietsky"; info.unique_ID = "amplifier"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Stereo Enhancer AudioEffectInfo info; info.caption = "Stereo Enhancer"; info.description = "Stereo Enhancer using multiple techniques"; info.author = "Juan Linietsky"; info.unique_ID = "stereo_enhancer"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //Phaser AudioEffectInfo info; info.caption = "Phaser"; info.description = "Simpler phaser effect"; info.author = "Juan Linietsky"; info.unique_ID = "phaser"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //puncher AudioEffectInfo info; info.caption = "Note Puncher"; info.description = "Adds a small punch envelope to notes"; info.author = "Juan Linietsky"; info.unique_ID = 
"note_puncher"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //filter AudioEffectInfo info; info.caption = "Low Pass Filter"; info.description = "Standard low pass filter"; info.author = "Juan Linietsky"; info.unique_ID = "filter_low_pass"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //filter AudioEffectInfo info; info.caption = "High Pass Filter"; info.description = "Standard high pass filter"; info.author = "Juan Linietsky"; info.unique_ID = "filter_high_pass"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //filter AudioEffectInfo info; info.caption = "Band Pass Filter"; info.description = "Standard band pass filter"; info.author = "Juan Linietsky"; info.unique_ID = "filter_band_pass"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //filter AudioEffectInfo info; info.caption = "Notch Filter"; info.description = "Standard notch filter"; info.author = "Juan Linietsky"; info.unique_ID = "filter_notch"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //filter AudioEffectInfo info; info.caption = "Peak Filter"; info.description = "Standard peak filter"; info.author = "Juan 
Linietsky"; info.unique_ID = "filter_peak"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //filter AudioEffectInfo info; info.caption = "Band Limit Filter"; info.description = "Standard band limit filter"; info.author = "Juan Linietsky"; info.unique_ID = "filter_band_limit"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //filter AudioEffectInfo info; info.caption = "Low Shelf Filter"; info.description = "Standard low shelf filter"; info.author = "Juan Linietsky"; info.unique_ID = "filter_low_shelf"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } { //filter AudioEffectInfo info; info.caption = "High Shelf Filter"; info.description = "Standard high shelf filter"; info.author = "Juan Linietsky"; info.unique_ID = "filter_high_shelf"; info.provider_caption = "Internal"; info.category = "Internal Effects"; info.version = "1.0"; info.synth = false; info.has_ui = false; info.internal = true; info.provider_id = "internal"; p_factory->add_audio_effect(info); } }

zytrax-master/effects/effects.h
#ifndef EFFECTS_H #define EFFECTS_H #include "engine/audio_effect.h" void register_effects(AudioEffectFactory *p_factory); #endif // EFFECTS_H
zytrax-master/effects/internal/000077500000000000000000000000001347722000700171455ustar00rootroot00000000000000zytrax-master/effects/internal/effect_amplifier.cpp000066400000000000000000000043571347722000700231460ustar00rootroot00000000000000#include "effect_amplifier.h" #include "dsp/db.h" //process bool AudioEffectAmplifier::has_secondary_input() const { return false; } void AudioEffectAmplifier::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { float amp = db2linear(control_ports[CONTROL_PORT_AMPLIFY_DB].value); for (int i = 0; i < block_size; i++) { p_out[i].l = p_in[i].l * amp; p_out[i].r = p_in[i].r * amp; } } void AudioEffectAmplifier::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { } void AudioEffectAmplifier::set_process_block_size(int p_size) { block_size = p_size; } void AudioEffectAmplifier::set_sampling_rate(int p_hz) { } //info String AudioEffectAmplifier::get_name() const { return "Amplifier"; } String AudioEffectAmplifier::get_unique_id() const { return "amplifier"; } String AudioEffectAmplifier::get_provider_id() const { return "internal"; } int AudioEffectAmplifier::get_control_port_count() const { return CONTROL_PORT_MAX; } ControlPort *AudioEffectAmplifier::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void AudioEffectAmplifier::reset() { } /* Load/Save */ JSON::Node AudioEffectAmplifier::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectAmplifier::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = 
node.get(key).toFloat(); } } return OK; } AudioEffectAmplifier::AudioEffectAmplifier() { control_ports[CONTROL_PORT_AMPLIFY_DB].name = "Amplify (db)"; control_ports[CONTROL_PORT_AMPLIFY_DB].identifier = "amplify"; control_ports[CONTROL_PORT_AMPLIFY_DB].min = -60; control_ports[CONTROL_PORT_AMPLIFY_DB].max = 24; control_ports[CONTROL_PORT_AMPLIFY_DB].step = 0.1; control_ports[CONTROL_PORT_AMPLIFY_DB].value = 0; block_size = 128; } AudioEffectAmplifier::~AudioEffectAmplifier() { } zytrax-master/effects/internal/effect_amplifier.h000066400000000000000000000022301347722000700225770ustar00rootroot00000000000000#ifndef EFFECT_AMPLIFIER_H #define EFFECT_AMPLIFIER_H #include "engine/audio_effect.h" class AudioEffectAmplifier : public AudioEffect { enum ControlPorts { CONTROL_PORT_AMPLIFY_DB, CONTROL_PORT_MAX }; ControlPortDefault control_ports[CONTROL_PORT_MAX]; int block_size; public: //process virtual bool has_secondary_input() const; virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active); virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active); virtual void set_process_block_size(int p_size); virtual void set_sampling_rate(int p_hz); //info virtual String get_name() const; virtual String get_unique_id() const; virtual String get_provider_id() const; virtual int get_control_port_count() const; virtual ControlPort *get_control_port(int p_port); virtual void reset(); /* Load/Save */ virtual JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectAmplifier(); ~AudioEffectAmplifier(); }; #endif // EFFECT_AMPLIFIER_H zytrax-master/effects/internal/effect_chorus.cpp000066400000000000000000000231101347722000700224650ustar00rootroot00000000000000#include "effect_chorus.h" #include "dsp/db.h" #include //process bool AudioEffectChorus::has_secondary_input() const { 
return false; } void AudioEffectChorus::_process_chunk(const AudioFrame *p_src_frames, AudioFrame *p_dst_frames, int p_frame_count) { float base_wet = control_ports[CONTROL_PORT_WET].value; float base_dry = control_ports[CONTROL_PORT_DRY].value; //fill ringbuffer for (int i = 0; i < p_frame_count; i++) { audio_buffer[(buffer_pos + i) & buffer_mask] = p_src_frames[i]; p_dst_frames[i] = p_src_frames[i] * base_dry; } float mix_rate = sampling_rate; /* process voices */ int stride = CONTROL_PORT_VOICE2_ENABLED - CONTROL_PORT_VOICE1_ENABLED; for (int vc = 0; vc < 4; vc++) { int ofs = vc * stride; if (control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].value < 0.5) { continue; //voice disabled; } float v_rate = control_ports[CONTROL_PORT_VOICE1_HZ + ofs].value; float v_delay = control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].value; float v_depth = control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].value; float v_cutoff = control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].value; float v_pan = control_ports[CONTROL_PORT_VOICE1_PAN + ofs].value; float v_level = control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].value; double time_to_mix = (float)p_frame_count / mix_rate; double cycles_to_mix = time_to_mix * v_rate; unsigned int local_rb_pos = buffer_pos; AudioFrame *dst_buff = p_dst_frames; AudioFrame *rb_buff = &audio_buffer[0]; double delay_msec = v_delay; unsigned int delay_frames = llrint((delay_msec / 1000.0) * mix_rate); float max_depth_frames = (v_depth / 1000.0) * mix_rate; uint64_t local_cycles = cycles[vc]; uint64_t increment = llrint(cycles_to_mix / (double)p_frame_count * (double)(1 << AudioEffectChorus::CYCLES_FRAC)); //check the LFO doesn't read ahead of the write pos if ((((unsigned int)max_depth_frames) + 10) > delay_frames) { //10 as some threshold to avoid precision stuff delay_frames += (int)max_depth_frames - delay_frames; delay_frames += 10; //threshold to avoid precision stuff } //low pass filter if (v_cutoff == 0) continue; float auxlp = expf(-2.0 * M_PI * 
v_cutoff / mix_rate); float c1 = 1.0 - auxlp; float c2 = auxlp; AudioFrame h = filter_h[vc]; if (v_cutoff >= AudioEffectChorus::MS_CUTOFF_MAX) { c1 = 1.0; c2 = 0.0; } //vol modifier AudioFrame vol_modifier = AudioFrame(base_wet, base_wet) * db2linear(v_level); vol_modifier.l *= CLAMP(1.0 - v_pan, 0, 1); vol_modifier.r *= CLAMP(1.0 + v_pan, 0, 1); for (int i = 0; i < p_frame_count; i++) { /** COMPUTE WAVEFORM **/ float phase = (float)(local_cycles & AudioEffectChorus::CYCLES_MASK) / (float)(1 << AudioEffectChorus::CYCLES_FRAC); float wave_delay = sinf(phase * 2.0 * M_PI) * max_depth_frames; int wave_delay_frames = lrint(floor(wave_delay)); float wave_delay_frac = wave_delay - (float)wave_delay_frames; /** COMPUTE RINGBUFFER POS**/ unsigned int rb_source = local_rb_pos; rb_source -= delay_frames; rb_source -= wave_delay_frames; /** READ FROM RINGBUFFER, LINEARLY INTERPOLATE */ AudioFrame val = rb_buff[rb_source & buffer_mask]; AudioFrame val_next = rb_buff[(rb_source - 1) & buffer_mask]; val += (val_next - val) * wave_delay_frac; val = val * c1 + h * c2; h = val; /** MIX VALUE TO OUTPUT **/ dst_buff[i] += val * vol_modifier; local_cycles += increment; local_rb_pos++; } filter_h[vc] = h; cycles[vc] += lrint(cycles_to_mix * (double)(1 << AudioEffectChorus::CYCLES_FRAC)); } buffer_pos += p_frame_count; } void AudioEffectChorus::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { int todo = block_size; while (todo) { int to_mix = MIN(todo, 256); //can't mix too much _process_chunk(p_in, p_out, to_mix); p_in += to_mix; p_out += to_mix; todo -= to_mix; } } void AudioEffectChorus::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { } void AudioEffectChorus::set_process_block_size(int p_size) { block_size = p_size; } void AudioEffectChorus::set_sampling_rate(int p_hz) { if (sampling_rate != p_hz) { 
sampling_rate = p_hz; _update_buffers(); } } //info String AudioEffectChorus::get_name() const { return "Chorus"; } String AudioEffectChorus::get_unique_id() const { return "chorus"; } String AudioEffectChorus::get_provider_id() const { return "internal"; } int AudioEffectChorus::get_control_port_count() const { return CONTROL_PORT_MAX; } ControlPort *AudioEffectChorus::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void AudioEffectChorus::reset() { _update_buffers(); } /* Load/Save */ JSON::Node AudioEffectChorus::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectChorus::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectChorus::_update_buffers() { float ring_buffer_max_size = AudioEffectChorus::MAX_DELAY_MS + AudioEffectChorus::MAX_DEPTH_MS + AudioEffectChorus::MAX_WIDTH_MS; ring_buffer_max_size *= 2; //just to avoid complications ring_buffer_max_size /= 1000.0; //convert to seconds ring_buffer_max_size *= sampling_rate; int ringbuff_size = ring_buffer_max_size; int bits = 0; while (ringbuff_size > 0) { bits++; ringbuff_size /= 2; } ringbuff_size = 1 << bits; buffer_mask = ringbuff_size - 1; buffer_pos = 0; audio_buffer.resize(ringbuff_size); for (int i = 0; i < ringbuff_size; i++) { audio_buffer[i] = AudioFrame(0, 0); } } AudioEffectChorus::AudioEffectChorus() { control_ports[CONTROL_PORT_DRY].name = "Dry"; control_ports[CONTROL_PORT_DRY].identifier = "dry"; control_ports[CONTROL_PORT_DRY].min = 0; control_ports[CONTROL_PORT_DRY].max = 1; control_ports[CONTROL_PORT_DRY].step = 0.01; control_ports[CONTROL_PORT_DRY].value = 1; 
control_ports[CONTROL_PORT_WET].name = "Wet"; control_ports[CONTROL_PORT_WET].identifier = "wet"; control_ports[CONTROL_PORT_WET].min = 0; control_ports[CONTROL_PORT_WET].max = 1; control_ports[CONTROL_PORT_WET].step = 0.01; control_ports[CONTROL_PORT_WET].value = 0.5; int stride = CONTROL_PORT_VOICE2_ENABLED - CONTROL_PORT_VOICE1_ENABLED; for (int i = 0; i < 4; i++) { String pp = "Voice " + String::num(i + 1) + " "; String ppid = "voice_" + String::num(i + 0) + "_"; int ofs = i * stride; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].name = pp + "Enabled"; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].identifier = ppid + "enabled"; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].hint = ControlPort::HINT_TOGGLE; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].max = 1; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].value = i < 2 ? 1 : 0; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].name = pp + "Delay (ms)"; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].identifier = ppid + "delay"; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].max = MAX_DELAY_MS; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].value = 15 + i * 5; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].step = 0.1; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].name = pp + "Rate (hz)"; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].identifier = ppid + "rate"; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].min = 0.01; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].max = 20; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].value = 0.8 + i * 0.4; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].step = 0.01; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].name = pp + "Depth (ms)"; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].identifier = ppid + "depth"; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].max = MAX_DEPTH_MS; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].value = 2 + i; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].step = 0.1; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].name = pp + "Level (db)"; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].identifier = ppid + "level"; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].min = -60; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].max = 0; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].value = -2; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].step = 0.1; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].name = pp + "Cutoff (hz)"; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].identifier = ppid + "cutoff"; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].min = 1; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].max = MS_CUTOFF_MAX; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].value = 8000; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].step = 1; control_ports[CONTROL_PORT_VOICE1_PAN + ofs].name = pp + "Pan"; control_ports[CONTROL_PORT_VOICE1_PAN + ofs].identifier = ppid + "pan"; control_ports[CONTROL_PORT_VOICE1_PAN + ofs].min = -1; control_ports[CONTROL_PORT_VOICE1_PAN + ofs].max = 1; control_ports[CONTROL_PORT_VOICE1_PAN + ofs].value = 0.5 * ((i & 1) ? 
-1.0 : 1.0); control_ports[CONTROL_PORT_VOICE1_PAN + ofs].step = 0.01; } block_size = 128; sampling_rate = 0; set_sampling_rate(44100); set_process_block_size(128); _update_buffers(); } AudioEffectChorus::~AudioEffectChorus() { } zytrax-master/effects/internal/effect_chorus.h000066400000000000000000000046741347722000700221470ustar00rootroot00000000000000#ifndef EFFECT_CHORUS_H #define EFFECT_CHORUS_H #include "engine/audio_effect.h" class AudioEffectChorus : public AudioEffect { enum { MAX_DELAY_MS = 50, MAX_DEPTH_MS = 20, MAX_WIDTH_MS = 50, MAX_VOICES = 4, CYCLES_FRAC = 16, CYCLES_MASK = (1 << CYCLES_FRAC) - 1, MAX_CHANNELS = 4, MS_CUTOFF_MAX = 16000 }; Vector<AudioFrame> audio_buffer; unsigned int buffer_pos; unsigned int buffer_mask; AudioFrame filter_h[4]; uint64_t cycles[4]; void _process_chunk(const AudioFrame *p_src_frames, AudioFrame *p_dst_frames, int p_frame_count); int block_size; int sampling_rate; enum ControlPorts { CONTROL_PORT_WET, CONTROL_PORT_DRY, CONTROL_PORT_VOICE1_ENABLED, CONTROL_PORT_VOICE1_DELAY, CONTROL_PORT_VOICE1_HZ, CONTROL_PORT_VOICE1_DEPTH_MS, CONTROL_PORT_VOICE1_LEVEL_DB, CONTROL_PORT_VOICE1_CUTOFF_HZ, CONTROL_PORT_VOICE1_PAN, CONTROL_PORT_VOICE2_ENABLED, CONTROL_PORT_VOICE2_DELAY, CONTROL_PORT_VOICE2_HZ, CONTROL_PORT_VOICE2_DEPTH_MS, CONTROL_PORT_VOICE2_LEVEL_DB, CONTROL_PORT_VOICE2_CUTOFF_HZ, CONTROL_PORT_VOICE2_PAN, CONTROL_PORT_VOICE3_ENABLED, CONTROL_PORT_VOICE3_DELAY, CONTROL_PORT_VOICE3_HZ, CONTROL_PORT_VOICE3_DEPTH_MS, CONTROL_PORT_VOICE3_LEVEL_DB, CONTROL_PORT_VOICE3_CUTOFF_HZ, CONTROL_PORT_VOICE3_PAN, CONTROL_PORT_VOICE4_ENABLED, CONTROL_PORT_VOICE4_DELAY, CONTROL_PORT_VOICE4_HZ, CONTROL_PORT_VOICE4_DEPTH_MS, CONTROL_PORT_VOICE4_LEVEL_DB, CONTROL_PORT_VOICE4_CUTOFF_HZ, CONTROL_PORT_VOICE4_PAN, CONTROL_PORT_MAX }; ControlPortDefault control_ports[CONTROL_PORT_MAX]; void _update_buffers(); public: //process virtual bool has_secondary_input() const; virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, 
AudioFrame *p_out, bool p_prev_active); virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active); virtual void set_process_block_size(int p_size); virtual void set_sampling_rate(int p_hz); //info virtual String get_name() const; virtual String get_unique_id() const; virtual String get_provider_id() const; virtual int get_control_port_count() const; virtual ControlPort *get_control_port(int p_port); virtual void reset(); /* Load/Save */ virtual JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectChorus(); ~AudioEffectChorus(); }; #endif // EFFECT_CHORUS_H zytrax-master/effects/internal/effect_compressor.cpp000066400000000000000000000150061347722000700233630ustar00rootroot00000000000000#include "effect_compressor.h" #include "dsp/db.h" #include <math.h> //process bool AudioEffectCompressor::has_secondary_input() const { return sidechain; } void AudioEffectCompressor::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { process_with_secondary(p_events, p_event_count, p_in, NULL, p_out, p_prev_active); } void AudioEffectCompressor::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { float threshold = db2linear(control_ports[CONTROL_PORT_THRESHOLD_DB].value); float sample_rate = sampling_rate; float ratatcoef = exp(-1 / (0.00001f * sample_rate)); float ratrelcoef = exp(-1 / (0.5f * sample_rate)); float attime = control_ports[CONTROL_PORT_ATTACK_MS].value / 1000.0; float reltime = control_ports[CONTROL_PORT_RELEASE_MS].value / 1000.0; float atcoef = exp(-1 / (attime * sample_rate)); float relcoef = exp(-1 / (reltime * sample_rate)); float pre = db2linear(control_ports[CONTROL_PORT_PRE_GAIN_DB].value); float makeup = 
db2linear(control_ports[CONTROL_PORT_POST_GAIN_DB].value); float mix = control_ports[CONTROL_PORT_MIX].value; float gr_meter_decay = exp(1 / (1 * sample_rate)); float ratio = control_ports[CONTROL_PORT_RATIO].value; for (int i = 0; i < block_size; i++) { AudioFrame s = p_in[i]; s *= pre; //convert to positive s.l = abs(s.l); s.r = abs(s.r); float peak = MAX(s.l, s.r); float overdb = 2.08136898f * linear2db(peak / threshold); if (overdb < 0.0) //we only care about what goes over to compress overdb = 0.0; if (overdb - rundb > 5) // difference is too large averatio = 4; if (overdb > rundb) { rundb = overdb + atcoef * (rundb - overdb); runratio = averatio + ratatcoef * (runratio - averatio); } else { rundb = overdb + relcoef * (rundb - overdb); runratio = averatio + ratrelcoef * (runratio - averatio); } overdb = rundb; averatio = runratio; float cratio; if (false) { //ratio all-in cratio = 12 + averatio; } else { cratio = ratio; } float gr = -overdb * (cratio - 1) / cratio; float grv = db2linear(gr); runmax = maxover + relcoef * (runmax - maxover); // highest peak for setting att/rel decays in reltime maxover = runmax; if (grv < gr_meter) { gr_meter = grv; } else { gr_meter *= gr_meter_decay; if (gr_meter > 1) gr_meter = 1; } if (p_secondary) { p_out[i] = p_secondary[i] * grv * makeup * mix + p_secondary[i] * (1.0 - mix); p_out[i] += p_in[i]; } else { p_out[i] = p_in[i] * grv * makeup * mix + p_in[i] * (1.0 - mix); } } } void AudioEffectCompressor::set_process_block_size(int p_size) { block_size = p_size; } void AudioEffectCompressor::set_sampling_rate(int p_hz) { if (sampling_rate != p_hz) { sampling_rate = p_hz; _update_buffers(); } } //info String AudioEffectCompressor::get_name() const { return sidechain ? "Compressor (SideC)" : "Compressor"; } String AudioEffectCompressor::get_unique_id() const { return sidechain ? 
"sc_compressor" : "compressor"; } String AudioEffectCompressor::get_provider_id() const { return "internal"; } int AudioEffectCompressor::get_control_port_count() const { return CONTROL_PORT_MAX; } ControlPort *AudioEffectCompressor::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void AudioEffectCompressor::reset() { _update_buffers(); } /* Load/Save */ JSON::Node AudioEffectCompressor::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectCompressor::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectCompressor::_update_buffers() { rundb = 0; runratio = 0; averatio = 0; runmax = 0; maxover = 0; gr_meter = 1.0; current_channel = -1; } AudioEffectCompressor::AudioEffectCompressor(bool p_sidechain) { sidechain = p_sidechain; control_ports[CONTROL_PORT_PRE_GAIN_DB].name = "Pre Gain (db)"; control_ports[CONTROL_PORT_PRE_GAIN_DB].identifier = "pre_gain"; control_ports[CONTROL_PORT_PRE_GAIN_DB].min = -24; control_ports[CONTROL_PORT_PRE_GAIN_DB].max = 24; control_ports[CONTROL_PORT_PRE_GAIN_DB].step = 0.1; control_ports[CONTROL_PORT_PRE_GAIN_DB].value = 0; control_ports[CONTROL_PORT_THRESHOLD_DB].name = "Threshold (db)"; control_ports[CONTROL_PORT_THRESHOLD_DB].identifier = "threshold"; control_ports[CONTROL_PORT_THRESHOLD_DB].min = -60; control_ports[CONTROL_PORT_THRESHOLD_DB].max = 0; control_ports[CONTROL_PORT_THRESHOLD_DB].step = 0.1; control_ports[CONTROL_PORT_THRESHOLD_DB].value = -6; control_ports[CONTROL_PORT_RATIO].name = "Ratio"; control_ports[CONTROL_PORT_RATIO].identifier = "ratio"; control_ports[CONTROL_PORT_RATIO].max = 48; 
control_ports[CONTROL_PORT_RATIO].step = 0.1; control_ports[CONTROL_PORT_RATIO].value = 4; control_ports[CONTROL_PORT_ATTACK_MS].name = "Attack (ms)"; control_ports[CONTROL_PORT_ATTACK_MS].identifier = "attack"; control_ports[CONTROL_PORT_ATTACK_MS].min = 0.01; control_ports[CONTROL_PORT_ATTACK_MS].max = 20; control_ports[CONTROL_PORT_ATTACK_MS].step = 0.01; control_ports[CONTROL_PORT_ATTACK_MS].value = 0.2; control_ports[CONTROL_PORT_RELEASE_MS].name = "Release (ms)"; control_ports[CONTROL_PORT_RELEASE_MS].identifier = "release"; control_ports[CONTROL_PORT_RELEASE_MS].min = 20; control_ports[CONTROL_PORT_RELEASE_MS].max = 2000; control_ports[CONTROL_PORT_RELEASE_MS].step = 0.1; control_ports[CONTROL_PORT_RELEASE_MS].value = 250; control_ports[CONTROL_PORT_POST_GAIN_DB].name = "Post Gain (db)"; control_ports[CONTROL_PORT_POST_GAIN_DB].identifier = "post_gain"; control_ports[CONTROL_PORT_POST_GAIN_DB].min = -60; control_ports[CONTROL_PORT_POST_GAIN_DB].max = 24; control_ports[CONTROL_PORT_POST_GAIN_DB].step = 0.1; control_ports[CONTROL_PORT_POST_GAIN_DB].value = 0; control_ports[CONTROL_PORT_MIX].name = "Mix"; control_ports[CONTROL_PORT_MIX].identifier = "mix"; control_ports[CONTROL_PORT_MIX].max = 1; control_ports[CONTROL_PORT_MIX].step = 0.01; control_ports[CONTROL_PORT_MIX].value = 1; block_size = 128; sampling_rate = 0; set_sampling_rate(44100); set_process_block_size(128); _update_buffers(); } AudioEffectCompressor::~AudioEffectCompressor() { } zytrax-master/effects/internal/effect_compressor.h000066400000000000000000000027351347722000700230350ustar00rootroot00000000000000#ifndef EFFECT_COMPRESSOR_H #define EFFECT_COMPRESSOR_H #include "engine/audio_effect.h" class AudioEffectCompressor : public AudioEffect { float rundb, averatio, runratio, runmax, maxover, gr_meter; int current_channel; int block_size; int sampling_rate; enum ControlPorts { CONTROL_PORT_PRE_GAIN_DB, CONTROL_PORT_THRESHOLD_DB, CONTROL_PORT_RATIO, CONTROL_PORT_ATTACK_MS, 
CONTROL_PORT_RELEASE_MS, CONTROL_PORT_POST_GAIN_DB, CONTROL_PORT_MIX, CONTROL_PORT_MAX }; bool sidechain; ControlPortDefault control_ports[CONTROL_PORT_MAX]; void _update_buffers(); public: //process virtual bool has_secondary_input() const; virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active); virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active); virtual void set_process_block_size(int p_size); virtual void set_sampling_rate(int p_hz); //info virtual String get_name() const; virtual String get_unique_id() const; virtual String get_provider_id() const; virtual int get_control_port_count() const; virtual ControlPort *get_control_port(int p_port); virtual void reset(); /* Load/Save */ virtual JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectCompressor(bool p_sidechain); ~AudioEffectCompressor(); }; #endif // EFFECT_COMPRESSOR_H zytrax-master/effects/internal/effect_delay.cpp000066400000000000000000000242011347722000700222620ustar00rootroot00000000000000#include "effect_delay.h" #include "dsp/db.h" #include <math.h> //process bool AudioEffectDelay::has_secondary_input() const { return false; } void AudioEffectDelay::_process_chunk(const AudioFrame *p_src_frames, AudioFrame *p_dst_frames, int p_frame_count) { float mix_rate = sampling_rate; float tap_level_f[MAX_TAPS]; AudioFrame tap_pan_f[MAX_TAPS]; unsigned int tap_delay_frames[MAX_TAPS]; float tap_feedback_level_f[MAX_TAPS]; float tap_feedback_stereo[MAX_TAPS]; float lpf_c[MAX_TAPS]; float lpf_ic[MAX_TAPS]; AudioFrame *fb_buf[MAX_TAPS]; bool enabled[MAX_TAPS]; int stride = CONTROL_PORT_TAP2_ENABLED - CONTROL_PORT_TAP1_ENABLED; for (int i = 0; i < MAX_TAPS; i++) { int ofs = i * stride; enabled[i] = control_ports[CONTROL_PORT_TAP1_ENABLED + ofs].value > 0.5; float levelf = 
control_ports[CONTROL_PORT_TAP1_VOLUME + ofs].value; tap_level_f[i] = levelf; float pan = control_ports[CONTROL_PORT_TAP1_PAN + ofs].value; tap_pan_f[i].l = CLAMP(1.0 - pan, 0, 1); tap_pan_f[i].r = CLAMP(1.0 + pan, 0, 1); float delay_ms; if (bpm_sync) { int subdiv_idx = control_ports[CONTROL_PORT_TAP1_BEAT_SUBDIV + ofs].value; if (subdiv_idx < 0) { subdiv_idx = 0; } if (subdiv_idx >= BEAT_SUBDIV_MAX) { subdiv_idx = BEAT_SUBDIV_MAX - 1; } static const float subdiv_values[BEAT_SUBDIV_MAX] = { 1, 2, 3, 4, 6, 8, 12, 16 }; delay_ms = ((60000.0 / bpm) / subdiv_values[subdiv_idx]) * control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].value; } else { delay_ms = control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].value; } tap_delay_frames[i] = int((delay_ms / 1000.0) * mix_rate); tap_feedback_level_f[i] = control_ports[CONTROL_PORT_TAP1_FEEDBACK + ofs].value; tap_feedback_stereo[i] = control_ports[CONTROL_PORT_TAP1_FEEDBACK_STEREO_SWAP + ofs].value; lpf_c[i] = expf(-2.0 * M_PI * control_ports[CONTROL_PORT_TAP1_LPF + ofs].value / mix_rate); lpf_ic[i] = 1.0 - lpf_c[i]; fb_buf[i] = &taps[i].feedback_buffer[0]; } const AudioFrame *src = p_src_frames; AudioFrame *dst = p_dst_frames; AudioFrame *rb_buf = &ring_buffer[0]; for (int i = 0; i < p_frame_count; i++) { rb_buf[ring_buffer_pos & ring_buffer_mask] = src[i]; AudioFrame out = src[i]; for (int t = 0; t < MAX_TAPS; t++) { if (!enabled[t]) { continue; } AudioFrame tap_in = rb_buf[uint32_t(ring_buffer_pos - tap_delay_frames[t]) & ring_buffer_mask] * tap_pan_f[t]; tap_in += fb_buf[t][taps[t].feedback_buffer_pos]; tap_in *= tap_level_f[t]; tap_in = tap_in * lpf_ic[t] + taps[t].h * lpf_c[t]; taps[t].h = tap_in; out += tap_in; AudioFrame fb_in = tap_in * tap_feedback_level_f[t]; fb_in.undenormalise(); //fb stereo AudioFrame fb_in_aux = fb_in; fb_in.l = tap_feedback_stereo[t] * fb_in_aux.r + (1.0 - tap_feedback_stereo[t]) * fb_in_aux.l; fb_in.r = tap_feedback_stereo[t] * fb_in_aux.l + (1.0 - tap_feedback_stereo[t]) * fb_in_aux.r; 
fb_buf[t][taps[t].feedback_buffer_pos] = fb_in; if ((++taps[t].feedback_buffer_pos) >= tap_delay_frames[t]) { taps[t].feedback_buffer_pos = 0; } } dst[i] = out; ring_buffer_pos++; } } void AudioEffectDelay::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { for (int i = 0; i < p_event_count; i++) { if (p_events[i].type == Event::TYPE_BPM) { bpm = p_events[i].param8; } } int todo = block_size; while (todo) { int to_mix = MIN(todo, 256); //can't mix too much _process_chunk(p_in, p_out, to_mix); p_in += to_mix; p_out += to_mix; todo -= to_mix; } } void AudioEffectDelay::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { } void AudioEffectDelay::set_process_block_size(int p_size) { block_size = p_size; } void AudioEffectDelay::set_sampling_rate(int p_hz) { if (sampling_rate != p_hz) { sampling_rate = p_hz; _update_buffers(); } } //info String AudioEffectDelay::get_name() const { return bpm_sync ? "Delay (BPM)" : "Delay"; } String AudioEffectDelay::get_unique_id() const { return bpm_sync ? 
"bpm_delay" : "delay"; } String AudioEffectDelay::get_provider_id() const { return "internal"; } int AudioEffectDelay::get_control_port_count() const { return CONTROL_PORT_MAX; } ControlPort *AudioEffectDelay::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void AudioEffectDelay::reset() { _update_buffers(); } /* Load/Save */ JSON::Node AudioEffectDelay::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectDelay::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectDelay::_update_buffers() { float ring_buffer_max_size = MAX_DELAY_MS + 100; //add 100ms of extra room, just in case ring_buffer_max_size /= 1000.0; //convert to seconds ring_buffer_max_size *= sampling_rate; int ringbuff_size = ring_buffer_max_size; int bits = 0; while (ringbuff_size > 0) { bits++; ringbuff_size /= 2; } ringbuff_size = 1 << bits; ring_buffer_mask = ringbuff_size - 1; ring_buffer_pos = 0; ring_buffer.resize(ringbuff_size); for (int i = 0; i < MAX_TAPS; i++) { taps[i].feedback_buffer.resize(ringbuff_size); taps[i].feedback_buffer_pos = 0; taps[i].h = AudioFrame(0, 0); } } AudioEffectDelay::AudioEffectDelay(bool p_bpm_sync) { bpm_sync = p_bpm_sync; int stride = CONTROL_PORT_TAP2_ENABLED - CONTROL_PORT_TAP1_ENABLED; for (int i = 0; i < 4; i++) { String pp = "Tap " + String::num(i + 1) + " "; String ppid = "tap_" + String::num(i + 0) + "_"; int ofs = i * stride; control_ports[CONTROL_PORT_TAP1_ENABLED + ofs].name = pp + "Enabled"; control_ports[CONTROL_PORT_TAP1_ENABLED + ofs].identifier = ppid + "enabled"; control_ports[CONTROL_PORT_TAP1_ENABLED + ofs].hint = 
ControlPort::HINT_TOGGLE; control_ports[CONTROL_PORT_TAP1_ENABLED + ofs].max = 1; control_ports[CONTROL_PORT_TAP1_ENABLED + ofs].value = i == 0 ? 1 : 0; static const char *beat_subdiv_str[BEAT_SUBDIV_MAX] = { "1 Beat", "1/2 Beat", "1/3 Beat", "1/4 Beat", "1/6 Beat", "1/8 Beat", "1/12 Beat", "1/16 Beat" }; control_ports[CONTROL_PORT_TAP1_BEAT_SUBDIV + ofs].name = pp + "Unit Size"; control_ports[CONTROL_PORT_TAP1_BEAT_SUBDIV + ofs].identifier = ppid + "unit_size"; control_ports[CONTROL_PORT_TAP1_BEAT_SUBDIV + ofs].max = BEAT_SUBDIV_MAX - 1; control_ports[CONTROL_PORT_TAP1_BEAT_SUBDIV + ofs].value = 1; control_ports[CONTROL_PORT_TAP1_BEAT_SUBDIV + ofs].step = 1; control_ports[CONTROL_PORT_TAP1_BEAT_SUBDIV + ofs].hint = ControlPort::HINT_ENUM; for (int j = 0; j < BEAT_SUBDIV_MAX; j++) { control_ports[CONTROL_PORT_TAP1_BEAT_SUBDIV + ofs].enum_values.push_back(beat_subdiv_str[j]); } if (!bpm_sync) { control_ports[CONTROL_PORT_TAP1_BEAT_SUBDIV + ofs].visible = false; } if (bpm_sync) { control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].name = pp + "Delay (units)"; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].identifier = ppid + "delay"; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].min = 1; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].max = MAX_DELAY_UNITS; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].value = 1 + i; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].step = 1; } else { control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].name = pp + "Delay (ms)"; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].identifier = ppid + "delay"; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].min = 1; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].max = MAX_DELAY_MS; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].value = 100 + 100 * i; control_ports[CONTROL_PORT_TAP1_DELAY_UNITS + ofs].step = 1; } control_ports[CONTROL_PORT_TAP1_VOLUME + ofs].name = pp + "Volume"; control_ports[CONTROL_PORT_TAP1_VOLUME + ofs].identifier = ppid + "volume"; 
control_ports[CONTROL_PORT_TAP1_VOLUME + ofs].min = 0; control_ports[CONTROL_PORT_TAP1_VOLUME + ofs].max = 0.99; control_ports[CONTROL_PORT_TAP1_VOLUME + ofs].value = 0.8; control_ports[CONTROL_PORT_TAP1_VOLUME + ofs].step = 0.01; control_ports[CONTROL_PORT_TAP1_FEEDBACK + ofs].name = pp + "Feedback"; control_ports[CONTROL_PORT_TAP1_FEEDBACK + ofs].identifier = ppid + "feedback"; control_ports[CONTROL_PORT_TAP1_FEEDBACK + ofs].min = 0; control_ports[CONTROL_PORT_TAP1_FEEDBACK + ofs].max = 1; control_ports[CONTROL_PORT_TAP1_FEEDBACK + ofs].value = 0; control_ports[CONTROL_PORT_TAP1_FEEDBACK + ofs].step = 0.01; control_ports[CONTROL_PORT_TAP1_LPF + ofs].name = pp + "Feedback LPF (hz)"; control_ports[CONTROL_PORT_TAP1_LPF + ofs].identifier = ppid + "feedback_lpf"; control_ports[CONTROL_PORT_TAP1_LPF + ofs].min = 1; control_ports[CONTROL_PORT_TAP1_LPF + ofs].max = MS_CUTOFF_MAX; control_ports[CONTROL_PORT_TAP1_LPF + ofs].value = 8000; control_ports[CONTROL_PORT_TAP1_LPF + ofs].step = 1; control_ports[CONTROL_PORT_TAP1_FEEDBACK_STEREO_SWAP + ofs].name = pp + "Feedback Stereo Swap"; control_ports[CONTROL_PORT_TAP1_FEEDBACK_STEREO_SWAP + ofs].identifier = ppid + "feedback_sswap"; control_ports[CONTROL_PORT_TAP1_FEEDBACK_STEREO_SWAP + ofs].min = 0; control_ports[CONTROL_PORT_TAP1_FEEDBACK_STEREO_SWAP + ofs].max = 1; control_ports[CONTROL_PORT_TAP1_FEEDBACK_STEREO_SWAP + ofs].value = 0; control_ports[CONTROL_PORT_TAP1_FEEDBACK_STEREO_SWAP + ofs].step = 0.01; control_ports[CONTROL_PORT_TAP1_PAN + ofs].name = pp + "Pan"; control_ports[CONTROL_PORT_TAP1_PAN + ofs].identifier = ppid + "pan"; control_ports[CONTROL_PORT_TAP1_PAN + ofs].min = -1; control_ports[CONTROL_PORT_TAP1_PAN + ofs].max = 1; control_ports[CONTROL_PORT_TAP1_PAN + ofs].value = 0; control_ports[CONTROL_PORT_TAP1_PAN + ofs].step = 0.01; } block_size = 128; sampling_rate = 0; set_sampling_rate(44100); set_process_block_size(128); _update_buffers(); bpm = 125; } AudioEffectDelay::~AudioEffectDelay() { } 
zytrax-master/effects/internal/effect_delay.h000066400000000000000000000053701347722000700217350ustar00rootroot00000000000000#ifndef EFFECT_DELAY_H #define EFFECT_DELAY_H #include "engine/audio_effect.h" class AudioEffectDelay : public AudioEffect { int block_size; int sampling_rate; bool bpm_sync; float bpm; enum { MAX_DELAY_UNITS = 48, MAX_DELAY_MS = 4000, MS_CUTOFF_MAX = 16000, MAX_TAPS = 4 }; Vector<AudioFrame> ring_buffer; unsigned int ring_buffer_pos; unsigned int ring_buffer_mask; struct Tap { AudioFrame h; Vector<AudioFrame> feedback_buffer; unsigned int feedback_buffer_pos; }; void _process_chunk(const AudioFrame *p_src_frames, AudioFrame *p_dst_frames, int p_frame_count); Tap taps[MAX_TAPS]; enum BeatSubdiv { BEAT_SUBDIV_1, BEAT_SUBDIV_2, BEAT_SUBDIV_3, BEAT_SUBDIV_4, BEAT_SUBDIV_6, BEAT_SUBDIV_8, BEAT_SUBDIV_12, BEAT_SUBDIV_16, BEAT_SUBDIV_MAX }; enum ControlPorts { CONTROL_PORT_TAP1_ENABLED, CONTROL_PORT_TAP1_BEAT_SUBDIV, CONTROL_PORT_TAP1_DELAY_UNITS, CONTROL_PORT_TAP1_VOLUME, CONTROL_PORT_TAP1_FEEDBACK, CONTROL_PORT_TAP1_FEEDBACK_STEREO_SWAP, CONTROL_PORT_TAP1_LPF, CONTROL_PORT_TAP1_PAN, CONTROL_PORT_TAP2_ENABLED, CONTROL_PORT_TAP2_BEAT_SUBDIV, CONTROL_PORT_TAP2_DELAY_UNITS, CONTROL_PORT_TAP2_VOLUME, CONTROL_PORT_TAP2_FEEDBACK, CONTROL_PORT_TAP2_FEEDBACK_STEREO_SWAP, CONTROL_PORT_TAP2_LPF, CONTROL_PORT_TAP2_PAN, CONTROL_PORT_TAP3_ENABLED, CONTROL_PORT_TAP3_BEAT_SUBDIV, CONTROL_PORT_TAP3_DELAY_UNITS, CONTROL_PORT_TAP3_VOLUME, CONTROL_PORT_TAP3_FEEDBACK, CONTROL_PORT_TAP3_FEEDBACK_STEREO_SWAP, CONTROL_PORT_TAP3_LPF, CONTROL_PORT_TAP3_PAN, CONTROL_PORT_TAP4_ENABLED, CONTROL_PORT_TAP4_BEAT_SUBDIV, CONTROL_PORT_TAP4_DELAY_UNITS, CONTROL_PORT_TAP4_VOLUME, CONTROL_PORT_TAP4_FEEDBACK, CONTROL_PORT_TAP4_FEEDBACK_STEREO_SWAP, CONTROL_PORT_TAP4_LPF, CONTROL_PORT_TAP4_PAN, CONTROL_PORT_MAX }; ControlPortDefault control_ports[CONTROL_PORT_MAX]; void _update_buffers(); public: //process virtual bool has_secondary_input() const; virtual void process(const Event *p_events, int 
p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active); virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active); virtual void set_process_block_size(int p_size); virtual void set_sampling_rate(int p_hz); //info virtual String get_name() const; virtual String get_unique_id() const; virtual String get_provider_id() const; virtual int get_control_port_count() const; virtual ControlPort *get_control_port(int p_port); virtual void reset(); /* Load/Save */ virtual JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectDelay(bool p_bpm_sync); ~AudioEffectDelay(); }; #endif // EFFECT_DELAY_H zytrax-master/effects/internal/effect_equalizer.cpp000066400000000000000000000063121347722000700231700ustar00rootroot00000000000000#include "effect_equalizer.h" #include "dsp/db.h" #include <math.h> //process bool AudioEffectEqualizer::has_secondary_input() const { return false; } void AudioEffectEqualizer::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { int band_count = bands[0].size(); EQ::BandProcess *proc_l = &bands[0][0]; EQ::BandProcess *proc_r = &bands[1][0]; float *bgain = &gains[0]; for (int i = 0; i < band_count; i++) { bgain[i] = db2linear(control_ports[i].value); } for (int i = 0; i < block_size; i++) { AudioFrame src = p_in[i]; AudioFrame dst = AudioFrame(0, 0); for (int j = 0; j < band_count; j++) { float l = src.l; float r = src.r; proc_l[j].process_one(l); proc_r[j].process_one(r); dst.l += l * bgain[j]; dst.r += r * bgain[j]; } p_out[i] = dst; } } void AudioEffectEqualizer::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { } void AudioEffectEqualizer::set_process_block_size(int p_size) { block_size = p_size; } void 
AudioEffectEqualizer::set_sampling_rate(int p_hz) { if (sampling_rate != p_hz) { sampling_rate = p_hz; _update_buffers(); } } //info String AudioEffectEqualizer::get_name() const { return "EQ (" + String::num(eq.get_band_count()) + " Band)"; } String AudioEffectEqualizer::get_unique_id() const { return "eq_" + String::num(eq.get_band_count()); } String AudioEffectEqualizer::get_provider_id() const { return "internal"; } int AudioEffectEqualizer::get_control_port_count() const { return control_ports.size(); } ControlPort *AudioEffectEqualizer::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, control_ports.size(), NULL); return &control_ports[p_port]; } void AudioEffectEqualizer::reset() { _update_buffers(); } /* Load/Save */ JSON::Node AudioEffectEqualizer::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < control_ports.size(); i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectEqualizer::from_json(const JSON::Node &node) { for (int i = 0; i < control_ports.size(); i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectEqualizer::_update_buffers() { eq.set_mix_rate(sampling_rate); for (int i = 0; i < 2; i++) { bands[i].clear(); for (int j = 0; j < eq.get_band_count(); j++) { bands[i].push_back(eq.get_band_processor(j)); } } gains.resize(eq.get_band_count()); } AudioEffectEqualizer::AudioEffectEqualizer(EQ::Preset p_preset) { eq.set_preset_band_mode(p_preset); for (int i = 0; i < eq.get_band_count(); i++) { ControlPortDefault port; port.name = String::num(eq.get_band_frequency(i), 0) + " hz (db)"; port.identifier = "band_" + String::num(i); port.min = -48; port.max = 12; port.step = 0.1; port.value = 0; control_ports.push_back(port); } block_size = 128; sampling_rate = 0; set_sampling_rate(44100); set_process_block_size(128); _update_buffers(); } 
AudioEffectEqualizer::~AudioEffectEqualizer() { } zytrax-master/effects/internal/effect_equalizer.h000066400000000000000000000023771347722000700226440ustar00rootroot00000000000000#ifndef EFFECT_EQUALIZER_H #define EFFECT_EQUALIZER_H #include "effects/internal/eq.h" #include "engine/audio_effect.h" class AudioEffectEqualizer : public AudioEffect { int block_size; int sampling_rate; EQ eq; Vector<EQ::BandProcess> bands[2]; Vector<ControlPortDefault> control_ports; Vector<float> gains; void _update_buffers(); public: //process virtual bool has_secondary_input() const; virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active); virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active); virtual void set_process_block_size(int p_size); virtual void set_sampling_rate(int p_hz); //info virtual String get_name() const; virtual String get_unique_id() const; virtual String get_provider_id() const; virtual int get_control_port_count() const; virtual ControlPort *get_control_port(int p_port); virtual void reset(); /* Load/Save */ virtual JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectEqualizer(EQ::Preset p_preset = EQ::PRESET_6_BANDS); ~AudioEffectEqualizer(); }; #endif // EFFECT_EQUALIZER_H zytrax-master/effects/internal/effect_filter.cpp000066400000000000000000000132511347722000700224540ustar00rootroot00000000000000#include "effect_filter.h" #include "dsp/db.h" #include <math.h> //process bool AudioEffectFilter::has_secondary_input() const { return false; } template <int S> void AudioEffectFilter::_process_filter(const AudioFrame *p_src_frames, AudioFrame *p_dst_frames, int p_frame_count) { for (int i = 0; i < p_frame_count; i++) { float f = p_src_frames[i].l; filter_process[0][0].process_one(f); if (S > 1) filter_process[0][1].process_one(f); if (S > 2) filter_process[0][2].process_one(f); if (S > 3) 
filter_process[0][3].process_one(f); p_dst_frames[i].l = f; } for (int i = 0; i < p_frame_count; i++) { float f = p_src_frames[i].r; filter_process[1][0].process_one(f); if (S > 1) filter_process[1][1].process_one(f); if (S > 2) filter_process[1][2].process_one(f); if (S > 3) filter_process[1][3].process_one(f); p_dst_frames[i].r = f; } } void AudioEffectFilter::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { filter.set_cutoff(control_ports[CONTROL_PORT_CUTOFF_HZ].value); filter.set_gain(control_ports[CONTROL_PORT_GAIN].value); filter.set_resonance(control_ports[CONTROL_PORT_RESONANCE].value); filter.set_mode(filter_mode); int stages = int(control_ports[CONTROL_PORT_DB].value) + 1; filter.set_stages(stages); filter.set_sampling_rate(sampling_rate); for (int i = 0; i < 2; i++) { for (int j = 0; j < 4; j++) { filter_process[i][j].update_coeffs(); } } if (stages == 1) { _process_filter<1>(p_in, p_out, block_size); } else if (stages == 2) { _process_filter<2>(p_in, p_out, block_size); } else if (stages == 3) { _process_filter<3>(p_in, p_out, block_size); } else if (stages == 4) { _process_filter<4>(p_in, p_out, block_size); } } void AudioEffectFilter::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { } void AudioEffectFilter::set_process_block_size(int p_size) { block_size = p_size; } void AudioEffectFilter::set_sampling_rate(int p_hz) { if (sampling_rate != p_hz) { sampling_rate = p_hz; _update_buffers(); } } //info String AudioEffectFilter::get_name() const { static const char *mode_names[] = { "BandPass", "HighPass", "LowPass", "Notch", "Peak", "BandLimit", "LowShelf", "HighShelf" }; return mode_names[filter_mode]; } String AudioEffectFilter::get_unique_id() const { static const char *mode_names[] = { "filter_band_pass", "filter_high_pass", "filter_low_pass", "filter_notch", "filter_peak", 
"filter_band_limit", "filter_low_shelf", "filter_high_shelf" }; return mode_names[filter_mode]; } String AudioEffectFilter::get_provider_id() const { return "internal"; } int AudioEffectFilter::get_control_port_count() const { return CONTROL_PORT_MAX; } ControlPort *AudioEffectFilter::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void AudioEffectFilter::reset() { _update_buffers(); } /* Load/Save */ JSON::Node AudioEffectFilter::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectFilter::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectFilter::_update_buffers() { for (int i = 0; i < 2; i++) { for (int j = 0; j < 4; j++) { filter_process[i][j].clear(); } } } AudioEffectFilter::AudioEffectFilter(Filter::Mode p_filter_mode) { filter_mode = p_filter_mode; control_ports[CONTROL_PORT_CUTOFF_HZ].name = "Cutoff (hz)"; control_ports[CONTROL_PORT_CUTOFF_HZ].identifier = "cutoff"; control_ports[CONTROL_PORT_CUTOFF_HZ].min = 10; control_ports[CONTROL_PORT_CUTOFF_HZ].max = 20000; control_ports[CONTROL_PORT_CUTOFF_HZ].step = 1; control_ports[CONTROL_PORT_CUTOFF_HZ].value = 2000; control_ports[CONTROL_PORT_RESONANCE].name = "Resonance"; control_ports[CONTROL_PORT_RESONANCE].identifier = "resonance"; control_ports[CONTROL_PORT_RESONANCE].min = 0; control_ports[CONTROL_PORT_RESONANCE].max = 1; control_ports[CONTROL_PORT_RESONANCE].step = 0.01; control_ports[CONTROL_PORT_RESONANCE].value = 0.5; control_ports[CONTROL_PORT_GAIN].name = "Gain"; control_ports[CONTROL_PORT_GAIN].identifier = "gain"; control_ports[CONTROL_PORT_GAIN].min = 0; 
	control_ports[CONTROL_PORT_GAIN].max = 4;
	control_ports[CONTROL_PORT_GAIN].step = 0.01;
	control_ports[CONTROL_PORT_GAIN].value = 1.0;

	if (filter_mode == Filter::LOWPASS || filter_mode == Filter::HIGHPASS || filter_mode == Filter::BANDPASS) {
		control_ports[CONTROL_PORT_GAIN].visible = false;
	}

	control_ports[CONTROL_PORT_DB].name = "Strength";
	control_ports[CONTROL_PORT_DB].identifier = "strength";
	control_ports[CONTROL_PORT_DB].min = 0;
	control_ports[CONTROL_PORT_DB].max = 3;
	control_ports[CONTROL_PORT_DB].step = 1;
	control_ports[CONTROL_PORT_DB].value = 0;
	control_ports[CONTROL_PORT_DB].hint = ControlPort::HINT_ENUM;
	control_ports[CONTROL_PORT_DB].enum_values.push_back("6 dB");
	control_ports[CONTROL_PORT_DB].enum_values.push_back("12 dB");
	control_ports[CONTROL_PORT_DB].enum_values.push_back("18 dB");
	control_ports[CONTROL_PORT_DB].enum_values.push_back("24 dB");

	for (int i = 0; i < 2; i++) {
		for (int j = 0; j < 4; j++) {
			filter_process[i][j].set_filter(&filter);
		}
	}

	block_size = 128;
	sampling_rate = 0;
	set_sampling_rate(44100);
	set_process_block_size(128);
	_update_buffers();
}

AudioEffectFilter::~AudioEffectFilter() {
}

zytrax-master/effects/internal/effect_filter.h

#ifndef EFFECT_FILTER_H
#define EFFECT_FILTER_H

#include "dsp/filter.h"
#include "engine/audio_effect.h"

class AudioEffectFilter : public AudioEffect {
	Filter::Mode filter_mode;
	int block_size;
	int sampling_rate;

	enum ControlPorts {
		CONTROL_PORT_CUTOFF_HZ,
		CONTROL_PORT_RESONANCE,
		CONTROL_PORT_GAIN,
		CONTROL_PORT_DB,
		CONTROL_PORT_MAX
	};

	ControlPortDefault control_ports[CONTROL_PORT_MAX];
	void _update_buffers();

	Filter filter;
	Filter::Processor filter_process[2][4];

	template <int S>
	void _process_filter(const AudioFrame *p_src_frames, AudioFrame *p_dst_frames, int p_frame_count);

public:
	//process
	virtual bool has_secondary_input() const;
	virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame
*p_out, bool p_prev_active);
	virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active);
	virtual void set_process_block_size(int p_size);
	virtual void set_sampling_rate(int p_hz);
	//info
	virtual String get_name() const;
	virtual String get_unique_id() const;
	virtual String get_provider_id() const;
	virtual int get_control_port_count() const;
	virtual ControlPort *get_control_port(int p_port);
	virtual void reset();
	/* Load/Save */
	virtual JSON::Node to_json() const;
	virtual Error from_json(const JSON::Node &node);

	AudioEffectFilter(Filter::Mode p_filter_mode);
	~AudioEffectFilter();
};

#endif // EFFECT_FILTER_H

zytrax-master/effects/internal/effect_limiter.cpp

#include "effect_limiter.h"
#include "dsp/db.h"
#include <math.h>

#if 0

//process
bool AudioEffectLimiter::has_secondary_input() const {
	return false;
}

void AudioEffectLimiter::_process_chunk(const AudioFrame *p_src_frames, AudioFrame *p_dst_frames, int p_frame_count) {
	float base_wet = control_ports[CONTROL_PORT_WET].value;
	float base_dry = control_ports[CONTROL_PORT_DRY].value;

	//fill ringbuffer
	for (int i = 0; i < p_frame_count; i++) {
		audio_buffer[(buffer_pos + i) & buffer_mask] = p_src_frames[i];
		p_dst_frames[i] = p_src_frames[i] * base_dry;
	}

	float mix_rate = sampling_rate;

	/* process voices */
	int stride = CONTROL_PORT_VOICE2_ENABLED - CONTROL_PORT_VOICE1_ENABLED;
	for (int vc = 0; vc < 4; vc++) {
		int ofs = vc * stride;

		if (control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].value < 0.5) {
			continue; //voice disabled;
		}

		float v_rate = control_ports[CONTROL_PORT_VOICE1_HZ + ofs].value;
		float v_delay = control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].value;
		float v_depth = control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].value;
		float v_cutoff = control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].value;
		float v_pan =
control_ports[CONTROL_PORT_VOICE1_PAN + ofs].value; float v_level = control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].value; double time_to_mix = (float)p_frame_count / mix_rate; double cycles_to_mix = time_to_mix * v_rate; unsigned int local_rb_pos = buffer_pos; AudioFrame *dst_buff = p_dst_frames; AudioFrame *rb_buff = &audio_buffer[0]; double delay_msec = v_delay; unsigned int delay_frames = llrint((delay_msec / 1000.0) * mix_rate); float max_depth_frames = (v_depth / 1000.0) * mix_rate; uint64_t local_cycles = cycles[vc]; uint64_t increment = llrint(cycles_to_mix / (double)p_frame_count * (double)(1 << AudioEffectLimiter::CYCLES_FRAC)); //check the LFO doesn't read ahead of the write pos if ((((unsigned int)max_depth_frames) + 10) > delay_frames) { //10 as some threshold to avoid precision stuff delay_frames += (int)max_depth_frames - delay_frames; delay_frames += 10; //threshold to avoid precision stuff } //low pass filter if (v_cutoff == 0) continue; float auxlp = expf(-2.0 * M_PI * v_cutoff / mix_rate); float c1 = 1.0 - auxlp; float c2 = auxlp; AudioFrame h = filter_h[vc]; if (v_cutoff >= AudioEffectLimiter::MS_CUTOFF_MAX) { c1 = 1.0; c2 = 0.0; } //vol modifier AudioFrame vol_modifier = AudioFrame(base_wet, base_wet) * db2linear(v_level); vol_modifier.l *= CLAMP(1.0 - v_pan, 0, 1); vol_modifier.r *= CLAMP(1.0 + v_pan, 0, 1); for (int i = 0; i < p_frame_count; i++) { /** COMPUTE WAVEFORM **/ float phase = (float)(local_cycles & AudioEffectLimiter::CYCLES_MASK) / (float)(1 << AudioEffectLimiter::CYCLES_FRAC); float wave_delay = sinf(phase * 2.0 * M_PI) * max_depth_frames; int wave_delay_frames = lrint(floor(wave_delay)); float wave_delay_frac = wave_delay - (float)wave_delay_frames; /** COMPUTE RINGBUFFER POS**/ unsigned int rb_source = local_rb_pos; rb_source -= delay_frames; rb_source -= wave_delay_frames; /** READ FROM RINGBUFFER, LINEARLY INTERPOLATE */ AudioFrame val = rb_buff[rb_source & buffer_mask]; AudioFrame val_next = rb_buff[(rb_source - 1) & 
buffer_mask];
			val += (val_next - val) * wave_delay_frac;
			val = val * c1 + h * c2;
			h = val;

			/** MIX VALUE TO OUTPUT **/
			dst_buff[i] += val * vol_modifier;

			local_cycles += increment;
			local_rb_pos++;
		}

		filter_h[vc] = h;
		cycles[vc] += lrint(cycles_to_mix * (double)(1 << AudioEffectLimiter::CYCLES_FRAC));
	}

	buffer_pos += p_frame_count;
}

void AudioEffectLimiter::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) {
	int todo = block_size;
	while (todo) {
		int to_mix = MIN(todo, 256); //can't mix too much
		_process_chunk(p_in, p_out, to_mix);
		p_in += to_mix;
		p_out += to_mix;
		todo -= to_mix;
	}
}

void AudioEffectLimiter::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) {
}

void AudioEffectLimiter::set_process_block_size(int p_size) {
	block_size = p_size;
}

void AudioEffectLimiter::set_sampling_rate(int p_hz) {
	if (sampling_rate != p_hz) {
		sampling_rate = p_hz;
		_update_buffers();
	}
}

//info
String AudioEffectLimiter::get_name() const {
	return "Limiter";
}
String AudioEffectLimiter::get_unique_id() const {
	return "limiter";
}
String AudioEffectLimiter::get_provider_id() const {
	return "internal";
}

int AudioEffectLimiter::get_control_port_count() const {
	return CONTROL_PORT_MAX;
}
ControlPort *AudioEffectLimiter::get_control_port(int p_port) {
	ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL);
	return &control_ports[p_port];
}

void AudioEffectLimiter::reset() {
	_update_buffers();
}

/* Load/Save */

JSON::Node AudioEffectLimiter::to_json() const {
	JSON::Node node = JSON::object();
	for (int i = 0; i < CONTROL_PORT_MAX; i++) {
		node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value);
	}
	return node;
}

Error AudioEffectLimiter::from_json(const JSON::Node &node) {
	for (int i = 0; i < CONTROL_PORT_MAX; i++) {
		std::string key = control_ports[i].identifier.utf8().get_data();
		if (node.has(key)) {
control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectLimiter::_update_buffers() { float ring_buffer_max_size = AudioEffectLimiter::MAX_DELAY_MS + AudioEffectLimiter::MAX_DEPTH_MS + AudioEffectLimiter::MAX_WIDTH_MS; ring_buffer_max_size *= 2; //just to avoid complications ring_buffer_max_size /= 1000.0; //convert to seconds ring_buffer_max_size *= sampling_rate; int ringbuff_size = ring_buffer_max_size; int bits = 0; while (ringbuff_size > 0) { bits++; ringbuff_size /= 2; } ringbuff_size = 1 << bits; buffer_mask = ringbuff_size - 1; buffer_pos = 0; audio_buffer.resize(ringbuff_size); for (int i = 0; i < ringbuff_size; i++) { audio_buffer[i] = AudioFrame(0, 0); } } AudioEffectLimiter::AudioEffectLimiter() { control_ports[CONTROL_PORT_DRY].name = "Dry"; control_ports[CONTROL_PORT_DRY].identifier = "dry"; control_ports[CONTROL_PORT_DRY].min = 0; control_ports[CONTROL_PORT_DRY].max = 1; control_ports[CONTROL_PORT_DRY].step = 0.01; control_ports[CONTROL_PORT_DRY].value = 1; control_ports[CONTROL_PORT_WET].name = "Wet"; control_ports[CONTROL_PORT_WET].identifier = "wet"; control_ports[CONTROL_PORT_WET].min = 0; control_ports[CONTROL_PORT_WET].max = 1; control_ports[CONTROL_PORT_WET].step = 0.01; control_ports[CONTROL_PORT_WET].value = 0.5; int stride = CONTROL_PORT_VOICE2_ENABLED - CONTROL_PORT_VOICE1_ENABLED; for (int i = 0; i < 4; i++) { String pp = "Voice " + String::num(i + 1) + " "; String ppid = "voice_" + String::num(i + 0) + "_"; int ofs = i * stride; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].name = pp + "Enabled"; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].identifier = ppid + "enabled"; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].hint = ControlPort::HINT_TOGGLE; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].max = 1; control_ports[CONTROL_PORT_VOICE1_ENABLED + ofs].value = i < 2 ? 
1 : 0; ; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].name = pp + "Delay (ms)"; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].identifier = ppid + "delay"; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].max = MAX_DELAY_MS; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].value = 15 + i * 5; control_ports[CONTROL_PORT_VOICE1_DELAY + ofs].step = 0.1; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].name = pp + "Rate (hz)"; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].identifier = ppid + "rate"; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].min = 0.01; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].max = 20; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].value = 0.8 + i * 0.4; control_ports[CONTROL_PORT_VOICE1_HZ + ofs].step = 0.01; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].name = pp + "Depth (ms)"; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].identifier = ppid + "depth"; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].max = MAX_DEPTH_MS; ; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].value = 2 + i; control_ports[CONTROL_PORT_VOICE1_DEPTH_MS + ofs].step = 0.1; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].name = pp + "Level (db)"; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].identifier = ppid + "level"; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].min = -60; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].max = 0; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].value = -2; control_ports[CONTROL_PORT_VOICE1_LEVEL_DB + ofs].step = 0.1; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].name = pp + "Cutoff (hz)"; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].identifier = ppid + "cutoff"; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].min = 1; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].max = MS_CUTOFF_MAX; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].value = 8000; control_ports[CONTROL_PORT_VOICE1_CUTOFF_HZ + ofs].step = 1; control_ports[CONTROL_PORT_VOICE1_PAN + ofs].name = pp + "Pan"; control_ports[CONTROL_PORT_VOICE1_PAN + 
ofs].identifier = ppid + "pan"; control_ports[CONTROL_PORT_VOICE1_PAN + ofs].min = -1; control_ports[CONTROL_PORT_VOICE1_PAN + ofs].max = 1; control_ports[CONTROL_PORT_VOICE1_PAN + ofs].value = 0.5 * ((i & 1) ? -1.0 : 1.0); control_ports[CONTROL_PORT_VOICE1_PAN + ofs].step = 0.01; } block_size = 128; sampling_rate = 0; set_sampling_rate(44100); set_process_block_size(128); _update_buffers(); } AudioEffectLimiter::~AudioEffectLimiter() { } #endif zytrax-master/effects/internal/effect_limiter.h000066400000000000000000000023611347722000700223010ustar00rootroot00000000000000#ifndef EFFECT_LIMITER_H #define EFFECT_LIMITER_H #include "engine/audio_effect.h" #if 0 class AudioEffectLimiter : public AudioEffect { enum ControlPorts { CONTROL_PORT_THRESHOLD_DB, CONTROL_PORT_CEILING_DB, CONTROL_PORT_SOFT_CLIP_DB, CONTROL_PORT_RATIO, CONTROL_PORT_MAX }; ControlPortDefault control_ports[CONTROL_PORT_MAX]; void _update_buffers(); public: //process virtual bool has_secondary_input() const; virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active); virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active); virtual void set_process_block_size(int p_size); virtual void set_sampling_rate(int p_hz); //info virtual String get_name() const; virtual String get_unique_id() const; virtual String get_provider_id() const; virtual int get_control_port_count() const; virtual ControlPort *get_control_port(int p_port); virtual void reset(); /* Load/Save */ virtual JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectLimiter(); ~AudioEffectLimiter(); }; #endif #endif // EFFECT_LIMITER_H zytrax-master/effects/internal/effect_note_puncher.cpp000066400000000000000000000112711347722000700236600ustar00rootroot00000000000000#include "effect_note_puncher.h" #include "dsp/db.h" #include 
<math.h>
//process bool AudioEffectNotePuncher::has_secondary_input() const { return false; } void AudioEffectNotePuncher::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { process_with_secondary(p_events, p_event_count, p_in, NULL, p_out, p_prev_active); } void AudioEffectNotePuncher::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { int attack_samples = (control_ports[CONTROL_PORT_ATTACK_MS].value / 1000.0) * sampling_rate; int decay_samples = (control_ports[CONTROL_PORT_DECAY_MS].value / 1000.0) * sampling_rate; float amplify_db = control_ports[CONTROL_PORT_PUNCH_DB].value; //check notes for (int i = 0; i < p_event_count; i++) { if (p_events[i].type == Event::TYPE_NOTE) { //launch an envelope for (int j = 0; j < MAX_PUNCHES; j++) { if (!punches[j].active) { punches[j].active = true; punches[j].time = 0; break; } } } } //process envelope float *envelope = &envelope_block[0]; for (int i = 0; i < block_size; i++) { envelope[i] = -1.0; // if negative, this was unused (will convert to 1.0 later) } for (int j = 0; j < MAX_PUNCHES; j++) { if (!punches[j].active) { continue; } for (int i = 0; i < block_size; i++) { float amp = 1.0; if (punches[j].time < attack_samples) { amp = float(punches[j].time) / float(attack_samples); } else { int time = punches[j].time - attack_samples; if (time < decay_samples) { amp = 1.0 - float(time) / float(decay_samples); } else { punches[j].active = false; break; } } amp = db2linear(amplify_db * amp); //printf("%i att %i dec %i - %f\n", punches[j].time, attack_samples, decay_samples, amp); envelope[i] = MAX(envelope[i], amp); punches[j].time++; } } for (int i = 0; i < block_size; i++) { float amp = envelope[i] >= 0 ? 
envelope[i] : 1.0; p_out[i] = p_in[i] * amp; } } void AudioEffectNotePuncher::set_process_block_size(int p_size) { block_size = p_size; } void AudioEffectNotePuncher::set_sampling_rate(int p_hz) { if (sampling_rate != p_hz) { sampling_rate = p_hz; _update_buffers(); } } //info String AudioEffectNotePuncher::get_name() const { return "NotePuncher"; } String AudioEffectNotePuncher::get_unique_id() const { return "note_puncher"; } String AudioEffectNotePuncher::get_provider_id() const { return "internal"; } int AudioEffectNotePuncher::get_control_port_count() const { return CONTROL_PORT_MAX; } ControlPort *AudioEffectNotePuncher::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void AudioEffectNotePuncher::reset() { _update_buffers(); } /* Load/Save */ JSON::Node AudioEffectNotePuncher::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectNotePuncher::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectNotePuncher::_update_buffers() { envelope_block.resize(block_size); for (int i = 0; i < MAX_PUNCHES; i++) { punches[i].active = false; punches[i].time = 0; } } AudioEffectNotePuncher::AudioEffectNotePuncher() { control_ports[CONTROL_PORT_PUNCH_DB].name = "Punch Strength (db)"; control_ports[CONTROL_PORT_PUNCH_DB].identifier = "punch_strength"; control_ports[CONTROL_PORT_PUNCH_DB].min = -24; control_ports[CONTROL_PORT_PUNCH_DB].max = 24; control_ports[CONTROL_PORT_PUNCH_DB].step = 0.1; control_ports[CONTROL_PORT_PUNCH_DB].value = 6; control_ports[CONTROL_PORT_ATTACK_MS].name = "Attack (ms)"; control_ports[CONTROL_PORT_ATTACK_MS].identifier = "attack"; 
	control_ports[CONTROL_PORT_ATTACK_MS].min = 0.1;
	control_ports[CONTROL_PORT_ATTACK_MS].max = 20;
	control_ports[CONTROL_PORT_ATTACK_MS].step = 0.1;
	control_ports[CONTROL_PORT_ATTACK_MS].value = 5;

	control_ports[CONTROL_PORT_DECAY_MS].name = "Decay (ms)";
	control_ports[CONTROL_PORT_DECAY_MS].identifier = "decay";
	control_ports[CONTROL_PORT_DECAY_MS].min = 5;
	control_ports[CONTROL_PORT_DECAY_MS].max = 200;
	control_ports[CONTROL_PORT_DECAY_MS].step = 0.1;
	control_ports[CONTROL_PORT_DECAY_MS].value = 50;

	block_size = 128;
	sampling_rate = 0;
	set_sampling_rate(44100);
	set_process_block_size(128);
	_update_buffers();
}

AudioEffectNotePuncher::~AudioEffectNotePuncher() {
}

zytrax-master/effects/internal/effect_note_puncher.h

#ifndef EFFECT_NOTE_PUNCHER_H
#define EFFECT_NOTE_PUNCHER_H

#include "engine/audio_effect.h"

class AudioEffectNotePuncher : public AudioEffect {
	enum {
		MAX_PUNCHES = 32
	};

	struct Punch {
		bool active;
		int time;
	};

	Punch punches[MAX_PUNCHES];

	int block_size;
	int sampling_rate;

	enum ControlPorts {
		CONTROL_PORT_PUNCH_DB,
		CONTROL_PORT_ATTACK_MS,
		CONTROL_PORT_DECAY_MS,
		CONTROL_PORT_MAX
	};

	Vector<float> envelope_block;
	ControlPortDefault control_ports[CONTROL_PORT_MAX];
	void _update_buffers();

public:
	//process
	virtual bool has_secondary_input() const;
	virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active);
	virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active);
	virtual void set_process_block_size(int p_size);
	virtual void set_sampling_rate(int p_hz);
	//info
	virtual String get_name() const;
	virtual String get_unique_id() const;
	virtual String get_provider_id() const;
	virtual int get_control_port_count() const;
	virtual ControlPort *get_control_port(int p_port);
	virtual void reset();
	/* Load/Save */
	virtual
JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectNotePuncher(); ~AudioEffectNotePuncher(); }; #endif // EFFECT_NOTE_PUNCHER_H zytrax-master/effects/internal/effect_panner.cpp000066400000000000000000000043521347722000700224540ustar00rootroot00000000000000#include "effect_panner.h" //process bool AudioEffectPanner::has_secondary_input() const { return false; } void AudioEffectPanner::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { float lvol = CLAMP(1.0 - control_ports[CONTROL_PORT_PAN].value, 0, 1); float rvol = CLAMP(1.0 + control_ports[CONTROL_PORT_PAN].value, 0, 1); for (int i = 0; i < block_size; i++) { p_out[i].l = p_in[i].l * lvol + p_in[i].r * (1.0 - rvol); p_out[i].r = p_in[i].r * rvol + p_in[i].l * (1.0 - lvol); } } void AudioEffectPanner::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { } void AudioEffectPanner::set_process_block_size(int p_size) { block_size = p_size; } void AudioEffectPanner::set_sampling_rate(int p_hz) { } //info String AudioEffectPanner::get_name() const { return "Panner"; } String AudioEffectPanner::get_unique_id() const { return "panner"; } String AudioEffectPanner::get_provider_id() const { return "internal"; } int AudioEffectPanner::get_control_port_count() const { return CONTROL_PORT_MAX; } ControlPort *AudioEffectPanner::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void AudioEffectPanner::reset() { } /* Load/Save */ JSON::Node AudioEffectPanner::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectPanner::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { 
std::string key = control_ports[i].identifier.utf8().get_data();
		if (node.has(key)) {
			control_ports[i].value = node.get(key).toFloat();
		}
	}
	return OK;
}

AudioEffectPanner::AudioEffectPanner() {
	control_ports[CONTROL_PORT_PAN].name = "Pan";
	control_ports[CONTROL_PORT_PAN].identifier = "pan";
	control_ports[CONTROL_PORT_PAN].min = -1;
	control_ports[CONTROL_PORT_PAN].max = 1;
	control_ports[CONTROL_PORT_PAN].step = 0.01;
	control_ports[CONTROL_PORT_PAN].value = 0;
	block_size = 128;
}

AudioEffectPanner::~AudioEffectPanner() {
}

zytrax-master/effects/internal/effect_panner.h

#ifndef EFFECT_PANNER_H
#define EFFECT_PANNER_H

#include "engine/audio_effect.h"

class AudioEffectPanner : public AudioEffect {
	enum ControlPorts {
		CONTROL_PORT_PAN,
		CONTROL_PORT_MAX
	};

	ControlPortDefault control_ports[CONTROL_PORT_MAX];
	int block_size;

public:
	//process
	virtual bool has_secondary_input() const;
	virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active);
	virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active);
	virtual void set_process_block_size(int p_size);
	virtual void set_sampling_rate(int p_hz);
	//info
	virtual String get_name() const;
	virtual String get_unique_id() const;
	virtual String get_provider_id() const;
	virtual int get_control_port_count() const;
	virtual ControlPort *get_control_port(int p_port);
	virtual void reset();
	/* Load/Save */
	virtual JSON::Node to_json() const;
	virtual Error from_json(const JSON::Node &node);

	AudioEffectPanner();
	~AudioEffectPanner();
};

#endif // EFFECT_PANNER_H

zytrax-master/effects/internal/effect_phaser.cpp

#include "effect_phaser.h"
#include "dsp/db.h"
#include <math.h>

//process
bool
AudioEffectPhaser::has_secondary_input() const { return false; } void AudioEffectPhaser::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { float dmin = control_ports[CONTROL_PORT_RANGE_MIN_HZ].value / (sampling_rate / 2.0); float dmax = control_ports[CONTROL_PORT_RANGE_MAX_HZ].value / (sampling_rate / 2.0); float increment = 2.f * M_PI * (control_ports[CONTROL_PORT_RATE_HZ].value / sampling_rate); float depth = control_ports[CONTROL_PORT_DEPTH].value; float feedback = control_ports[CONTROL_PORT_FEEDBACK].value; for (int i = 0; i < block_size; i++) { phase += increment; while (phase >= M_PI * 2.f) { phase -= M_PI * 2.f; } float d = dmin + (dmax - dmin) * ((sin(phase) + 1.f) / 2.f); //update filter coeffs for (int j = 0; j < 6; j++) { allpass[0][j].delay(d); allpass[1][j].delay(d); } //calculate output float y = allpass[0][0].update( allpass[0][1].update( allpass[0][2].update( allpass[0][3].update( allpass[0][4].update( allpass[0][5].update(p_in[i].l + h.l * feedback)))))); h.l = y; p_out[i].l = p_in[i].l + y * depth; y = allpass[1][0].update( allpass[1][1].update( allpass[1][2].update( allpass[1][3].update( allpass[1][4].update( allpass[1][5].update(p_in[i].r + h.r * feedback)))))); h.r = y; p_out[i].r = p_in[i].r + y * depth; } } void AudioEffectPhaser::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { } void AudioEffectPhaser::set_process_block_size(int p_size) { block_size = p_size; } void AudioEffectPhaser::set_sampling_rate(int p_hz) { if (sampling_rate != p_hz) { sampling_rate = p_hz; _update_buffers(); } } //info String AudioEffectPhaser::get_name() const { return "Phaser"; } String AudioEffectPhaser::get_unique_id() const { return "phaser"; } String AudioEffectPhaser::get_provider_id() const { return "internal"; } int AudioEffectPhaser::get_control_port_count() const { return 
CONTROL_PORT_MAX; } ControlPort *AudioEffectPhaser::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void AudioEffectPhaser::reset() { _update_buffers(); } /* Load/Save */ JSON::Node AudioEffectPhaser::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectPhaser::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectPhaser::_update_buffers() { h = AudioFrame(0, 0); phase = 0; for (int i = 0; i < 2; i++) { for (int j = 0; j < 6; j++) { allpass[i][j] = AllpassDelay(); } } } AudioEffectPhaser::AudioEffectPhaser() { control_ports[CONTROL_PORT_RANGE_MIN_HZ].name = "Range Min (hz)"; control_ports[CONTROL_PORT_RANGE_MIN_HZ].identifier = "range_min"; control_ports[CONTROL_PORT_RANGE_MIN_HZ].min = 10; control_ports[CONTROL_PORT_RANGE_MIN_HZ].max = 10000; control_ports[CONTROL_PORT_RANGE_MIN_HZ].step = 1; control_ports[CONTROL_PORT_RANGE_MIN_HZ].value = 440; control_ports[CONTROL_PORT_RANGE_MAX_HZ].name = "Range Max (hz)"; control_ports[CONTROL_PORT_RANGE_MAX_HZ].identifier = "range_max"; control_ports[CONTROL_PORT_RANGE_MAX_HZ].min = 10; control_ports[CONTROL_PORT_RANGE_MAX_HZ].max = 10000; control_ports[CONTROL_PORT_RANGE_MAX_HZ].step = 1; control_ports[CONTROL_PORT_RANGE_MAX_HZ].value = 1600; control_ports[CONTROL_PORT_RATE_HZ].name = "Rate (hz)"; control_ports[CONTROL_PORT_RATE_HZ].identifier = "rate"; control_ports[CONTROL_PORT_RATE_HZ].min = 0.01; control_ports[CONTROL_PORT_RATE_HZ].max = 20; control_ports[CONTROL_PORT_RATE_HZ].step = 0.01; control_ports[CONTROL_PORT_RATE_HZ].value = 0.5; control_ports[CONTROL_PORT_FEEDBACK].name = "Feedback"; 
control_ports[CONTROL_PORT_FEEDBACK].identifier = "feedback"; control_ports[CONTROL_PORT_FEEDBACK].min = 0.1; control_ports[CONTROL_PORT_FEEDBACK].max = 0.9; control_ports[CONTROL_PORT_FEEDBACK].step = 0.1; control_ports[CONTROL_PORT_FEEDBACK].value = 0.7; control_ports[CONTROL_PORT_DEPTH].name = "Depth"; control_ports[CONTROL_PORT_DEPTH].identifier = "depth"; control_ports[CONTROL_PORT_DEPTH].min = 0.1; control_ports[CONTROL_PORT_DEPTH].max = 4; control_ports[CONTROL_PORT_DEPTH].step = 0.1; control_ports[CONTROL_PORT_DEPTH].value = 1; block_size = 128; sampling_rate = 0; set_sampling_rate(44100); set_process_block_size(128); _update_buffers(); } AudioEffectPhaser::~AudioEffectPhaser() { } zytrax-master/effects/internal/effect_phaser.h000066400000000000000000000031451347722000700221170ustar00rootroot00000000000000#ifndef EFFECT_PHASER_H #define EFFECT_PHASER_H #include "engine/audio_effect.h" class AudioEffectPhaser : public AudioEffect { class AllpassDelay { float a, h; public: _FORCE_INLINE_ void delay(float d) { a = (1.f - d) / (1.f + d); } _FORCE_INLINE_ float update(float s) { float y = s * -a + h; h = y * a + s; return y; } AllpassDelay() { a = 0; h = 0; } }; AllpassDelay allpass[2][6]; float phase; AudioFrame h; int block_size; int sampling_rate; enum ControlPorts { CONTROL_PORT_RANGE_MIN_HZ, CONTROL_PORT_RANGE_MAX_HZ, CONTROL_PORT_RATE_HZ, CONTROL_PORT_FEEDBACK, CONTROL_PORT_DEPTH, CONTROL_PORT_MAX }; ControlPortDefault control_ports[CONTROL_PORT_MAX]; void _update_buffers(); public: //process virtual bool has_secondary_input() const; virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active); virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active); virtual void set_process_block_size(int p_size); virtual void set_sampling_rate(int p_hz); //info virtual String get_name() const; 
virtual String get_unique_id() const; virtual String get_provider_id() const; virtual int get_control_port_count() const; virtual ControlPort *get_control_port(int p_port); virtual void reset(); /* Load/Save */ virtual JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectPhaser(); ~AudioEffectPhaser(); }; #endif // EFFECT_PHASER_H zytrax-master/effects/internal/effect_reverb.cpp000066400000000000000000000160121347722000700224520ustar00rootroot00000000000000 #include "effect_reverb.h" #include //process bool AudioEffectReverb::has_secondary_input() const { return false; } void AudioEffectReverb::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { if (control_ports[CONTROL_PORT_PREDELAY_MSEC].was_set) { reverb[0].set_predelay(control_ports[CONTROL_PORT_PREDELAY_MSEC].value); reverb[1].set_predelay(control_ports[CONTROL_PORT_PREDELAY_MSEC].value); control_ports[CONTROL_PORT_PREDELAY_MSEC].was_set = false; } if (control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].was_set) { reverb[0].set_predelay_feedback(control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].value); reverb[1].set_predelay_feedback(control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].value); control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].was_set = false; } if (control_ports[CONTROL_PORT_ROOM_SIZE].was_set) { reverb[0].set_room_size(control_ports[CONTROL_PORT_ROOM_SIZE].value); reverb[1].set_room_size(control_ports[CONTROL_PORT_ROOM_SIZE].value); control_ports[CONTROL_PORT_ROOM_SIZE].was_set = false; } if (control_ports[CONTROL_PORT_DAMPING].was_set) { reverb[0].set_damp(control_ports[CONTROL_PORT_DAMPING].value); reverb[1].set_damp(control_ports[CONTROL_PORT_DAMPING].value); control_ports[CONTROL_PORT_DAMPING].was_set = false; } if (control_ports[CONTROL_PORT_SPREAD].was_set) { reverb[0].set_extra_spread(control_ports[CONTROL_PORT_SPREAD].value); reverb[1].set_extra_spread(control_ports[CONTROL_PORT_SPREAD].value); 
control_ports[CONTROL_PORT_SPREAD].was_set = false; } if (control_ports[CONTROL_PORT_DRY].was_set) { reverb[0].set_dry(control_ports[CONTROL_PORT_DRY].value); reverb[1].set_dry(control_ports[CONTROL_PORT_DRY].value); control_ports[CONTROL_PORT_DRY].was_set = false; } if (control_ports[CONTROL_PORT_WET].was_set) { reverb[0].set_wet(control_ports[CONTROL_PORT_WET].value); reverb[1].set_wet(control_ports[CONTROL_PORT_WET].value); control_ports[CONTROL_PORT_WET].was_set = false; } if (control_ports[CONTROL_PORT_HPF].was_set) { reverb[0].set_highpass(control_ports[CONTROL_PORT_HPF].value); reverb[1].set_highpass(control_ports[CONTROL_PORT_HPF].value); control_ports[CONTROL_PORT_HPF].was_set = false; } float *src_buf = &tmp_frames_src[0]; float *dst_buf = &tmp_frames_dst[0]; for (int i = 0; i < 2; i++) { for (int j = 0; j < block_size; j++) { src_buf[j] = p_in[j][i]; } reverb[i].process(src_buf, dst_buf, block_size); for (int j = 0; j < block_size; j++) { p_out[j][i] = dst_buf[j]; } } } void AudioEffectReverb::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { } void AudioEffectReverb::set_process_block_size(int p_size) { block_size = p_size; tmp_frames_src.resize(block_size); tmp_frames_dst.resize(block_size); } void AudioEffectReverb::set_sampling_rate(int p_hz) { if (sampling_rate != p_hz) { sampling_rate = p_hz; _update_buffers(); } } //info String AudioEffectReverb::get_name() const { return "Reverb"; } String AudioEffectReverb::get_unique_id() const { return "reverb"; } String AudioEffectReverb::get_provider_id() const { return "internal"; } int AudioEffectReverb::get_control_port_count() const { return CONTROL_PORT_MAX; } ControlPort *AudioEffectReverb::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void
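`Reverb` is a mono processor, so `AudioEffectReverb::process()` deinterleaves each stereo channel into a scratch `float` buffer, runs `reverb[i].process()` on it, and interleaves the result back. A minimal sketch of that split/process/merge pattern (the `StereoFrame` type and the lambda processor stand in for the project's `AudioFrame` and `Reverb`):

```cpp
#include <cassert>
#include <vector>

struct StereoFrame { float l, r; };

// Mirrors the channel loop in AudioEffectReverb::process(): copy one channel
// into a mono scratch buffer, run a mono processor, write the result back.
template <typename MonoFn>
void process_stereo(const StereoFrame *in, StereoFrame *out, int frames, MonoFn mono) {
	std::vector<float> src(frames), dst(frames);
	for (int ch = 0; ch < 2; ch++) {
		for (int j = 0; j < frames; j++)
			src[j] = (ch == 0) ? in[j].l : in[j].r;
		mono(src.data(), dst.data(), frames); // stands in for Reverb::process()
		for (int j = 0; j < frames; j++) {
			if (ch == 0)
				out[j].l = dst[j];
			else
				out[j].r = dst[j];
		}
	}
}

// Test helper (mine): run a gain-of-2 mono processor and return out[i].l + out[i].r.
float test_gain2_sum(int i) {
	StereoFrame in[4] = { { 1, 2 }, { 3, 4 }, { 5, 6 }, { 7, 8 } };
	StereoFrame out[4];
	process_stereo(in, out, 4, [](const float *s, float *d, int n) {
		for (int k = 0; k < n; k++)
			d[k] = 2.0f * s[k];
	});
	return out[i].l + out[i].r;
}
```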
AudioEffectReverb::reset() { _update_buffers(); } /* Load/Save */ JSON::Node AudioEffectReverb::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectReverb::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectReverb::_update_buffers() { reverb[0].set_mix_rate(sampling_rate); reverb[0].set_extra_spread_base(0); reverb[1].set_mix_rate(sampling_rate); reverb[1].set_extra_spread_base(0.000521); } AudioEffectReverb::AudioEffectReverb() { control_ports[CONTROL_PORT_PREDELAY_MSEC].name = "Predelay (msec)"; control_ports[CONTROL_PORT_PREDELAY_MSEC].identifier = "predelay"; control_ports[CONTROL_PORT_PREDELAY_MSEC].min = 20; control_ports[CONTROL_PORT_PREDELAY_MSEC].max = 500; control_ports[CONTROL_PORT_PREDELAY_MSEC].step = 1; control_ports[CONTROL_PORT_PREDELAY_MSEC].value = 150; control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].name = "Predelay Feedback"; control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].identifier = "predelay_fbk"; control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].min = 0; control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].max = 0.98; control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].step = 0.01; control_ports[CONTROL_PORT_PREDELAY_FEEDBACK].value = 0.4; control_ports[CONTROL_PORT_ROOM_SIZE].name = "Room Size"; control_ports[CONTROL_PORT_ROOM_SIZE].identifier = "room_size"; control_ports[CONTROL_PORT_ROOM_SIZE].min = 0; control_ports[CONTROL_PORT_ROOM_SIZE].max = 1; control_ports[CONTROL_PORT_ROOM_SIZE].step = 0.01; control_ports[CONTROL_PORT_ROOM_SIZE].value = 0.8; control_ports[CONTROL_PORT_DAMPING].name = "Damping"; control_ports[CONTROL_PORT_DAMPING].identifier = "damping"; control_ports[CONTROL_PORT_DAMPING].min = 0; 
control_ports[CONTROL_PORT_DAMPING].max = 1; control_ports[CONTROL_PORT_DAMPING].step = 0.01; control_ports[CONTROL_PORT_DAMPING].value = 0.5; control_ports[CONTROL_PORT_SPREAD].name = "Spread"; control_ports[CONTROL_PORT_SPREAD].identifier = "spread"; control_ports[CONTROL_PORT_SPREAD].min = 0; control_ports[CONTROL_PORT_SPREAD].max = 1; control_ports[CONTROL_PORT_SPREAD].step = 0.01; control_ports[CONTROL_PORT_SPREAD].value = 1.0; control_ports[CONTROL_PORT_HPF].name = "High Pass Filter"; control_ports[CONTROL_PORT_HPF].identifier = "hpf"; control_ports[CONTROL_PORT_HPF].min = 0; control_ports[CONTROL_PORT_HPF].max = 1; control_ports[CONTROL_PORT_HPF].step = 0.01; control_ports[CONTROL_PORT_HPF].value = 0.2; control_ports[CONTROL_PORT_DRY].name = "Dry"; control_ports[CONTROL_PORT_DRY].identifier = "dry"; control_ports[CONTROL_PORT_DRY].min = 0; control_ports[CONTROL_PORT_DRY].max = 1; control_ports[CONTROL_PORT_DRY].step = 0.01; control_ports[CONTROL_PORT_DRY].value = 1; control_ports[CONTROL_PORT_WET].name = "Wet"; control_ports[CONTROL_PORT_WET].identifier = "wet"; control_ports[CONTROL_PORT_WET].min = 0; control_ports[CONTROL_PORT_WET].max = 1; control_ports[CONTROL_PORT_WET].step = 0.01; control_ports[CONTROL_PORT_WET].value = 0.5; block_size = 128; sampling_rate = 0; set_sampling_rate(44100); set_process_block_size(128); for (int i = 0; i < CONTROL_PORT_MAX; i++) { control_ports[i].was_set = true; //ensure updating } } AudioEffectReverb::~AudioEffectReverb() { } zytrax-master/effects/internal/effect_reverb.h000066400000000000000000000027261347722000700221260ustar00rootroot00000000000000#ifndef EFFECT_REVERB_H #define EFFECT_REVERB_H #include "effects/internal/reverb.h" #include "engine/audio_effect.h" class AudioEffectReverb : public AudioEffect { Reverb reverb[2]; Vector tmp_frames_src; Vector tmp_frames_dst; int block_size; int sampling_rate; enum ControlPorts { CONTROL_PORT_PREDELAY_MSEC, CONTROL_PORT_PREDELAY_FEEDBACK, CONTROL_PORT_ROOM_SIZE, 
CONTROL_PORT_DAMPING, CONTROL_PORT_SPREAD, CONTROL_PORT_DRY, CONTROL_PORT_WET, CONTROL_PORT_HPF, CONTROL_PORT_MAX }; ControlPortDefault control_ports[CONTROL_PORT_MAX]; void _update_buffers(); public: //process virtual bool has_secondary_input() const; virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active); virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active); virtual void set_process_block_size(int p_size); virtual void set_sampling_rate(int p_hz); //info virtual String get_name() const; virtual String get_unique_id() const; virtual String get_provider_id() const; virtual int get_control_port_count() const; virtual ControlPort *get_control_port(int p_port); virtual void reset(); /* Load/Save */ virtual JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectReverb(); ~AudioEffectReverb(); }; #endif // EFFECT_REVERB_H zytrax-master/effects/internal/effect_stereo_enhancer.cpp000066400000000000000000000111571347722000700243360ustar00rootroot00000000000000#include "effect_stereo_enhancer.h" //process bool AudioEffectStereoEnhancer::has_secondary_input() const { return false; } void AudioEffectStereoEnhancer::process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) { float intensity = control_ports[CONTROL_PORT_PAN_PULLOUT].value; bool surround_mode = control_ports[CONTROL_PORT_SURROUND].value > 0; float surround_amount = control_ports[CONTROL_PORT_SURROUND].value; unsigned int delay_frames = (control_ports[CONTROL_PORT_TIME_PULLOUT].value / 1000.0) * sampling_rate; for (int i = 0; i < block_size; i++) { float l = p_in[i].l; float r = p_in[i].r; float center = (l + r) / 2.0f; l = (center + (l - center) * intensity); r = (center + (r - center) * intensity); if (surround_mode) { float val = (l + r) /
2.0; delay_ringbuff[ringbuff_pos & ringbuff_mask] = val; float out = delay_ringbuff[(ringbuff_pos - delay_frames) & ringbuff_mask] * surround_amount; l += out; r += -out; } else { float val = r; delay_ringbuff[ringbuff_pos & ringbuff_mask] = val; //r is delayed r = delay_ringbuff[(ringbuff_pos - delay_frames) & ringbuff_mask]; ; } p_out[i].l = l; p_out[i].r = r; ringbuff_pos++; } } void AudioEffectStereoEnhancer::process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) { } void AudioEffectStereoEnhancer::set_process_block_size(int p_size) { block_size = p_size; } void AudioEffectStereoEnhancer::set_sampling_rate(int p_hz) { if (sampling_rate != p_hz) { sampling_rate = p_hz; _update_buffers(); } } //info String AudioEffectStereoEnhancer::get_name() const { return "StereoEnh"; } String AudioEffectStereoEnhancer::get_unique_id() const { return "stereo_enhancer"; } String AudioEffectStereoEnhancer::get_provider_id() const { return "internal"; } int AudioEffectStereoEnhancer::get_control_port_count() const { return CONTROL_PORT_MAX; } ControlPort *AudioEffectStereoEnhancer::get_control_port(int p_port) { ERR_FAIL_INDEX_V(p_port, CONTROL_PORT_MAX, NULL); return &control_ports[p_port]; } void AudioEffectStereoEnhancer::reset() { _update_buffers(); } /* Load/Save */ JSON::Node AudioEffectStereoEnhancer::to_json() const { JSON::Node node = JSON::object(); for (int i = 0; i < CONTROL_PORT_MAX; i++) { node.add(control_ports[i].identifier.utf8().get_data(), control_ports[i].value); } return node; } Error AudioEffectStereoEnhancer::from_json(const JSON::Node &node) { for (int i = 0; i < CONTROL_PORT_MAX; i++) { std::string key = control_ports[i].identifier.utf8().get_data(); if (node.has(key)) { control_ports[i].value = node.get(key).toFloat(); } } return OK; } void AudioEffectStereoEnhancer::_update_buffers() { float ring_buffer_max_size = MAX_DELAY_MS + 8; //pad 
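The enhancer's delay line relies on a power-of-two buffer: `ringbuff_pos & ringbuff_mask` wraps the write position, and `(ringbuff_pos - delay_frames) & ringbuff_mask` reads the sample written `delay_frames` frames earlier, with unsigned wraparound covering the start-up case where `pos < delay_frames`. A small sketch of the same indexing (buffer size and helper name are illustrative):

```cpp
#include <cassert>
#include <vector>

// Mirrors the stereo enhancer's ring-buffer indexing: write sample i at each
// step, return the sample read delay_frames behind the write head at step n.
float delayed_sample(unsigned delay_frames, unsigned n) {
	const unsigned size = 8; // must be a power of two for the mask trick
	const unsigned mask = size - 1;
	std::vector<float> buff(size, 0.0f);
	unsigned pos = 0;
	float out = 0.0f;
	for (unsigned i = 0; i <= n; i++) {
		buff[pos & mask] = (float)i;                  // write input sample i
		out = buff[(pos - delay_frames) & mask];      // read delay_frames back
		pos++;                                        // unsigned: wraps safely
	}
	return out;
}
```

Before `delay_frames` samples have been written, the read lands on a zeroed slot, so the delayed tap fades in silently rather than reading garbage.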
ring_buffer_max_size /= 1000.0; //convert to seconds ring_buffer_max_size *= sampling_rate; int ringbuff_size = (int)ring_buffer_max_size; int bits = 0; while (ringbuff_size > 0) { bits++; ringbuff_size /= 2; } ringbuff_size = 1 << bits; ringbuff_mask = ringbuff_size - 1; ringbuff_pos = 0; if (delay_ringbuff) { delete[] delay_ringbuff; } delay_ringbuff = new float[ringbuff_size]; for (int i = 0; i < ringbuff_size; i++) { delay_ringbuff[i] = 0; } } AudioEffectStereoEnhancer::AudioEffectStereoEnhancer() { control_ports[CONTROL_PORT_PAN_PULLOUT].name = "Pan Pullout"; control_ports[CONTROL_PORT_PAN_PULLOUT].identifier = "pan_pullout"; control_ports[CONTROL_PORT_PAN_PULLOUT].min = 0; control_ports[CONTROL_PORT_PAN_PULLOUT].max = 4; control_ports[CONTROL_PORT_PAN_PULLOUT].step = 0.01; control_ports[CONTROL_PORT_PAN_PULLOUT].value = 1; control_ports[CONTROL_PORT_TIME_PULLOUT].name = "Time Pullout"; control_ports[CONTROL_PORT_TIME_PULLOUT].identifier = "time_pullout"; control_ports[CONTROL_PORT_TIME_PULLOUT].min = 0; control_ports[CONTROL_PORT_TIME_PULLOUT].max = MAX_DELAY_MS; control_ports[CONTROL_PORT_TIME_PULLOUT].step = 0.1; control_ports[CONTROL_PORT_TIME_PULLOUT].value = 0; control_ports[CONTROL_PORT_SURROUND].name = "Surround"; control_ports[CONTROL_PORT_SURROUND].identifier = "surround"; control_ports[CONTROL_PORT_SURROUND].min = 0; control_ports[CONTROL_PORT_SURROUND].max = 1; control_ports[CONTROL_PORT_SURROUND].step = 0.01; control_ports[CONTROL_PORT_SURROUND].value = 0; delay_ringbuff = NULL; block_size = 128; sampling_rate = 0; set_sampling_rate(44100); set_process_block_size(128); _update_buffers(); } AudioEffectStereoEnhancer::~AudioEffectStereoEnhancer() { if (delay_ringbuff) { delete[] delay_ringbuff; } } zytrax-master/effects/internal/effect_stereo_enhancer.h000066400000000000000000000026221347722000700240000ustar00rootroot00000000000000#ifndef EFFECT_STEREO_ENHANCER_H #define EFFECT_STEREO_ENHANCER_H #include "engine/audio_effect.h" class 
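`_update_buffers()` sizes the ring buffer by counting halvings and then allocating `1 << bits`. Note this always yields a power of two strictly larger than the requested length: an exact power of two gets doubled (e.g. 8 becomes 16), which is harmless padding here. The same loop in isolation:

```cpp
#include <cassert>

// Same sizing loop as AudioEffectStereoEnhancer::_update_buffers(): counts
// how many halvings reach zero, then allocates 1 << bits. The result is a
// power of two strictly greater than n, so the mask trick always has slack.
int ringbuff_alloc_size(int n) {
	int bits = 0;
	while (n > 0) {
		bits++;
		n /= 2;
	}
	return 1 << bits;
}
```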
AudioEffectStereoEnhancer : public AudioEffect { enum { MAX_DELAY_MS = 50 }; int block_size; int sampling_rate; float *delay_ringbuff; unsigned int ringbuff_pos; unsigned int ringbuff_mask; enum ControlPorts { CONTROL_PORT_PAN_PULLOUT, CONTROL_PORT_TIME_PULLOUT, CONTROL_PORT_SURROUND, CONTROL_PORT_MAX }; ControlPortDefault control_ports[CONTROL_PORT_MAX]; void _update_buffers(); public: //process virtual bool has_secondary_input() const; virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active); virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active); virtual void set_process_block_size(int p_size); virtual void set_sampling_rate(int p_hz); //info virtual String get_name() const; virtual String get_unique_id() const; virtual String get_provider_id() const; virtual int get_control_port_count() const; virtual ControlPort *get_control_port(int p_port); virtual void reset(); /* Load/Save */ virtual JSON::Node to_json() const; virtual Error from_json(const JSON::Node &node); AudioEffectStereoEnhancer(); ~AudioEffectStereoEnhancer(); }; #endif // EFFECT_STEREO_ENHANCER_H zytrax-master/effects/internal/eq.cpp000066400000000000000000000102261347722000700202570ustar00rootroot00000000000000#include "eq.h" // Author: reduzio@gmail.com (C) 2006 #include "globals/error_macros.h" #include "eq.h" #include #define POW2(v) ((v) * (v)) #define SQRT12 0.7071067811865475244008443621048490 /* Helper */ static int solve_quadratic(double a, double b, double c, double *r1, double *r2) { //solves quadractic and returns number of roots double base = 2 * a; if (base == 0.0f) return 0; double squared = b * b - 4 * a * c; if (squared < 0.0) return 0; squared = sqrt(squared); *r1 = (-b + squared) / base; *r2 = (-b - squared) / base; if (*r1 == *r2) return 1; else return 2; } EQ::BandProcess::BandProcess() { c1 = 
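`solve_quadratic()` in eq.cpp returns the number of real roots of `a*x^2 + b*x + c` and writes them through the output pointers; the EQ uses it to pick filter coefficients from the band's magnitude constraints. A self-contained copy, plus a small wrapper for exercising it (`larger_root` is mine, not in the project):

```cpp
#include <cassert>
#include <cmath>

// Same logic as solve_quadratic() in eq.cpp: returns the count of real roots
// of a*x^2 + b*x + c and stores them through r1/r2.
int solve_quadratic(double a, double b, double c, double *r1, double *r2) {
	double base = 2 * a;
	if (base == 0.0)
		return 0; // degenerate: not a quadratic
	double squared = b * b - 4 * a * c;
	if (squared < 0.0)
		return 0; // complex roots only
	squared = std::sqrt(squared);
	*r1 = (-b + squared) / base;
	*r2 = (-b - squared) / base;
	return (*r1 == *r2) ? 1 : 2;
}

// Test helper (mine): the larger real root, or NAN when there is none.
double larger_root(double a, double b, double c) {
	double r1 = 0, r2 = 0;
	return solve_quadratic(a, b, c, &r1, &r2) > 0 ? r1 : NAN;
}
```

Note the caller in `recalculate_band_coefficients()` guards the zero-root case with `ERR_CONTINUE(roots == 0)` rather than trusting the output pointers.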
c2 = c3 = history.a1 = history.a2 = history.a3 = 0; history.b1 = history.b2 = history.b3 = 0; } void EQ::recalculate_band_coefficients() { #define BAND_LOG(m_f) (log((m_f)) / log(2.)) for (int i = 0; i < band.size(); i++) { double octave_size; double frq = band[i].freq; if (i == 0) { octave_size = BAND_LOG(band[1].freq) - BAND_LOG(frq); } else if (i == (band.size() - 1)) { octave_size = BAND_LOG(frq) - BAND_LOG(band[i - 1].freq); } else { double next = BAND_LOG(band[i + 1].freq) - BAND_LOG(frq); double prev = BAND_LOG(frq) - BAND_LOG(band[i - 1].freq); octave_size = (next + prev) / 2.0; } double frq_l = round(frq / pow(2.0, octave_size / 2.0)); double side_gain2 = POW2(SQRT12); double th = 2.0 * M_PI * frq / mix_rate; double th_l = 2.0 * M_PI * frq_l / mix_rate; double c2a = side_gain2 * POW2(cos(th)) - 2.0 * side_gain2 * cos(th_l) * cos(th) + side_gain2 - POW2(sin(th_l)); double c2b = 2.0 * side_gain2 * POW2(cos(th_l)) + side_gain2 * POW2(cos(th)) - 2.0 * side_gain2 * cos(th_l) * cos(th) - side_gain2 + POW2(sin(th_l)); double c2c = 0.25 * side_gain2 * POW2(cos(th)) - 0.5 * side_gain2 * cos(th_l) * cos(th) + 0.25 * side_gain2 - 0.25 * POW2(sin(th_l)); //printf("band %i, precoefs = %f,%f,%f\n",i,c2a,c2b,c2c); double r1, r2; //roots int roots = solve_quadratic(c2a, c2b, c2c, &r1, &r2); ERR_CONTINUE(roots == 0); band[i].c1 = 2.0 * ((0.5 - r1) / 2.0); band[i].c2 = 2.0 * r1; band[i].c3 = 2.0 * (0.5 + r1) * cos(th); //printf("band %i, coefs = %f,%f,%f\n",i,(float)bands[i].c1,(float)bands[i].c2,(float)bands[i].c3); } } void EQ::set_preset_band_mode(Preset p_preset) { band.clear(); #define PUSH_BANDS(m_bands) \ for (int i = 0; i < m_bands; i++) { \ Band b; \ b.freq = bands[i]; \ band.push_back(b); \ } switch (p_preset) { case PRESET_6_BANDS: { static const double bands[] = { 32, 100, 320, 1e3, 3200, 10e3 }; PUSH_BANDS(6); } break; case PRESET_8_BANDS: { static const double bands[] = { 32, 72, 192, 512, 1200, 3000, 7500, 16e3 }; PUSH_BANDS(8); } break; case PRESET_10_BANDS: 
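`recalculate_band_coefficients()` works in octaves: `BAND_LOG` is log base 2, each band's bandwidth is the log2 distance to its neighbors (averaged for interior bands), and the lower band edge is `frq / 2^(octave_size / 2)`. Sketched as free functions (function names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// BAND_LOG from eq.cpp: log base 2 via natural logs.
static double band_log(double f) { return std::log(f) / std::log(2.0); }

// Octave distance between two band centers, as used to size each EQ band:
// e.g. 1 kHz -> 2 kHz is exactly one octave.
double octaves_between(double f_lo, double f_hi) {
	return band_log(f_hi) - band_log(f_lo);
}

// Lower band edge for a center frequency and a bandwidth in octaves,
// i.e. frq_l = frq / 2^(octave_size / 2) before the rounding in eq.cpp.
double band_lower_edge(double frq, double octave_size) {
	return frq / std::pow(2.0, octave_size / 2.0);
}
```

A 2-octave-wide band centered at 1 kHz therefore has its lower edge one octave down, at 500 Hz.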
{ static const double bands[] = { 31.25, 62.5, 125, 250, 500, 1e3, 2e3, 4e3, 8e3, 16e3 }; PUSH_BANDS(10); } break; case PRESET_21_BANDS: { static const double bands[] = { 22, 32, 44, 63, 90, 125, 175, 250, 350, 500, 700, 1e3, 1400, 2e3, 2800, 4e3, 5600, 8e3, 11e3, 16e3, 22e3 }; PUSH_BANDS(21); } break; case PRESET_31_BANDS: { static const double bands[] = { 20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500, 630, 800, 1e3, 1250, 1600, 2e3, 2500, 3150, 4e3, 5e3, 6300, 8e3, 10e3, 12500, 16e3, 20e3 }; PUSH_BANDS(31); } break; }; recalculate_band_coefficients(); } int EQ::get_band_count() const { return band.size(); } float EQ::get_band_frequency(int p_band) { ERR_FAIL_INDEX_V(p_band, band.size(), 0); return band[p_band].freq; } void EQ::set_bands(const Vector &p_bands) { band.resize(p_bands.size()); for (int i = 0; i < p_bands.size(); i++) { band[i].freq = p_bands[i]; } recalculate_band_coefficients(); } void EQ::set_mix_rate(float p_mix_rate) { mix_rate = p_mix_rate; recalculate_band_coefficients(); } EQ::BandProcess EQ::get_band_processor(int p_band) const { EQ::BandProcess band_proc; ERR_FAIL_INDEX_V(p_band, band.size(), band_proc); band_proc.c1 = band[p_band].c1; band_proc.c2 = band[p_band].c2; band_proc.c3 = band[p_band].c3; return band_proc; } EQ::EQ() { mix_rate = 44100; } EQ::~EQ() { } zytrax-master/effects/internal/eq.h000066400000000000000000000023451347722000700177270ustar00rootroot00000000000000#ifndef EQ_H #define EQ_H // Author: reduzio@gmail.com (C) 2006 #include "globals/typedefs.h" #include "globals/vector.h" /** @author Juan Linietsky */ class EQ { public: enum Preset { PRESET_6_BANDS, PRESET_8_BANDS, PRESET_10_BANDS, PRESET_21_BANDS, PRESET_31_BANDS }; class BandProcess { friend class EQ; float c1, c2, c3; struct History { float a1, a2, a3; float b1, b2, b3; } history; public: inline void process_one(float &p_data); BandProcess(); }; private: struct Band { float freq; float c1, c2, c3; }; Vector band; float mix_rate; void 
recalculate_band_coefficients(); public: void set_mix_rate(float p_mix_rate); int get_band_count() const; void set_preset_band_mode(Preset p_preset); void set_bands(const Vector &p_bands); BandProcess get_band_processor(int p_band) const; float get_band_frequency(int p_band); EQ(); ~EQ(); }; /* Inline Function */ inline void EQ::BandProcess::process_one(float &p_data) { history.a1 = p_data; history.b1 = c1 * (history.a1 - history.a3) + c3 * history.b2 - c2 * history.b3; p_data = history.b1; history.a3 = history.a2; history.a2 = history.a1; history.b3 = history.b2; history.b2 = history.b1; } #endif // EQ_H zytrax-master/effects/internal/reverb.cpp000066400000000000000000000153071347722000700211440ustar00rootroot00000000000000#include "reverb.h" #include "dsp/frame.h" #include const float Reverb::comb_tunings[MAX_COMBS] = { //freeverb comb tunings 0.025306122448979593f, 0.026938775510204082f, 0.028956916099773241f, 0.03074829931972789f, 0.032244897959183672f, 0.03380952380952381f, 0.035306122448979592f, 0.036666666666666667f }; const float Reverb::allpass_tunings[MAX_ALLPASS] = { //freeverb allpass tunings 0.0051020408163265302f, 0.007732426303854875f, 0.01f, 0.012607709750566893f }; void Reverb::process(float *p_src, float *p_dst, int p_frames) { if (p_frames > INPUT_BUFFER_MAX_SIZE) p_frames = INPUT_BUFFER_MAX_SIZE; int predelay_frames = lrint((params.predelay / 1000.0) * params.mix_rate); if (predelay_frames < 10) predelay_frames = 10; if (predelay_frames >= echo_buffer_size) predelay_frames = echo_buffer_size - 1; for (int i = 0; i < p_frames; i++) { if (echo_buffer_pos >= echo_buffer_size) echo_buffer_pos = 0; int read_pos = echo_buffer_pos - predelay_frames; while (read_pos < 0) read_pos += echo_buffer_size; float in = undenormalise(echo_buffer[read_pos] * params.predelay_fb + p_src[i]); echo_buffer[echo_buffer_pos] = in; input_buffer[i] = in; p_dst[i] = 0; //take the chance and clear this echo_buffer_pos++; } if (params.hpf > 0) { float hpaux = expf(-2.0 * 
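`EQ::BandProcess::process_one()` is a three-tap recursive section: it stores the new input in `a1`, forms the output from `c1 * (a1 - a3)` plus feedback through the output history `b2`/`b3`, then shifts both history chains. A standalone copy with an impulse-response helper for testing (the helper and struct name are mine):

```cpp
#include <cassert>

// Standalone copy of EQ::BandProcess::process_one() from eq.h: a recursive
// band section with three samples of input (a*) and output (b*) history.
struct BandProc {
	float c1, c2, c3;
	float a1, a2, a3, b1, b2, b3;
	BandProc(float p_c1, float p_c2, float p_c3) :
			c1(p_c1), c2(p_c2), c3(p_c3),
			a1(0), a2(0), a3(0), b1(0), b2(0), b3(0) {}
	void process_one(float &p_data) {
		a1 = p_data;
		b1 = c1 * (a1 - a3) + c3 * b2 - c2 * b3;
		p_data = b1;
		a3 = a2;
		a2 = a1;
		b3 = b2;
		b2 = b1;
	}
};

// Test helper (mine): impulse-response sample n for given coefficients.
float band_impulse(float c1, float c2, float c3, int n) {
	BandProc bp(c1, c2, c3);
	float s = 0.0f;
	for (int i = 0; i <= n; i++) {
		s = (i == 0) ? 1.0f : 0.0f;
		bp.process_one(s);
	}
	return s;
}
```

With `c1 = 0.5` and `c2 = c3 = 0` the recursion drops out and the section reduces to `0.5 * (x[n] - x[n-2])`, so the impulse response is 0.5, 0, -0.5, 0, ...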
M_PI * params.hpf * 6000 / params.mix_rate); float hp_a1 = (1.0 + hpaux) / 2.0; float hp_a2 = -(1.0 + hpaux) / 2.0; float hp_b1 = hpaux; for (int i = 0; i < p_frames; i++) { float in = input_buffer[i]; input_buffer[i] = in * hp_a1 + hpf_h1 * hp_a2 + hpf_h2 * hp_b1; hpf_h2 = input_buffer[i]; hpf_h1 = in; } } for (int i = 0; i < MAX_COMBS; i++) { Comb &c = comb[i]; int size_limit = c.size - lrintf((float)c.extra_spread_frames * (1.0 - params.extra_spread)); for (int j = 0; j < p_frames; j++) { if (c.pos >= size_limit) //reset this now just in case c.pos = 0; float out = undenormalise(c.buffer[c.pos] * c.feedback); out = out * (1.0 - c.damp) + c.damp_h * c.damp; //lowpass c.damp_h = out; c.buffer[c.pos] = input_buffer[j] + out; p_dst[j] += out; c.pos++; } } static const float allpass_feedback = 0.7; /* this one works, but the other version is just nicer.... int ap_size_limit[MAX_ALLPASS]; for (int i=0;ipos>=ap_size_limit[m_ap]) \ ap->pos=0; \ aux=undenormalise(ap->buffer[ap->pos]); \ in=sample; \ sample=-in+aux; \ ap->pos++; PROCESS_ALLPASS(0); PROCESS_ALLPASS(1); PROCESS_ALLPASS(2); PROCESS_ALLPASS(3); p_dst[i]=sample; } */ for (int i = 0; i < MAX_ALLPASS; i++) { AllPass &a = allpass[i]; int size_limit = a.size - lrintf((float)a.extra_spread_frames * (1.0 - params.extra_spread)); for (int j = 0; j < p_frames; j++) { if (a.pos >= size_limit) a.pos = 0; float aux = a.buffer[a.pos]; a.buffer[a.pos] = undenormalise(allpass_feedback * aux + p_dst[j]); p_dst[j] = aux - allpass_feedback * a.buffer[a.pos]; a.pos++; } } static const float wet_scale = 0.6; for (int i = 0; i < p_frames; i++) { p_dst[i] = p_dst[i] * params.wet * wet_scale + p_src[i] * params.dry; } } void Reverb::set_room_size(float p_size) { params.room_size = p_size; update_parameters(); } void Reverb::set_damp(float p_damp) { params.damp = p_damp; update_parameters(); } void Reverb::set_wet(float p_wet) { params.wet = p_wet; } void Reverb::set_dry(float p_dry) { params.dry = p_dry; } void 
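Each comb in `Reverb::process()` reads the delay line scaled by `feedback`, runs that through a one-pole lowpass (`damp`/`damp_h`) so high frequencies decay faster, and writes input-plus-feedback back into the line. A minimal impulse-response sketch of that loop (buffer length and parameters are illustrative; the spread-based `size_limit` adjustment is omitted, as is `undenormalise`):

```cpp
#include <cassert>
#include <vector>

// Feedback comb with one-pole damping, following the comb loop in
// Reverb::process(): out = buffer[pos] * feedback, lowpassed by damp,
// then buffer[pos] = input + out. Returns output sample n of an impulse.
float comb_impulse(int delay, float feedback, float damp, int n) {
	std::vector<float> buffer(delay, 0.0f);
	int pos = 0;
	float damp_h = 0.0f, out = 0.0f;
	for (int i = 0; i <= n; i++) {
		if (pos >= delay)
			pos = 0;
		out = buffer[pos] * feedback;
		out = out * (1.0f - damp) + damp_h * damp; // damping lowpass
		damp_h = out;
		buffer[pos] = ((i == 0) ? 1.0f : 0.0f) + out; // impulse input + feedback
		pos++;
	}
	return out;
}
```

With `damp = 0` and `feedback = 0.5`, an impulse echoes at multiples of the delay length with geometrically decaying gain (0.5, 0.25, ...).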
Reverb::set_predelay(float p_predelay) { params.predelay = p_predelay; } void Reverb::set_predelay_feedback(float p_predelay_fb) { params.predelay_fb = p_predelay_fb; } void Reverb::set_highpass(float p_frq) { if (p_frq > 1) p_frq = 1; if (p_frq < 0) p_frq = 0; params.hpf = p_frq; } void Reverb::set_extra_spread(float p_spread) { params.extra_spread = p_spread; } void Reverb::set_mix_rate(float p_mix_rate) { params.mix_rate = p_mix_rate; configure_buffers(); } void Reverb::set_extra_spread_base(float p_sec) { params.extra_spread_base = p_sec; configure_buffers(); } void Reverb::configure_buffers() { clear_buffers(); //clear if necessary for (int i = 0; i < MAX_COMBS; i++) { Comb &c = comb[i]; c.extra_spread_frames = lrint(params.extra_spread_base * params.mix_rate); int len = lrint(comb_tunings[i] * params.mix_rate) + c.extra_spread_frames; if (len < 5) len = 5; //may this happen? c.buffer = new float[len]; c.pos = 0; for (int j = 0; j < len; j++) c.buffer[j] = 0; c.size = len; } for (int i = 0; i < MAX_ALLPASS; i++) { AllPass &a = allpass[i]; a.extra_spread_frames = lrint(params.extra_spread_base * params.mix_rate); int len = lrint(allpass_tunings[i] * params.mix_rate) + a.extra_spread_frames; if (len < 5) len = 5; //may this happen? 
a.buffer = new float[len]; a.pos = 0; for (int j = 0; j < len; j++) a.buffer[j] = 0; a.size = len; } echo_buffer_size = (int)(((float)MAX_ECHO_MS / 1000.0) * params.mix_rate + 1.0); echo_buffer = new float[echo_buffer_size]; for (int i = 0; i < echo_buffer_size; i++) { echo_buffer[i] = 0; } echo_buffer_pos = 0; } void Reverb::update_parameters() { //more freeverb derived constants static const float room_scale = 0.28f; static const float room_offset = 0.7f; for (int i = 0; i < MAX_COMBS; i++) { Comb &c = comb[i]; c.feedback = room_offset + params.room_size * room_scale; if (c.feedback < room_offset) c.feedback = room_offset; else if (c.feedback > (room_offset + room_scale)) c.feedback = (room_offset + room_scale); float auxdmp = params.damp / 2.0 + 0.5; //only half the range (0.5 .. 1.0 is enough) auxdmp *= auxdmp; c.damp = expf(-2.0 * M_PI * auxdmp * 10000 / params.mix_rate); // 0 .. 10khz } } void Reverb::clear_buffers() { if (echo_buffer) { delete[] echo_buffer; } for (int i = 0; i < MAX_COMBS; i++) { if (comb[i].buffer) { delete[] comb[i].buffer; } comb[i].buffer = 0; } for (int i = 0; i < MAX_ALLPASS; i++) { if (allpass[i].buffer) { delete[] allpass[i].buffer; } allpass[i].buffer = 0; } } Reverb::Reverb() { params.room_size = 0.8; params.damp = 0.5; params.dry = 1.0; params.wet = 0.0; params.mix_rate = 44100; params.extra_spread_base = 0; params.extra_spread = 1.0; params.predelay = 150; params.predelay_fb = 0.4; params.hpf = 0; hpf_h1 = 0; hpf_h2 = 0; input_buffer = new float[INPUT_BUFFER_MAX_SIZE]; echo_buffer = 0; configure_buffers(); update_parameters(); } Reverb::~Reverb() { delete[] input_buffer; clear_buffers(); } zytrax-master/effects/internal/reverb.h000066400000000000000000000031671347722000700206120ustar00rootroot00000000000000#ifndef REVERB_H #define REVERB_H #include "globals/typedefs.h" class Reverb { public: enum { INPUT_BUFFER_MAX_SIZE = 1024, }; private: enum { MAX_COMBS = 8, MAX_ALLPASS = 4, MAX_ECHO_MS = 500 }; static const float 
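`Reverb::update_parameters()` maps `room_size` (0..1) into comb feedback using the freeverb-derived constants `room_offset = 0.7` and `room_scale = 0.28`, clamping the result to [0.7, 0.98]; feedback closer to 1 means a slower-decaying tail. The mapping in isolation:

```cpp
#include <cassert>
#include <cmath>

// Freeverb-derived constants from Reverb::update_parameters().
static const float ROOM_SCALE = 0.28f;
static const float ROOM_OFFSET = 0.7f;

// Maps room_size (nominally 0..1) to comb feedback in [0.7, 0.98], clamped
// so out-of-range room sizes cannot push the combs unstable.
float comb_feedback(float room_size) {
	float f = ROOM_OFFSET + room_size * ROOM_SCALE;
	if (f < ROOM_OFFSET)
		f = ROOM_OFFSET;
	else if (f > ROOM_OFFSET + ROOM_SCALE)
		f = ROOM_OFFSET + ROOM_SCALE;
	return f;
}
```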
comb_tunings[MAX_COMBS]; static const float allpass_tunings[MAX_ALLPASS]; struct Comb { int size; float *buffer; float feedback; float damp; //lowpass float damp_h; //history int pos; int extra_spread_frames; Comb() { size = 0; buffer = 0; feedback = 0; damp_h = 0; pos = 0; } }; struct AllPass { int size; float *buffer; int pos; int extra_spread_frames; AllPass() { size = 0; buffer = 0; pos = 0; } }; Comb comb[MAX_COMBS]; AllPass allpass[MAX_ALLPASS]; float *input_buffer; float *echo_buffer; int echo_buffer_size; int echo_buffer_pos; float hpf_h1, hpf_h2; struct Parameters { float room_size; float damp; float wet; float dry; float mix_rate; float extra_spread_base; float extra_spread; float predelay; float predelay_fb; float hpf; } params; void configure_buffers(); void update_parameters(); void clear_buffers(); public: void set_room_size(float p_size); void set_damp(float p_damp); void set_wet(float p_wet); void set_dry(float p_dry); void set_predelay(float p_predelay); // in ms void set_predelay_feedback(float p_predelay_fb); // in ms void set_highpass(float p_frq); void set_mix_rate(float p_mix_rate); void set_extra_spread(float p_spread); void set_extra_spread_base(float p_sec); void process(float *p_src, float *p_dst, int p_frames); Reverb(); ~Reverb(); }; #endif // REVERB_H zytrax-master/engine/000077500000000000000000000000001347722000700151575ustar00rootroot00000000000000zytrax-master/engine/SCsub000066400000000000000000000001671347722000700161250ustar00rootroot00000000000000Import('env'); Export('env'); targets=[] env.add_sources(targets,"*.cpp") env.libs+=env.Library('engine', targets); zytrax-master/engine/audio_effect.cpp000066400000000000000000000052311347722000700203010ustar00rootroot00000000000000#include "audio_effect.h" String AudioEffectProvider::scan_paths[AudioEffectProvider::MAX_SCAN_PATHS]; void AudioEffectProvider::set_scan_path(int p_index, const String &p_scan) { ERR_FAIL_INDEX(p_index, MAX_SCAN_PATHS); scan_paths[p_index] = p_scan; } 
String AudioEffectProvider::get_scan_path(int p_index) { ERR_FAIL_INDEX_V(p_index, MAX_SCAN_PATHS, String()); return scan_paths[p_index]; } void ControlPort::set_normalized(float p_val) { p_val *= get_max() - get_min(); p_val += get_min(); set(p_val); } float ControlPort::get_normalized() const { float v = get(); v -= get_min(); v /= get_max() - get_min(); return v; } String ControlPort::get_value_as_text() const { return String::num(get()); } ControlPort::Hint ControlPort::get_hint() const { return HINT_RANGE; } void ControlPort::ui_changed_notify() { if (changed_callback) { changed_callback(changed_userdata); } } void ControlPort::set_ui_changed_callback(UIChangedCallback p_callback, void *p_userdata) { changed_callback = p_callback; changed_userdata = p_userdata; } ControlPort::ControlPort() { changed_callback = NULL; changed_userdata = NULL; command = 0; } ControlPort::~ControlPort() { } void ControlPort::set_command(char p_command) { command = p_command; } char ControlPort::get_command() const { return command; } void AudioEffect::set_skip(bool p_skip) { skip = p_skip; } bool AudioEffect::is_skipped() const { return skip; } AudioEffect::AudioEffect() { skip = false; } AudioEffect::~AudioEffect() { } void AudioEffectFactory::add_audio_effect(AudioEffectInfo p_info) { audio_effects.push_back(p_info); } int AudioEffectFactory::get_audio_effect_count() { return audio_effects.size(); } const AudioEffectInfo *AudioEffectFactory::get_audio_effect(int p_idx) { ERR_FAIL_INDEX_V(p_idx, audio_effects.size(), NULL); return &audio_effects[p_idx]; } AudioEffect *AudioEffectFactory::instantiate_effect(int p_idx) { ERR_FAIL_INDEX_V(p_idx, audio_effects.size(), NULL); for (int i = 0; i < providers.size(); i++) { if (providers[i]->get_id() == audio_effects[p_idx].provider_id) { return providers[i]->instantiate_effect(&audio_effects[p_idx]); } } return NULL; } void AudioEffectFactory::add_provider(AudioEffectProvider *p_provider) { providers.push_back(p_provider); } void 
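`ControlPort::set_normalized()` and `get_normalized()` convert between a UI-facing 0..1 slider value and the port's real `[min, max]` range with a plain affine map. The two directions as free functions (assuming `max > min`, as the port definitions here guarantee):

```cpp
#include <cassert>

// ControlPort::set_normalized(): scale a 0..1 value into [min, max].
float denormalize(float v01, float min, float max) {
	return v01 * (max - min) + min;
}

// ControlPort::get_normalized(): the inverse mapping back to 0..1.
float normalize(float value, float min, float max) {
	return (value - min) / (max - min);
}
```

For the reverb's predelay port (`min = 20`, `max = 500` ms), a slider at 0.5 lands on 260 ms.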
AudioEffectFactory::rescan_effects(AudioEffectProvider::ScanCallback p_callback, void *p_userdata) { //remove non internal effects for (int i = 0; i < audio_effects.size(); i++) { if (!audio_effects[i].internal) { audio_effects.remove(i); i--; } } //audio_effects.clear(); for (int i = 0; i < providers.size(); i++) { providers[i]->scan_effects(this, p_callback, p_userdata); } } zytrax-master/engine/audio_effect.h000066400000000000000000000132061347722000700177470ustar00rootroot00000000000000#ifndef AUDIO_EFFECT_H #define AUDIO_EFFECT_H #include "dsp/frame.h" #include "dsp/midi_event.h" #include "globals/json.h" #include "rstring.h" #include "vector.h" class AudioEffect; class AudioEffectFactory; struct AudioEffectInfo; class AudioEffectProvider { public: enum { MAX_SCAN_PATHS = 256 }; private: static String scan_paths[MAX_SCAN_PATHS]; public: typedef void (*ScanCallback)(const String &, void *); static void set_scan_path(int p_index, const String &p_path); static String get_scan_path(int p_index); virtual AudioEffect *instantiate_effect(const AudioEffectInfo *p_info) = 0; virtual void scan_effects(AudioEffectFactory *p_factory, ScanCallback p_callback, void *p_userdata) = 0; virtual String get_id() const = 0; virtual String get_name() const = 0; }; struct AudioEffectInfo { String caption; ///< Caption of the Node (for node browser menu) String description; ///< Short description of the node (for node browser / node info ) String author; ///< plugin author String unique_ID; ///< Unique String ID of node (so it is reconizable when saving) String provider_caption; String category; ///< String to categorize this node (for node browser) String icon_string; ///< icon string (to look up for an bundled icon - internal nodes) String version; bool synth; bool has_ui; bool internal; String provider_id; String path; AudioEffectInfo() { has_ui = false; synth = false; internal = false; } }; struct PortRangeHint { float min, max, def; String max_str, min_str; }; class ControlPort 
{ public: typedef void (*UIChangedCallback)(void *); private: UIChangedCallback changed_callback; void *changed_userdata; char command; public: enum Hint { HINT_RANGE, ///< just a range, trust min and max HINT_RANGE_NORMALIZED, // just a range, but min and max are normalized, so ask text somewhere else HINT_TOGGLE, ///< valid values 0.0f , 1.0f HINT_ENUM, ///< asking integer values to get_Value_as_text will return them }; virtual String get_name() const = 0; virtual String get_identifier() const = 0; virtual float get_min() const = 0; virtual float get_max() const = 0; virtual float get_step() const = 0; virtual float get() const = 0; virtual bool is_visible() const = 0; virtual void set(float p_val) = 0; //set, optionally make the value the default too virtual void set_normalized(float p_val); // set in range 0-1, internally converted to range virtual float get_normalized() const; virtual String get_value_as_text() const; virtual Hint get_hint() const; void ui_changed_notify(); void set_ui_changed_callback(UIChangedCallback p_callback, void *p_userdata); void set_command(char p_command); char get_command() const; ControlPort(); virtual ~ControlPort(); }; class AudioEffect { bool skip; public: struct Event { enum Type { TYPE_NOTE, TYPE_NOTE_OFF, TYPE_AFTERTOUCH, TYPE_BPM, }; enum { NOTE_MAX = 0x7F //for note }; Type type; uint32_t param8; //for note, BPM float paramf; // for note volume (0-1) uint32_t offset; //offset in samples (for anything that supports it) }; //process virtual bool has_secondary_input() const = 0; virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) = 0; virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) = 0; virtual void set_process_block_size(int p_size) = 0; virtual void set_sampling_rate(int p_hz) = 0; //info virtual String get_name() const = 0; 
virtual String get_unique_id() const = 0;
	virtual String get_provider_id() const = 0;

	virtual int get_control_port_count() const = 0;
	virtual ControlPort *get_control_port(int p_port) = 0;

	virtual void reset() = 0;

	/* Load/Save */

	virtual JSON::Node to_json() const = 0;
	virtual Error from_json(const JSON::Node &node) = 0;

	void set_skip(bool p_skip);
	bool is_skipped() const;

	AudioEffect();
	virtual ~AudioEffect();
};

class ControlPortDefault : public ControlPort {
public:
	String name;
	String identifier;
	float min, max, step;
	float value;
	Hint hint;
	bool visible;
	bool was_set;
	Vector<String> enum_values;

	virtual String get_name() const { return name; }
	virtual String get_identifier() const { return identifier; }
	virtual float get_min() const { return min; }
	virtual float get_max() const { return max; }
	virtual float get_step() const { return step; }
	virtual float get() const { return value; }
	virtual void set(float p_val) {
		value = p_val;
		was_set = true;
	}
	virtual Hint get_hint() const { return hint; }
	virtual bool is_visible() const { return visible; }

	virtual String get_value_as_text() const {
		if (hint == HINT_RANGE || hint == HINT_RANGE_NORMALIZED) {
			return String::num(value);
		} else if (hint == HINT_ENUM) {
			int v = int(value);
			if (v >= 0 && v < enum_values.size()) {
				return enum_values[v];
			} else {
				return String::num(v);
			}
		} else {
			if (value > 0.5) {
				return "Enabled";
			} else {
				return "Disabled";
			}
		}
	}

	ControlPortDefault() {
		value = 0;
		hint = HINT_RANGE;
		min = 0;
		max = 1;
		step = 1;
		visible = true;
		was_set = false;
	}
};

class AudioEffectFactory {
	Vector<AudioEffectInfo> audio_effects;
	Vector<AudioEffectProvider *> providers;

public:
	void add_audio_effect(AudioEffectInfo p_info);
	int get_audio_effect_count();
	const AudioEffectInfo *get_audio_effect(int p_idx);
	AudioEffect *instantiate_effect(int p_idx);

	void add_provider(AudioEffectProvider *p_provider);
	void rescan_effects(AudioEffectProvider::ScanCallback p_callback = NULL, void *p_userdata = NULL);
};

#endif // AUDIO_EFFECT_H
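The MIDI effect implementation that follows (audio_effect_midi.cpp) keeps pitch bend as a signed offset (`current_pitch_bend`) and emits it on the wire as `CLAMP(current_pitch_bend + PITCH_BEND_MAX, 0, 16383)`, i.e. a 14-bit value centered on 8192, scaled by the receiver's configured bend range in semitones. A minimal standalone sketch of that encoding (the helper name and constant names here are illustrative, not part of the zytrax sources):

```cpp
#include <algorithm>

// 14-bit MIDI pitch bend: wire values 0..16383, center (no bend) at 8192.
// Mirrors the CLAMP(offset + PITCH_BEND_MAX, 0, 16383) pattern used below.
static const int BEND_CENTER = 8192;
static const int BEND_TOP = 16383;

// Convert a signed semitone offset into the 14-bit pitch-bend value, given
// the receiver's bend range in semitones (configured via RPN 0 elsewhere).
int semitones_to_bend(float semitones, int bend_range_semitones) {
	int offset = int(semitones * float(BEND_CENTER) / float(bend_range_semitones));
	return std::min(std::max(offset + BEND_CENTER, 0), BEND_TOP);
}
```

With the default two-semitone range, `semitones_to_bend(1.0f, 2)` gives 12288 (halfway to the top) and `semitones_to_bend(-2.0f, 2)` gives 0 (the bottom of the range).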
zytrax-master/engine/audio_effect_midi.cpp000066400000000000000000000577051347722000700213200ustar00rootroot00000000000000#include "audio_effect_midi.h" #include "globals/base64.h" #include //semitones/sec #define BEND_BASE_SPEED 100 #define BEND_VIBRATO_MAX_RATE_HZ 10.0 #define BEND_VIBRATO_MAX_DEPTH_SEMITONES 1.0 #define SLIDE_BASE_SPEED 100 AudioEffectMIDI::MIDIEventStamped *AudioEffectMIDI::_process_midi_events(const Event *p_events, int p_event_count, float p_time, int &r_stamped_event_count) { int idx = 0; //smart parameters if (reset_pending) { if (custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].visible || custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].get_command()) { process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 127; process_events[idx].event.control.parameter = 127; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 65; process_events[idx].event.control.parameter = 0; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; } update_pitch_bend_range = true; { process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND; process_events[idx].event.pitch_bend.bend = PITCH_BEND_MAX; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; } current_pitch_bend = 0; extra_pitch_bend = 0; bend_portamento_last_note = -1; bend_portamento_target_note = -1; prev_bend_portamento = false; prev_bend_slide = false; bp_remap_note_off_from = -1; bp_remap_note_off_to = -1; prev_bend_vibrato = 0; reset_pending = false; } if (update_pitch_bend_range) { //pitch bend range process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 101; process_events[idx].event.control.parameter = 0; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; 
process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 100; process_events[idx].event.control.parameter = 0; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 6; process_events[idx].event.control.parameter = pitch_bend_range; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 38; process_events[idx].event.control.parameter = 0; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; update_pitch_bend_range = false; } //used by bend portamento bool ignore_note_off = false; bool ignore_note_on = false; if (custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].visible || custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].get_command()) { //use smart bend float bend_portamento = custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].get_normalized(); if (bend_portamento == 0) { bool found_note_or_off = false; for (int i = 0; i < p_event_count; i++) { if (p_events[i].type == Event::TYPE_NOTE) { bend_portamento_last_note = p_events[i].param8; found_note_or_off = true; } if (p_events[i].type == Event::TYPE_NOTE_OFF) { found_note_or_off = true; } } bend_portamento_target_note = -1; if (prev_bend_portamento && found_note_or_off) { //send event current_pitch_bend = 0; process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND; process_events[idx].event.pitch_bend.bend = CLAMP(current_pitch_bend + PITCH_BEND_MAX, 0, 16383); process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; //reset pitch bend prev_bend_portamento = false; } } else { //exclusive if (custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].was_set) { custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].value = 0; custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].value = 0; 
custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].was_set = false;
			}
			bool found_note = false;
			bool found_note_off = false;
			for (int i = 0; i < p_event_count; i++) {
				if (p_events[i].type == Event::TYPE_NOTE) {
					bend_portamento_target_note = p_events[i].param8;
					found_note = true;
				}
				if (p_events[i].type == Event::TYPE_NOTE_OFF) {
					found_note_off = true;
				}
			}
			if (found_note && found_note_off) {
				//combo (one note ends, another starts)
				ignore_note_off = true; //do not emit the note-off, the old note keeps sounding while it bends
				if (bend_portamento_last_note != -1) {
					bp_remap_note_off_from = bend_portamento_target_note;
					bp_remap_note_off_to = bend_portamento_last_note;
				}
			}
			if (found_note) {
				ignore_note_on = true;
			}
			if (bend_portamento_last_note != -1 && bend_portamento_target_note != -1) {
				//semitone difference
				float note_diff = bend_portamento_target_note - bend_portamento_last_note;
				//convert pitch bend to semitones (so the slide speed is the same regardless of bend range)
				float pitch_bend_semitones = float(current_pitch_bend) * pitch_bend_range / PITCH_BEND_MAX;
				float diff = note_diff - pitch_bend_semitones;
				if (diff != 0) {
					pitch_bend_semitones += MIN(ABS(diff) * p_time * bend_portamento * BEND_BASE_SPEED, ABS(diff)) * SIGN(diff);
					current_pitch_bend = int(pitch_bend_semitones * PITCH_BEND_MAX / pitch_bend_range);
					//send event
					process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND;
					process_events[idx].event.pitch_bend.bend = CLAMP(current_pitch_bend + extra_pitch_bend + PITCH_BEND_MAX, 0, 16383);
					process_events[idx].event.channel = midi_channel;
					process_events[idx].frame = 0;
					idx++;
				}
				prev_bend_portamento = true;
			}
		}
		custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].was_set = false;
	}

	if (custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].visible || custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].get_command()) {
		float bend_slide = custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].get_normalized();
		if (bend_slide > 0.0 || prev_bend_slide) {
			if (bend_slide > 0 && custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].was_set) {
				//they are exclusive
custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].value = 0;
			}
			bool found_note = false;
			for (int i = 0; i < p_event_count; i++) {
				if (p_events[i].type == Event::TYPE_NOTE) {
					found_note = true;
				}
			}
			if (found_note) {
				current_pitch_bend = 0;
				prev_bend_slide = false;
			} else {
				double decrease = p_time * SLIDE_BASE_SPEED * bend_slide * double(PITCH_BEND_MAX / pitch_bend_range);
				current_pitch_bend -= int(decrease);
				if (current_pitch_bend < PITCH_BEND_MIN) {
					current_pitch_bend = PITCH_BEND_MIN;
				}
				prev_bend_slide = true; //keep sliding on the next block
			}
			process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND;
			process_events[idx].event.pitch_bend.bend = CLAMP(current_pitch_bend + PITCH_BEND_MAX, 0, 16383);
			process_events[idx].event.channel = midi_channel;
			process_events[idx].frame = 0;
			idx++;
		}
		custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].was_set = false;
	}

	if (custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].visible || custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].get_command()) {
		float bend_slide = custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].get_normalized();
		if (bend_slide > 0.0 || prev_bend_slide) {
			if (bend_slide > 0 && custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].was_set) {
				//they are exclusive
				custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].value = 0;
			}
			bool found_note = false;
			for (int i = 0; i < p_event_count; i++) {
				if (p_events[i].type == Event::TYPE_NOTE) {
					found_note = true;
				}
			}
			if (found_note) {
				current_pitch_bend = 0;
				prev_bend_slide = false;
			} else {
				double increase = p_time * SLIDE_BASE_SPEED * bend_slide * double(PITCH_BEND_MAX / pitch_bend_range);
				current_pitch_bend += int(increase);
				if (current_pitch_bend > PITCH_BEND_MAX) { //sliding up, so clamp at the top of the range
					current_pitch_bend = PITCH_BEND_MAX;
				}
				prev_bend_slide = true; //keep sliding on the next block
			}
			process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND;
			process_events[idx].event.pitch_bend.bend = CLAMP(current_pitch_bend + PITCH_BEND_MAX, 0, 16383);
			process_events[idx].event.channel = midi_channel;
			process_events[idx].frame = 0;
			idx++;
		}
		custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].was_set =
false; } if (custom_ports[CUSTOM_MIDI_BEND_VIBRATO].visible || custom_ports[CUSTOM_MIDI_BEND_VIBRATO].get_command()) { int bend_vibrato = int(custom_ports[CUSTOM_MIDI_BEND_VIBRATO].get()); if (bend_vibrato) { if (prev_bend_vibrato == 0) { //turn on bend_vibrato_time = 0; } float depth = (float(bend_vibrato % 10) / 9.0) * BEND_VIBRATO_MAX_DEPTH_SEMITONES; float rate = (float(bend_vibrato / 10) / 9.0) * BEND_VIBRATO_MAX_RATE_HZ; extra_pitch_bend = (sin(bend_vibrato_time) * depth) * float(PITCH_BEND_MAX) / pitch_bend_range; bend_vibrato_time += p_time * rate * 3.14159265359 * 2.0; //send event process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND; process_events[idx].event.pitch_bend.bend = CLAMP(current_pitch_bend + extra_pitch_bend + PITCH_BEND_MAX, 0, 16383); process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; } else { if (prev_bend_vibrato) { //send event extra_pitch_bend = 0; process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND; process_events[idx].event.pitch_bend.bend = CLAMP(current_pitch_bend + extra_pitch_bend + PITCH_BEND_MAX, 0, 16383); process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; } } prev_bend_vibrato = bend_vibrato; } { if ((custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].visible || custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].get_command()) && custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].was_set) { float value = custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].get(); if (value == 0) { process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 127; process_events[idx].event.control.parameter = 127; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 65; process_events[idx].event.control.parameter = 0; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; } else { 
process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 126; process_events[idx].event.control.parameter = 127; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 65; process_events[idx].event.control.parameter = 127; process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = 5; process_events[idx].event.control.parameter = CLAMP(int(value * 127), 0, 127); process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; } custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].was_set = false; } if ((custom_ports[CUSTOM_MIDI_PITCH_BEND].visible || custom_ports[CUSTOM_MIDI_PITCH_BEND].get_command()) && custom_ports[CUSTOM_MIDI_PITCH_BEND].was_set) { process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND; current_pitch_bend = int(custom_ports[CUSTOM_MIDI_PITCH_BEND].value); process_events[idx].event.pitch_bend.bend = CLAMP(current_pitch_bend + extra_pitch_bend + PITCH_BEND_MAX, 0, 16383); process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; custom_ports[CUSTOM_MIDI_PITCH_BEND].was_set = false; } if ((custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].visible || custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].get_command()) && custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].was_set) { process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND; current_pitch_bend = int(custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].value); process_events[idx].event.pitch_bend.bend = CLAMP(current_pitch_bend + extra_pitch_bend + PITCH_BEND_MAX, 0, 16383); process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].was_set = false; } if ((custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].visible || 
custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].get_command()) && custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].was_set) { process_events[idx].event.type = MIDIEvent::MIDI_PITCH_BEND; current_pitch_bend = -int(custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].value); process_events[idx].event.pitch_bend.bend = CLAMP(current_pitch_bend + extra_pitch_bend + PITCH_BEND_MAX, 0, 16383); process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].was_set = false; } if ((custom_ports[CUSTOM_MIDI_AFTERTOUCH].visible || custom_ports[CUSTOM_MIDI_AFTERTOUCH].get_command()) && custom_ports[CUSTOM_MIDI_AFTERTOUCH].was_set) { process_events[idx].event.type = MIDIEvent::MIDI_AFTERTOUCH; process_events[idx].event.aftertouch.pressure = CLAMP(int(custom_ports[CUSTOM_MIDI_AFTERTOUCH].value), 0, 127); process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; custom_ports[CUSTOM_MIDI_AFTERTOUCH].was_set = false; } if ((custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].visible || custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].get_command()) && custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].was_set) { process_events[idx].event.type = MIDIEvent::MIDI_PATCH_SELECT; process_events[idx].event.patch.index = CLAMP(int(custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].value), 0, 127); process_events[idx].event.channel = midi_channel; process_events[idx].frame = 0; idx++; custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].was_set = false; } } //midi macros //TODO //CCs for (int i = 0; i < MIDIEvent::CC_MAX; i++) { if (idx == INTERNAL_MIDI_EVENT_BUFFER_SIZE) { break; } if (!cc_ports[i].visible && !cc_ports[i].get_command()) { continue; //not enabled, continue } if (cc_ports[i].was_set) { process_events[idx].event.type = MIDIEvent::MIDI_CONTROLLER; process_events[idx].event.control.index = MIDIEvent::cc_indices[i]; process_events[idx].event.control.parameter = uint8_t(CLAMP(cc_ports[i].value, 0.0, 127.0)); process_events[idx].event.channel = midi_channel; 
process_events[idx].frame = 0; cc_ports[i].was_set = false; idx++; } } //events for (int i = 0; i < p_event_count; i++) { if (idx == INTERNAL_MIDI_EVENT_BUFFER_SIZE) { break; } switch (p_events[i].type) { case Event::TYPE_NOTE: { if (ignore_note_on) { continue; } process_events[idx].event.type = MIDIEvent::MIDI_NOTE_ON; process_events[idx].event.note.note = p_events[i].param8; process_events[idx].event.note.velocity = uint8_t(CLAMP(p_events[i].paramf * 127.0, 0.0, 127.0)); } break; case Event::TYPE_NOTE_OFF: { if (ignore_note_off) { continue; } process_events[idx].event.type = MIDIEvent::MIDI_NOTE_OFF; if (bp_remap_note_off_from == p_events[i].param8) { process_events[idx].event.note.note = bp_remap_note_off_to; bp_remap_note_off_from = -1; bp_remap_note_off_to = -1; } else { process_events[idx].event.note.note = p_events[i].param8; } process_events[idx].event.note.velocity = uint8_t(CLAMP(p_events[i].paramf * 127.0, 0.0, 127.0)); } break; case Event::TYPE_AFTERTOUCH: { process_events[idx].event.type = MIDIEvent::MIDI_NOTE_PRESSURE; process_events[idx].event.note.note = p_events[i].param8; process_events[idx].event.note.velocity = uint8_t(CLAMP(p_events[i].paramf * 127.0, 0.0, 127.0)); } break; case Event::TYPE_BPM: { process_events[idx].event.type = MIDIEvent::SEQ_TEMPO; process_events[idx].event.tempo.tempo = p_events[i].param8; } break; }; process_events[idx].event.channel = midi_channel; process_events[idx].frame = p_events[i].offset; idx++; } r_stamped_event_count = idx; return process_events; } int AudioEffectMIDI::get_control_port_count() const { return TOTAL_INTERNAL_PORTS + _get_internal_control_port_count(); } ControlPort *AudioEffectMIDI::get_control_port(int p_port) { if (p_port < CUSTOM_MIDI_MAX) { return &custom_ports[p_port]; } p_port -= CUSTOM_MIDI_MAX; if (p_port < MIDIEvent::CC_MAX) { return &cc_ports[p_port]; } p_port -= MIDIEvent::CC_MAX; return _get_internal_control_port(p_port); } void AudioEffectMIDI::set_cc_visible(MIDIEvent::CC p_cc, bool 
p_visible) {
	cc_ports[p_cc].visible = p_visible;
}

bool AudioEffectMIDI::is_cc_visible(MIDIEvent::CC p_cc) const {
	return cc_ports[p_cc].visible;
}

void AudioEffectMIDI::set_midi_channel(int p_channel) {
	ERR_FAIL_INDEX(p_channel, 16);
	midi_channel = p_channel;
}

int AudioEffectMIDI::get_midi_channel() const {
	return midi_channel;
}

void AudioEffectMIDI::set_midi_macro(int p_macro, const Vector<uint8_t> &p_bytes) {
	ERR_FAIL_INDEX(p_macro, CUSTOM_MIDI_MACRO_MAX);
	midi_macro[p_macro] = p_bytes;
}

Vector<uint8_t> AudioEffectMIDI::get_midi_macro(int p_macro) const {
	ERR_FAIL_INDEX_V(p_macro, CUSTOM_MIDI_MACRO_MAX, Vector<uint8_t>());
	return midi_macro[p_macro];
}

JSON::Node AudioEffectMIDI::to_json() const {
	JSON::Node node = JSON::object();
	JSON::Node enabled_ccs = JSON::array();
	for (int i = 0; i < MIDIEvent::CC_MAX; i++) {
		if (cc_ports[i].visible) {
			enabled_ccs.add(MIDIEvent::cc_names[i]);
		}
	}
	node.add("enabled_ccs", enabled_ccs);

	JSON::Node macros = JSON::array();
	for (int i = 0; i < CUSTOM_MIDI_MACRO_MAX; i++) {
		if (midi_macro[i].size()) {
			JSON::Node macro = JSON::object();
			macro.add("index", i);
			macro.add("macro", base64_encode(midi_macro[i]));
			macros.add(macro);
		}
	}
	if (macros.getCount()) {
		node.add("macros", macros);
	}

	node.add("midi_channel", midi_channel);
	node.add("pitch_bend_range", pitch_bend_range);

	JSON::Node effect_node = _internal_to_json();
	node.add("data", effect_node);
	return node;
}

Error AudioEffectMIDI::from_json(const JSON::Node &node) {
	JSON::Node enabled_ccs = node.get("enabled_ccs");
	for (int i = 0; i < enabled_ccs.getCount(); i++) {
		std::string ccname = enabled_ccs.get(i).toString();
		for (int j = 0; j < MIDIEvent::CC_MAX; j++) {
			if (ccname == MIDIEvent::cc_names[j]) {
				cc_ports[j].visible = true;
				break;
			}
		}
	}
	if (node.has("macros")) {
		JSON::Node macros = node.get("macros");
		for (int i = 0; i < macros.getCount(); i++) {
			JSON::Node macro = macros.get(i);
			int index = macro.get("index").toInt();
			std::string b64 = macro.get("macro").toString();
			ERR_CONTINUE(index < 0 ||
index >= CUSTOM_MIDI_MACRO_MAX); midi_macro[index] = base64_decode(b64); } } midi_channel = node.get("midi_channel").toInt(); pitch_bend_range = node.get("pitch_bend_range").toInt(); if (pitch_bend_range < 2) { pitch_bend_range = 2; } return _internal_from_json(node.get("data")); } void AudioEffectMIDI::set_pitch_bend_range(int p_semitones) { pitch_bend_range = p_semitones; update_pitch_bend_range = true; } int AudioEffectMIDI::get_pitch_bend_range() const { return pitch_bend_range; } void AudioEffectMIDI::_reset_midi() { reset_pending = true; custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].value = 0; custom_ports[CUSTOM_MIDI_BEND_VIBRATO].value = 0; custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].value = 0; custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].value = 0; custom_ports[CUSTOM_MIDI_PITCH_BEND].value = 0; custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].value = 0; custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].value = 0; custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].value = 0; custom_ports[CUSTOM_MIDI_AFTERTOUCH].value = 0; } AudioEffectMIDI::AudioEffectMIDI() { midi_channel = 0; //set built in control ports custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].name = "Bend Portamento"; custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].identifier = "bend_portamento"; custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].max = 1.0; custom_ports[CUSTOM_MIDI_BEND_PORTAMENTO].step = 0.001; custom_ports[CUSTOM_MIDI_BEND_VIBRATO].name = "Bend Vibrato"; custom_ports[CUSTOM_MIDI_BEND_VIBRATO].identifier = "bend_vibrato"; custom_ports[CUSTOM_MIDI_BEND_VIBRATO].max = 99; custom_ports[CUSTOM_MIDI_BEND_VIBRATO].step = 1; custom_ports[CUSTOM_MIDI_PITCH_BEND].name = "Pitch Bend"; custom_ports[CUSTOM_MIDI_PITCH_BEND].identifier = "pitch_bend"; custom_ports[CUSTOM_MIDI_PITCH_BEND].value = 0; custom_ports[CUSTOM_MIDI_PITCH_BEND].min = -8192; custom_ports[CUSTOM_MIDI_PITCH_BEND].max = 8191; custom_ports[CUSTOM_MIDI_PITCH_BEND].step = 1; custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].name = "Bend Slide Up"; 
custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].identifier = "bend_slide_up"; custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].value = 0; custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].min = 0; custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].max = 1; custom_ports[CUSTOM_MIDI_BEND_SLIDE_UP].step = 0.001; custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].name = "Bend Slide Down"; custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].identifier = "bend_slide_down"; custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].value = 0; custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].min = 0; custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].max = 1; custom_ports[CUSTOM_MIDI_BEND_SLIDE_DOWN].step = 0.001; custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].name = "Pitch Bend Up"; custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].identifier = "pitch_bend_up"; custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].value = 0; custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].min = 0; custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].max = 8191; custom_ports[CUSTOM_MIDI_PITCH_BEND_UP].step = 1; custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].name = "Pitch Bend Down"; custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].identifier = "pitch_bend_down"; custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].value = 0; custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].min = 0; custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].max = 8192; custom_ports[CUSTOM_MIDI_PITCH_BEND_DOWN].step = 1; custom_ports[CUSTOM_MIDI_AFTERTOUCH].name = "Aftertouch"; custom_ports[CUSTOM_MIDI_AFTERTOUCH].identifier = "aftertouch"; custom_ports[CUSTOM_MIDI_AFTERTOUCH].value = 0; custom_ports[CUSTOM_MIDI_AFTERTOUCH].min = 0; custom_ports[CUSTOM_MIDI_AFTERTOUCH].max = 127; custom_ports[CUSTOM_MIDI_AFTERTOUCH].step = 1; custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].name = "Smart Portamento"; custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].identifier = "smart_portamento"; custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].max = 1.0; custom_ports[CUSTOM_MIDI_SMART_PORTAMENTO].step = 0.001; custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].name = "Change Program"; custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].identifier 
= "change_program"; custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].value = 0; custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].min = 0; custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].max = 99; custom_ports[CUSTOM_MIDI_CHANGE_PROGRAM].step = 1; custom_ports[CUSTOM_MIDI_MACRO].name = "MIDI Macros"; custom_ports[CUSTOM_MIDI_MACRO].identifier = "midi_macro"; custom_ports[CUSTOM_MIDI_MACRO].max = 99; custom_ports[CUSTOM_MIDI_MACRO].step = 1; //set ports for CCs for (int i = 0; i < MIDIEvent::CC_MAX; i++) { cc_ports[i].name = String("CC: ") + MIDIEvent::cc_names[i]; cc_ports[i].identifier = String("cc_") + MIDIEvent::cc_names[i]; cc_ports[i].visible = false; cc_ports[i].max = 127; cc_ports[i].step = 1; cc_ports[i].hint = ControlPort::HINT_RANGE; } cc_ports[MIDIEvent::CC_MODULATION].visible = true; cc_ports[MIDIEvent::CC_BREATH].visible = true; cc_ports[MIDIEvent::CC_PAN].visible = true; cc_ports[MIDIEvent::CC_PAN].value = 64; cc_ports[MIDIEvent::CC_EXPRESSION].visible = true; cc_ports[MIDIEvent::CC_EXPRESSION].value = 127; pitch_bend_range = 2; bend_portamento_last_note = -1; prev_bend_vibrato = 0; reset_pending = true; current_pitch_bend = 0; extra_pitch_bend = 0; prev_bend_portamento = false; update_pitch_bend_range = true; prev_bend_slide = false; } zytrax-master/engine/audio_effect_midi.h000066400000000000000000000050061347722000700207500ustar00rootroot00000000000000#ifndef AUDIO_EFFECT_MIDI_H #define AUDIO_EFFECT_MIDI_H #include "audio_effect.h" //base for effects that use MIDI, provides parameters for common MIDI stuff //and some helpers class AudioEffectMIDI : public AudioEffect { public: enum CustomMIDIPorts { CUSTOM_MIDI_BEND_PORTAMENTO, CUSTOM_MIDI_BEND_VIBRATO, CUSTOM_MIDI_BEND_SLIDE_UP, CUSTOM_MIDI_BEND_SLIDE_DOWN, CUSTOM_MIDI_PITCH_BEND, CUSTOM_MIDI_PITCH_BEND_UP, CUSTOM_MIDI_PITCH_BEND_DOWN, CUSTOM_MIDI_SMART_PORTAMENTO, CUSTOM_MIDI_AFTERTOUCH, CUSTOM_MIDI_CHANGE_PROGRAM, CUSTOM_MIDI_MACRO, CUSTOM_MIDI_MAX, }; enum { CUSTOM_MIDI_MACRO_MAX = 100, TOTAL_INTERNAL_PORTS = 
MIDIEvent::CC_MAX + CUSTOM_MIDI_MAX
	};

	struct MIDIEventStamped {
		uint32_t frame;
		MIDIEvent event;
	};

private:
	enum {
		INTERNAL_MIDI_EVENT_BUFFER_SIZE = 4096,
		PITCH_BEND_MAX = 8192,
		PITCH_BEND_MIN = -8192
	};

	ControlPortDefault cc_ports[MIDIEvent::CC_MAX];
	ControlPortDefault custom_ports[CUSTOM_MIDI_MAX];
	Vector<uint8_t> midi_macro[CUSTOM_MIDI_MACRO_MAX];
	MIDIEventStamped process_events[INTERNAL_MIDI_EVENT_BUFFER_SIZE];
	int midi_channel;
	int pitch_bend_range; //in semitones
	bool reset_pending;
	int current_pitch_bend;
	int extra_pitch_bend;
	/* bend vibrato stuff */
	int prev_bend_vibrato;
	float bend_vibrato_time;
	/* bend portamento stuff */
	bool prev_bend_portamento;
	bool prev_bend_slide;
	int bend_portamento_last_note;
	int bend_portamento_target_note;
	int bp_remap_note_off_from;
	int bp_remap_note_off_to;
	bool update_pitch_bend_range;

protected:
	void _reset_midi();
	MIDIEventStamped *_process_midi_events(const Event *p_events, int p_event_count, float p_time, int &r_stamped_event_count);
	virtual int _get_internal_control_port_count() const = 0;
	virtual ControlPort *_get_internal_control_port(int p_index) = 0;
	virtual JSON::Node _internal_to_json() const = 0;
	virtual Error _internal_from_json(const JSON::Node &node) = 0;

public:
	void set_cc_visible(MIDIEvent::CC p_cc, bool p_visible);
	bool is_cc_visible(MIDIEvent::CC p_cc) const;

	void set_midi_channel(int p_channel);
	int get_midi_channel() const;

	void set_midi_macro(int p_macro, const Vector<uint8_t> &p_bytes);
	Vector<uint8_t> get_midi_macro(int p_macro) const;

	virtual int get_control_port_count() const;
	virtual ControlPort *get_control_port(int p_port);

	virtual JSON::Node to_json() const;
	virtual Error from_json(const JSON::Node &node);

	void set_pitch_bend_range(int p_semitones);
	int get_pitch_bend_range() const;

	AudioEffectMIDI();
};

#endif // AUDIO_EFFECT_MIDI_H
zytrax-master/engine/audio_lock.cpp000066400000000000000000000001241347722000700177710ustar00rootroot00000000000000
#include "audio_lock.h"

void AudioLock::lock() {
}

void
AudioLock::unlock() { } zytrax-master/engine/audio_lock.h000066400000000000000000000003631347722000700174430ustar00rootroot00000000000000#ifndef AUDIO_LOCK_H #define AUDIO_LOCK_H class AudioLock { public: static void lock(); static void unlock(); AudioLock() { lock(); } ~AudioLock() { unlock(); } }; #define _AUDIO_LOCK_ AudioLock __audio_lock__; #endif // AUDIO_LOCK_H zytrax-master/engine/edit_commands.cpp000066400000000000000000000201601347722000700204700ustar00rootroot00000000000000#include "edit_commands.h" #if 0 void EditCommands::automation_set_point(Automation *p_automation, int p_pattern, Tick p_offset, float p_value) { String action_name; // action is too generic, for performance and memory, name is omitted. CommandBase *undo=NULL; if (p_automation->has_point(p_pattern,p_offset)) { float prev = p_automation->get_point(p_pattern,p_offset); undo=command(p_automation,&Automation::set_point,p_pattern,p_offset,prev); } else { undo=command(p_automation,&Automation::remove_point,p_pattern,p_offset); } add_action( action_name, command(p_automation,&Automation::set_point,p_pattern,p_offset,p_value), undo ); } void EditCommands::automation_remove_point(Automation *p_automation, int p_pattern, Tick p_offset) { String action_name; // action is too generic, for performance and memory, name is omitted. if (!p_automation->has_point(p_pattern,p_offset)) return; //removing what isn't there is pointless.. 
float prev = p_automation->get_point(p_pattern,p_offset); add_action( action_name, command(p_automation,&Automation::remove_point,p_pattern,p_offset), command(p_automation,&Automation::set_point,p_pattern,p_offset,prev) ); } void EditCommands::track_add_audio_effect(Track *p_track, AudioEffect *p_effect,int p_pos) { String action_name = "Add Effect: "+p_effect->get_info()->caption; if (p_pos<0) p_pos=p_track->get_audio_effect_count(); else if (p_pos>p_track->get_audio_effect_count()) p_pos=p_track->get_audio_effect_count(); add_action( action_name, command(p_track,&Track::add_audio_effect,p_effect,p_pos)->with_data(p_effect), command(p_track,&Track::remove_audio_effect,p_pos) ); } void EditCommands::track_remove_audio_effect(Track *p_track, int p_pos) { ERR_FAIL_INDEX(p_pos,p_track->get_audio_effect_count()); AudioEffect* effect = p_track->get_audio_effect(p_pos); String action_name = "Remove Effect: "+effect->get_info()->caption; add_action( action_name, command(p_track,&Track::remove_audio_effect,p_pos), command(p_track,&Track::add_audio_effect,effect,p_pos)->with_data(effect) ); } void EditCommands::track_add_automation(Track *p_track, Automation *p_automation,int p_pos) { String action_name = "Add Automation: "+p_automation->get_control_port()->get_name(); if (p_pos<0) p_pos=p_track->get_automation_count(); else if (p_pos>p_track->get_automation_count()) p_pos=p_track->get_automation_count(); add_action( action_name, command(p_track,&Track::add_automation,p_automation,p_pos)->with_data(p_automation), command(p_track,&Track::remove_automation,p_pos) ); } void EditCommands::track_remove_automation(Track *p_track, int p_pos) { ERR_FAIL_INDEX(p_pos,p_track->get_automation_count()); Automation* automation = p_track->get_automation(p_pos); String action_name = "Remove Automation: "+automation->get_control_port()->get_name(); add_action( action_name, command(p_track,&Track::remove_automation,p_pos), 
command(p_track,&Track::add_automation,automation,p_pos)->with_data(automation) ); } void EditCommands::song_pattern_set_beats(Song *p_song, int p_pattern, int p_beats) { String action_name = "Pattern "+String::num(p_pattern)+" Set Beats"; int old_beats = p_song->pattern_get_beats(p_pattern); add_action( action_name, command(p_song,&Song::pattern_set_beats,p_pattern,p_beats), command(p_song,&Song::pattern_set_beats,p_pattern,old_beats) ); } void EditCommands::song_pattern_set_measure(Song *p_song, int p_pattern, int p_measure) { String action_name = "Pattern "+String::num(p_pattern)+" Set Measure"; int old_measure = p_song->pattern_get_measure(p_pattern); add_action( action_name, command(p_song,&Song::pattern_set_measure,p_pattern,p_measure), command(p_song,&Song::pattern_set_measure,p_pattern,old_measure) ); } void EditCommands::song_pattern_set_bars(Song *p_song, int p_pattern, int p_bars) { String action_name = "Pattern "+String::num(p_pattern)+" Set Bars"; int old_bars = p_song->pattern_get_bars(p_pattern); add_action( action_name, command(p_song,&Song::pattern_set_bars,p_pattern,p_bars), command(p_song,&Song::pattern_set_bars,p_pattern,old_bars) ); } void EditCommands::song_order_set(Song *p_song,int p_order, int p_pattern) { String action_name = "Set Order "+String::num(p_order); int old_pattern = p_song->order_get(p_order); add_action( action_name, command(p_song,&Song::order_set,p_order,p_pattern), command(p_song,&Song::order_set,p_order,old_pattern) ); } void EditCommands::song_track_add(Song *p_song,Track *p_track,int p_pos) { String type; switch(p_track->get_type()) { case Track::TYPE_AUDIO: type="Audio"; break; case Track::TYPE_PATTERN: type="Pattern"; break; case Track::TYPE_GLOBAL: type="Global"; break; } String action_name = "Add "+type+" Track"; // validate p_pos if (p_pos<0) p_pos=p_song->get_track_count(); //by default add last else if (p_pos>p_song->get_track_count()) p_pos=p_song->get_track_count(); add_action( action_name, 
command(p_song,&Song::add_track,p_track,p_pos)->with_data(p_track), command(p_song,&Song::remove_track,p_pos) ); } void EditCommands::song_track_remove(Song *p_song,int p_pos) { String action_name = "Remove Track "+String::num(p_pos); // validate p_pos if (p_pos<0 || p_pos>=p_song->get_track_count()) return; Track *track = p_song->get_track(p_pos); ERR_FAIL_COND(!track); add_action( action_name, command(p_song,&Song::remove_track,p_pos), command(p_song,&Song::add_track,track,p_pos)->with_data(track) ); } void EditCommands::pattern_track_set_note_columns(PatternTrack *p_pattern_track,int p_columns) { String action_name = "Pattern Track Set To "+String::num(p_columns)+" Note Columns"; // validate p_columns ERR_FAIL_COND(p_columns<1 || p_columns>256); int old_columns=p_pattern_track->get_note_columns(); add_action( action_name, command(p_pattern_track,&PatternTrack::set_note_columns,p_columns), command(p_pattern_track,&PatternTrack::set_note_columns,old_columns) ); } void EditCommands::pattern_track_set_note(PatternTrack *p_pattern_track,int p_pattern, PatternPos p_pos, PatternNote p_note) { String action_name; //action too generic PatternNote old_note = p_pattern_track->get_note(p_pattern,p_pos); if (old_note==p_note) return; add_action( action_name, command(p_pattern_track,&PatternTrack::set_note,p_pattern,p_pos,p_note), command(p_pattern_track,&PatternTrack::set_note,p_pattern,p_pos,old_note) ); } void EditCommands::pattern_track_set_command_columns(PatternTrack *p_pattern_track,int p_columns) { String action_name = "Pattern Track Set To "+String::num(p_columns)+" Command Columns"; // validate p_columns ERR_FAIL_COND(p_columns<0 || p_columns>256); int old_columns=p_pattern_track->get_command_columns(); add_action( action_name, command(p_pattern_track,&PatternTrack::set_command_columns,p_columns), command(p_pattern_track,&PatternTrack::set_command_columns,old_columns) ); } void EditCommands::pattern_track_set_command(PatternTrack *p_pattern_track,int p_pattern, 
PatternPos p_pos, PatternCommand p_command) { String action_name; //action too generic // validate p_pos PatternCommand old_command = p_pattern_track->get_command(p_pattern,p_pos); if (old_command==p_command) return; add_action( action_name, command(p_pattern_track,&PatternTrack::set_command,p_pattern,p_pos,p_command), command(p_pattern_track,&PatternTrack::set_command,p_pattern,p_pos,old_command) ); } void EditCommands::pattern_track_set_swing(PatternTrack *p_pattern_track,float p_swing) { String action_name = "Pattern Track Set Swing"; // validate p_pos float old_swing=p_pattern_track->get_swing(); add_action( action_name, command(p_pattern_track,&PatternTrack::set_swing,p_swing), command(p_pattern_track,&PatternTrack::set_swing,old_swing) ); } void EditCommands::pattern_track_set_swing_step(PatternTrack *p_pattern_track,int p_swing_step) { String action_name = "Pattern Track Set Swing Step"; // validate p_pos int old_swing_step=p_pattern_track->get_swing_step(); add_action( action_name, command(p_pattern_track,&PatternTrack::set_swing_step,p_swing_step), command(p_pattern_track,&PatternTrack::set_swing_step,old_swing_step) ); } #endif zytrax-master/engine/edit_commands.h000066400000000000000000000032301347722000700201340ustar00rootroot00000000000000#ifndef EDIT_COMMANDS_H #define EDIT_COMMANDS_H #include "undo_redo.h" #include "song.h" #if 0 class EditCommands : public UndoRedo { public: /* Automation */ void automation_set_point(Automation *p_automation, int p_pattern, Tick p_offset, float p_value); void automation_remove_point(Automation *p_automation, int p_pattern, Tick p_offset); /* Track */ void track_add_audio_effect(Track *p_track, AudioEffect *p_effect,int p_pos=-1); void track_remove_audio_effect(Track *p_track, int p_pos); void track_add_automation(Track *p_track,Automation *p_automation,int p_pos=-1); void track_remove_automation(Track *p_track,int p_pos); /* Song */ void song_pattern_set_beats(Song *p_song, int p_pattern, int p_beats); void 
song_pattern_set_measure(Song *p_song, int p_pattern, int p_measure); void song_pattern_set_bars(Song *p_song, int p_pattern, int p_bars); void song_order_set(Song *p_song,int p_order, int p_pattern); void song_track_add(Song *p_song,Track *p_track,int p_pos=-1); void song_track_remove(Song *p_song,int p_pos); /* Pattern Track */ void pattern_track_set_note_columns(PatternTrack *p_pattern_track,int p_columns); void pattern_track_set_note(PatternTrack *p_pattern_track,int p_pattern, PatternPos p_pos, PatternNote p_note); void pattern_track_set_command_columns(PatternTrack *p_pattern_track,int p_columns); void pattern_track_set_command(PatternTrack *p_pattern_track,int p_pattern, PatternPos p_pos, PatternCommand p_command); void pattern_track_set_swing(PatternTrack *p_pattern_track,float p_swing); void pattern_track_set_swing_step(PatternTrack *p_pattern_track,int p_swing_step); }; #endif #endif // EDIT_COMMANDS_H
/* ==== zytrax-master/engine/midi_driver_manager.cpp ==== */
#include "midi_driver_manager.h"
void MIDIInputDriver::event(double p_stamp, const MIDIEvent &p_event) { if (MIDIDriverManager::event_callback) { MIDIDriverManager::event_callback(p_stamp, p_event); } } /////////////////////////
MIDIInputDriver *MIDIDriverManager::input_drivers[MIDIDriverManager::MAX_MIDI_DRIVERS]; int MIDIDriverManager::input_driver_count = 0; int MIDIDriverManager::input_current_driver = -1; MIDIDriverManager::EventCallback MIDIDriverManager::event_callback = NULL; void MIDIDriverManager::lock_driver() { if (input_current_driver != -1 && input_drivers[input_current_driver]->is_active()) { input_drivers[input_current_driver]->lock(); } } void MIDIDriverManager::unlock_driver() { if (input_current_driver != -1 && input_drivers[input_current_driver]->is_active()) { input_drivers[input_current_driver]->unlock(); } } bool MIDIDriverManager::init_input_driver(int p_driver) { ERR_FAIL_COND_V(p_driver != -1 && (p_driver < 0 || p_driver >= input_driver_count), false); if (input_current_driver != -1 && input_drivers[input_current_driver]->is_active()) { input_drivers[input_current_driver]->finish(); } input_current_driver = p_driver; if (input_current_driver != -1) { return input_drivers[input_current_driver]->init(); } return false; } void MIDIDriverManager::finish_input_driver() { if (input_current_driver != -1 && input_drivers[input_current_driver]->is_active()) { input_drivers[input_current_driver]->finish(); } } bool MIDIDriverManager::is_input_driver_active() { return (input_current_driver != -1 && input_drivers[input_current_driver]->is_active()); } int MIDIDriverManager::get_input_driver_count() { return input_driver_count; } MIDIInputDriver *MIDIDriverManager::get_input_driver(int p_which) { if (p_which == -1) { p_which = input_current_driver; } ERR_FAIL_INDEX_V(p_which, input_driver_count, NULL); return input_drivers[p_which]; } int MIDIDriverManager::get_current_input_driver_index() { return input_current_driver; } void MIDIDriverManager::add_input_driver(MIDIInputDriver *p_driver) { ERR_FAIL_COND(input_driver_count == MAX_MIDI_DRIVERS); input_drivers[input_driver_count++] = p_driver; } void MIDIDriverManager::set_event_callback(EventCallback p_callback) { event_callback = p_callback; }
/* ==== zytrax-master/engine/midi_driver_manager.h ==== */
#ifndef MIDI_DRIVER_MANAGER_H #define MIDI_DRIVER_MANAGER_H #include "dsp/midi_event.h" #include "globals/error_macros.h" #include "globals/rstring.h" class MIDIInputDriver { protected: void event(double p_stamp, const MIDIEvent &p_event); public: virtual void lock() = 0; ///< Lock called from UI,game,etc (non-audio) thread, to access audio variables virtual void unlock() = 0; ///< Unlock called from UI,game,etc (non-audio) thread, to access audio variables virtual String get_name() const = 0; virtual String get_id() const = 0; virtual bool is_active() = 0; virtual bool init() = 0; virtual void finish() = 0; MIDIInputDriver() {} 
virtual ~MIDIInputDriver() {} }; class MIDIDriverManager { enum { MAX_MIDI_DRIVERS = 64 }; public: typedef void (*EventCallback)(double p_stamp, const MIDIEvent &); private: static MIDIInputDriver *input_drivers[MAX_MIDI_DRIVERS]; static int input_driver_count; static int input_current_driver; friend class MIDIInputDriver; static EventCallback event_callback; public: static void lock_driver(); ///< Protect audio thread variables from ui,game,etc (non-audio) threads static void unlock_driver(); ///< Protect audio thread variables from ui,game,etc (non-audio) threads static bool init_input_driver(int p_driver = -1); ///< -1 is current static void finish_input_driver(); static bool is_input_driver_active(); static int get_input_driver_count(); static MIDIInputDriver *get_input_driver(int p_which = -1); ///< -1 is current static int get_current_input_driver_index(); static void add_input_driver(MIDIInputDriver *p_driver); static void set_event_callback(EventCallback p_callback); }; #endif // MIDI_DRIVER_MANAGER_H zytrax-master/engine/song.cpp000066400000000000000000000463351347722000700166440ustar00rootroot00000000000000#include "song.h" #include "dsp/db.h" void Song::_process_audio_step() { int buffer_len = buffer.size(); AudioFrame *buffer_ptr = &buffer[0]; //clear for (int i = 0; i < buffer_len; i++) { buffer_ptr[i] = AudioFrame(0, 0); } //clear track buffers for (int j = 0; j < tracks.size(); j++) { Track *t = tracks[j]; ERR_FAIL_COND(t->input_buffer.size() != buffer_len); ERR_FAIL_COND(t->process_buffer.size() != buffer_len); AudioFrame *input_buf_ptr = &t->input_buffer[0]; AudioFrame *process_buf_ptr = &t->process_buffer[0]; //clear for (int i = 0; i < buffer_len; i++) { input_buf_ptr[i] = AudioFrame(0, 0); process_buf_ptr[i] = AudioFrame(0, 0); } } //process events and playback if (playback.playing) { double tick_to_frame = ((60.0 / double(playback.bpm)) / double(TICKS_PER_BEAT)) * double(sampling_rate); double block_ticks = buffer_len / tick_to_frame; double 
pattern_ticks = pattern_get_beats(playback.pattern) * TICKS_PER_BEAT; double block_offset = 0.0; SwingBeatDivisor swing_divisor = pattern_get_swing_beat_divisor(playback.pattern); if (playback.pos + block_ticks >= pattern_ticks) { //went past end of pattern Tick tick_from = Tick(playback.pos); Tick tick_to = Tick(pattern_ticks); //play to end of pattern for (int i = 0; i < tracks.size(); i++) { tracks[i]->process_events(playback.pattern, 0, tick_from, tick_to, playback.bpm, swing_divisor, swing); } //remainder block_ticks = playback.pos + block_ticks - pattern_ticks; block_offset = pattern_ticks - playback.pos; playback.pos = 0; //restart on next pattern //guess next pattern if (!playback.loop_pattern) { int attempts = Song::ORDER_MAX + 2; //add two, first and loop. playback.order++; //validate order while (attempts) { attempts--; if (playback.order > ORDER_MAX) { if (!playback.can_loop) { playback.playing = false; break; } playback.order = 0; } int next_pattern = order_get(playback.order); if (next_pattern == ORDER_EMPTY) { if (!playback.can_loop) { playback.playing = false; break; } playback.order = 0; } else if (next_pattern == ORDER_SKIP) { playback.order++; } else { playback.pattern = next_pattern; break; } } if (attempts == 0) { playback.playing = false; } } } if (playback.playing && block_ticks > 0) { //still within the pattern, play the remaining region of the block Tick tick_from = Tick(playback.pos); Tick tick_to = Tick(playback.pos + block_ticks); Tick offset = Tick(block_offset); playback.pos += block_ticks; //play remaining region for (int i = 0; i < tracks.size(); i++) { tracks[i]->process_events(playback.pattern, offset, tick_from, tick_to, playback.bpm, swing_divisor, swing); } } } if (playback.range.active) { int from_track = get_event_column_track(playback.range.from_column); int to_track = get_event_column_track(playback.range.to_column); for (int i = from_track; i <= to_track; i++) { ERR_CONTINUE(i < 0 || i >= tracks.size()); int from = (i == from_track) ? 
get_event_column_event(playback.range.from_column) : 0; int to = (i == to_track) ? get_event_column_event(playback.range.to_column) : tracks[i]->get_event_column_count() - 1; tracks[i]->process_events(playback.range.pattern, 0, playback.range.from_tick, playback.range.to_tick, playback.playing ? playback.bpm : bpm, pattern_get_swing_beat_divisor(playback.range.pattern), swing, from, to); } playback.range.active = false; } for (int i = 0; i < playback.single_event_count; i++) { ERR_CONTINUE(playback.single_event[i].track < 0 || playback.single_event[i].track >= tracks.size()); tracks[playback.single_event[i].track]->add_single_event(playback.single_event[i].event); } playback.single_event_count = 0; //process audio in track-order ERR_FAIL_COND(track_process_order.size() != tracks.size()); for (int i = 0; i < track_process_order.size(); i++) { Track *t = tracks[track_process_order[i]]; //process tracks const AudioFrame *src_audio = t->process_audio_step(); //do sends int send_count = t->sends.size(); Track::Send *sends = send_count ? &t->sends[0] : NULL; for (int j = 0; j < send_count; j++) { if (sends[j].mute) { continue; } AudioFrame *dst_audio = sends[j].track == Track::SEND_SPEAKERS ? 
buffer_ptr : &tracks[sends[j].track]->process_buffer[0]; //accumulate for (int k = 0; k < buffer_len; k++) { dst_audio[k] += src_audio[k] * sends[j].amount; } } } //apply volume { float volume = db2linear(main_volume_db); for (int i = 0; i < buffer_len; i++) { buffer_ptr[i] *= volume; float energy_l = ABS(buffer_ptr[i].l); if (energy_l > peak_volume_l) { peak_volume_l = energy_l; } float energy_r = ABS(buffer_ptr[i].r); if (energy_r > peak_volume_r) { peak_volume_r = energy_r; } } } } void Song::process_audio(AudioFrame *p_output, int p_frames) { int buffer_len = buffer.size(); ERR_FAIL_COND(buffer_len == 0); const AudioFrame *buffer_ptr = &buffer[0]; for (int i = 0; i < p_frames; i++) { if (buffer_pos >= buffer_len) { _process_audio_step(); buffer_pos = 0; } p_output[i] = buffer_ptr[buffer_pos]; buffer_pos++; } } /////////////// void Song::_check_delete_pattern_config(int p_pattern) { if (!pattern_config.has(p_pattern)) return; if (pattern_config[p_pattern].beats_per_bar == DEFAULT_BEATS_PER_BAR && pattern_config[p_pattern].beats == DEFAULT_PATTERN_BEATS && pattern_config[p_pattern].swing_beat_divisor == SWING_BEAT_DIVISOR_1) { pattern_config.erase(p_pattern); } } void Song::pattern_set_beats_per_bar(int p_pattern, int p_beats_per_bar) { _AUDIO_LOCK_ if (!pattern_config.has(p_pattern)) pattern_config[p_pattern] = PatternConfig(); pattern_config[p_pattern].beats_per_bar = p_beats_per_bar; _check_delete_pattern_config(p_pattern); } int Song::pattern_get_beats_per_bar(int p_pattern) const { if (!pattern_config.has(p_pattern)) return DEFAULT_BEATS_PER_BAR; return pattern_config[p_pattern].beats_per_bar; } void Song::pattern_set_beats(int p_pattern, int p_beats) { _AUDIO_LOCK_ if (!pattern_config.has(p_pattern)) pattern_config[p_pattern] = PatternConfig(); pattern_config[p_pattern].beats = p_beats; _check_delete_pattern_config(p_pattern); } int Song::pattern_get_beats(int p_pattern) const { if (!pattern_config.has(p_pattern)) return DEFAULT_PATTERN_BEATS; return 
pattern_config[p_pattern].beats; } void Song::pattern_set_swing_beat_divisor(int p_pattern, SwingBeatDivisor p_divisor) { ERR_FAIL_INDEX(p_divisor, SWING_BEAT_DIVISOR_MAX); _AUDIO_LOCK_ if (!pattern_config.has(p_pattern)) pattern_config[p_pattern] = PatternConfig(); pattern_config[p_pattern].swing_beat_divisor = p_divisor; _check_delete_pattern_config(p_pattern); } Song::SwingBeatDivisor Song::pattern_get_swing_beat_divisor(int p_pattern) const { if (!pattern_config.has(p_pattern)) return SWING_BEAT_DIVISOR_1; return pattern_config[p_pattern].swing_beat_divisor; } void Song::order_set(int p_order, int p_pattern) { _AUDIO_LOCK_ ERR_FAIL_COND(p_order < 0 || p_order > ORDER_MAX); ERR_FAIL_COND(p_pattern < 0 || (p_pattern > MAX_PATTERN && p_pattern != ORDER_EMPTY && p_pattern != ORDER_SKIP)); if (p_pattern == ORDER_EMPTY) order_list.erase(p_order); else order_list.insert(p_order, p_pattern); } int Song::order_get(int p_order) const { if (order_list.has(p_order)) return order_list[p_order]; else return ORDER_EMPTY; } int Song::get_track_count() const { return tracks.size(); } bool Song::update_process_order() { Vector<int> references; references.resize(tracks.size()); for (int i = 0; i < tracks.size(); i++) { references[i] = i; } // add connections Vector<SortSend> sort_sends; for (int i = 0; i < tracks.size(); i++) { for (int j = 0; j < tracks[i]->get_send_count(); j++) { int send_to = tracks[i]->get_send_track(j); if (send_to != Track::SEND_SPEAKERS) { SortSend ss; ss.from = i; ss.to = send_to; sort_sends.push_back(ss); } } } //sort int max_passes = sort_sends.size() * sort_sends.size(); int pass_count = 0; bool success = true; while (pass_count < max_passes) { //using bubblesort for simplicity success = true; for (int i = 0; i < sort_sends.size(); i++) { int from_idx = sort_sends[i].from; int to_idx = sort_sends[i].to; if (references[from_idx] > references[to_idx]) { SWAP(references[from_idx], references[to_idx]); success = false; break; } } if (success) { break; } pass_count++; } track_process_order.resize(tracks.size()); for (int i = 
0; i < references.size(); i++) { track_process_order[references[i]] = i; } return success; } void Song::add_track(Track *p_track) { _AUDIO_LOCK_ //audio kill p_track->set_process_buffer_size(buffer.size()); p_track->set_sampling_rate(sampling_rate); tracks.push_back(p_track); update_process_order(); } void Song::add_track_at_pos(Track *p_track, int p_pos) { _AUDIO_LOCK_ tracks.insert(p_pos, p_track); update_process_order(); } void Song::remove_track(int p_idx) { _AUDIO_LOCK_ //audio kill... ERR_FAIL_INDEX(p_idx, tracks.size()); tracks.remove(p_idx); update_process_order(); } void Song::swap_tracks(int p_which, int p_by_which) { _AUDIO_LOCK_; SWAP(tracks[p_which], tracks[p_by_which]); for (int i = 0; i < tracks.size(); i++) { Track *t = tracks[i]; for (int j = 0; j < t->get_send_count(); j++) { int send = t->get_send_track(j); if (send == p_which) { t->set_send_track(j, p_by_which); } else if (send == p_by_which) { t->set_send_track(j, p_which); } } } update_process_order(); } Track *Song::get_track(int p_idx) { ERR_FAIL_INDEX_V(p_idx, tracks.size(), NULL); return tracks[p_idx]; } int Song::get_event_column_count() const { int cc = 0; for (int i = 0; i < tracks.size(); i++) cc += tracks[i]->get_event_column_count(); return cc; } int Song::get_event_column_track(int p_column) const { for (int i = 0; i < tracks.size(); i++) { if (p_column < tracks[i]->get_event_column_count()) { return i; } p_column -= tracks[i]->get_event_column_count(); } return -1; } int Song::get_event_column_event(int p_column) const { for (int i = 0; i < tracks.size(); i++) { if (p_column < tracks[i]->get_event_column_count()) { return p_column; } p_column -= tracks[i]->get_event_column_count(); } return -1; } int Song::get_event_column_note_column(int p_column) const { for (int i = 0; i < tracks.size(); i++) { if (p_column < tracks[i]->get_event_column_count()) { if (p_column < tracks[i]->get_column_count()) { return p_column; } else { return -1; } } p_column -= 
tracks[i]->get_event_column_count(); } return -1; } int Song::get_event_column_command_column(int p_column) const { for (int i = 0; i < tracks.size(); i++) { if (p_column < tracks[i]->get_event_column_count()) { if (p_column < tracks[i]->get_column_count()) { return -1; } p_column -= tracks[i]->get_column_count(); if (p_column < tracks[i]->get_command_column_count()) { return p_column; } else { return -1; } } p_column -= tracks[i]->get_event_column_count(); } return -1; } int Song::get_event_column_automation(int p_column) const { for (int i = 0; i < tracks.size(); i++) { if (p_column < tracks[i]->get_event_column_count()) { if (p_column < tracks[i]->get_column_count() + tracks[i]->get_command_column_count()) { return -1; } else { p_column -= tracks[i]->get_column_count() + tracks[i]->get_command_column_count(); return p_column; } } p_column -= tracks[i]->get_event_column_count(); } return -1; } void Song::set_event(int p_pattern, int p_column, Tick p_pos, Track::Event p_event) { for (int i = 0; i < tracks.size(); i++) { if (p_column < tracks[i]->get_event_column_count()) { tracks[i]->set_event(p_pattern, p_column, p_pos, p_event); return; } p_column -= tracks[i]->get_event_column_count(); } ERR_FAIL_COND(true); } Track::Event::Type Song::get_event_column_type(int p_column) const { for (int i = 0; i < tracks.size(); i++) { if (p_column < tracks[i]->get_event_column_count()) { return tracks[i]->get_event_column_type(p_column); } p_column -= tracks[i]->get_event_column_count(); } ERR_FAIL_COND_V(true, Track::Event::TYPE_NOTE); } Track::Event Song::get_event(int p_pattern, int p_column, Tick p_pos) const { for (int i = 0; i < tracks.size(); i++) { if (p_column < tracks[i]->get_event_column_count()) { return tracks[i]->get_event(p_pattern, p_column, p_pos); } p_column -= tracks[i]->get_event_column_count(); } ERR_FAIL_COND_V(true, Track::Event()); } void Song::get_events_in_range(int p_pattern, const Track::Pos &p_from, const Track::Pos &p_to, List<Track::PosEvent> *r_events) const { 
int col = 0; for (int i = 0; i < tracks.size(); i++) { int tc = tracks[i]->get_event_column_count(); for (int j = 0; j < tc; j++) { if (col >= p_from.column) { Track::Pos from = p_from; from.column = j; Track::Pos to = p_to; to.column = j; List<Track::PosEvent> pevents; tracks[i]->get_events_in_range(p_pattern, from, to, &pevents); for (List<Track::PosEvent>::Element *E = pevents.front(); E; E = E->next()) { Track::PosEvent pe = E->get(); pe.pos.column = col; r_events->push_back(pe); } } col++; if (col > p_to.column) break; } if (col > p_to.column) break; } } void Song::set_bpm(float p_value) { bpm = p_value; } float Song::get_bpm() const { return bpm; } void Song::set_swing(float p_value) { swing = p_value; } float Song::get_swing() const { return swing; } void Song::set_name(String p_name) { name = p_name; } String Song::get_name() const { return name; } void Song::set_author(String p_author) { author = p_author; } String Song::get_author() const { return author; } void Song::set_description(String p_description) { description = p_description; } String Song::get_description() const { return description; } void Song::set_process_buffer_size(int p_frames) { _AUDIO_LOCK_; buffer.resize(p_frames); buffer_pos = p_frames; for (int i = 0; i < tracks.size(); i++) { tracks[i]->set_process_buffer_size(p_frames); } } void Song::set_sampling_rate(int p_hz) { _AUDIO_LOCK_ sampling_rate = p_hz; for (int i = 0; i < tracks.size(); i++) { tracks[i]->set_sampling_rate(p_hz); } } void Song::_pre_capture_automations() { for (int i = 0; i < tracks.size(); i++) { tracks[i]->automations_pre_play_capture(); } } void Song::_restore_automations() { for (int i = 0; i < tracks.size(); i++) { tracks[i]->automations_pre_play_restore(); } } bool Song::can_play() const { int from_order = 0; while (true) { int order = order_get(from_order); if (order == ORDER_EMPTY) { return false; //nothing to play } if (from_order > ORDER_MAX) { return false; } if (order != ORDER_SKIP) { break; } from_order++; } return true; } bool 
Song::play(int p_from_order, Tick p_from_tick, bool p_can_loop) { stop(); _AUDIO_LOCK_ int order; while (true) { order = order_get(p_from_order); if (order == ORDER_EMPTY) { return false; //nothing to play } if (p_from_order > ORDER_MAX) { return false; } if (order != ORDER_SKIP) { break; } p_from_order++; } _pre_capture_automations(); playback.playing = true; playback.loop_pattern = false; playback.pattern = order; playback.order = p_from_order; playback.bpm = bpm; playback.volume = 1.0; playback.prev_volume = 1.0; playback.pos = p_from_tick; playback.can_loop = p_can_loop; return true; } void Song::play_pattern(int p_pattern, Tick p_from_tick) { stop(); _AUDIO_LOCK_ _pre_capture_automations(); playback.playing = true; playback.loop_pattern = true; playback.pattern = p_pattern; playback.order = -1; playback.bpm = bpm; playback.volume = 1.0; playback.prev_volume = 1.0; playback.pos = p_from_tick; playback.can_loop = true; } void Song::play_event_range(int p_pattern, int p_from_column, int p_to_column, Tick p_from_tick, Tick p_to_tick) { _AUDIO_LOCK_ _pre_capture_automations(); playback.range.active = true; playback.range.pattern = p_pattern; playback.range.from_column = p_from_column; playback.range.from_tick = p_from_tick; playback.range.to_column = p_to_column; playback.range.to_tick = p_to_tick; } void Song::play_single_event(int p_track, const AudioEffect::Event &p_event) { _AUDIO_LOCK_ if (playback.single_event_count == SINGLE_EVENT_MAX) { return; } if (p_track < 0 || p_track >= tracks.size()) { return; } playback.single_event[playback.single_event_count].event = p_event; playback.single_event[playback.single_event_count].track = p_track; playback.single_event_count++; } void Song::stop() { _AUDIO_LOCK_ playback.playing = false; playback.can_loop = true; //important, restore before stopping because stopping may call reset, which may still //restore to an own value/ _restore_automations(); for (int i = 0; i < tracks.size(); i++) { tracks[i]->stop(); } } void 
Song::play_next_pattern() { _AUDIO_LOCK_ if (!playback.playing || playback.loop_pattern) { return; } int order = playback.order; int pattern; while (true) { order++; if (order > ORDER_MAX) { return; } pattern = order_get(order); if (pattern == ORDER_EMPTY) { return; } else if (pattern != ORDER_SKIP) { break; } } playback.order = order; playback.pattern = pattern; playback.pos = 0; } void Song::play_prev_pattern() { _AUDIO_LOCK_ if (!playback.playing || playback.loop_pattern) { return; } int order = playback.order; int pattern; while (true) { order--; if (order < 0) { return; } pattern = order_get(order); if (pattern == ORDER_EMPTY) { return; } else if (pattern != ORDER_SKIP) { break; } } playback.order = order; playback.pattern = pattern; playback.pos = 0; } bool Song::is_playing() const { return playback.playing; } int Song::get_playing_order() const { if (playback.playing && !playback.loop_pattern) { return playback.order; } else { return -1; } } int Song::get_playing_pattern() const { if (playback.playing) { return playback.pattern; } else { return -1; } } Tick Song::get_playing_tick() const { if (playback.playing) { return playback.pos; } else { return 0; } } void Song::clear() { for (int i = 0; i < tracks.size(); i++) delete tracks[i]; tracks.clear(); bpm = DEFAULT_BPM; swing = 0; name = ""; author = ""; description = ""; order_list.clear(); pattern_config.clear(); track_process_order.clear(); } void Song::set_main_volume_db(float p_db) { main_volume_db = p_db; } float Song::get_main_volume_db() const { return main_volume_db; } float Song::get_peak_volume_db_l() const { float peak_db = linear2db(peak_volume_l); peak_volume_l = 0; return peak_db; } float Song::get_peak_volume_db_r() const { float peak_db = linear2db(peak_volume_r); peak_volume_r = 0; return peak_db; } Song::~Song() { clear(); } Song::Song() { bpm = DEFAULT_BPM; swing = 0; set_process_buffer_size(DEFAULT_PROCESS_BUFFER_SIZE); buffer_pos = DEFAULT_PROCESS_BUFFER_SIZE; sampling_rate = 44100; 
playback.playing = false; playback.pattern = -1; playback.order = -1; playback.pos = 0; playback.bpm = bpm; playback.volume = 1.0; playback.prev_volume = 1.0; playback.range.active = false; playback.can_loop = true; playback.single_event_count = 0; main_volume_db = -12; peak_volume_l = 0; peak_volume_r = 0; }
/* ==== zytrax-master/engine/song.h ==== */
#ifndef SONG_H #define SONG_H #include "track.h" class Song { public: enum { DEFAULT_BEATS_PER_BAR = 4, DEFAULT_PATTERN_BEATS = 16, DEFAULT_BPM = 125, DEFAULT_PROCESS_BUFFER_SIZE = 256, ORDER_MAX = 999, ORDER_EMPTY = 0xFFFFF, ORDER_SKIP = 0xFFFFE, MAX_PATTERN = 999, SINGLE_EVENT_MAX = 1024 }; enum SwingBeatDivisor { SWING_BEAT_DIVISOR_1, SWING_BEAT_DIVISOR_2, SWING_BEAT_DIVISOR_3, SWING_BEAT_DIVISOR_4, SWING_BEAT_DIVISOR_6, SWING_BEAT_DIVISOR_8, SWING_BEAT_DIVISOR_MAX }; private: struct PatternConfig { int beats_per_bar; int beats; SwingBeatDivisor swing_beat_divisor; PatternConfig() { beats_per_bar = DEFAULT_BEATS_PER_BAR; beats = DEFAULT_PATTERN_BEATS; swing_beat_divisor = SWING_BEAT_DIVISOR_1; } }; Map<int, PatternConfig> pattern_config; Map<int, int> order_list; Vector<Track *> tracks; void _check_delete_pattern_config(int p_pattern); float bpm; float swing; float main_volume_db; mutable float peak_volume_l; mutable float peak_volume_r; String name; String author; String description; Vector<AudioFrame> buffer; int buffer_pos; void _process_audio_step(); Vector<int> track_process_order; struct SortSend { int from; int to; }; int sampling_rate; struct Playback { bool playing; bool loop_pattern; int pattern; int order; //number if playing song, else -1 double pos; //tick (needs sub-tick accuracy) int bpm; float volume; float prev_volume; bool can_loop; struct Range { bool active; int pattern; int from_column; int to_column; Tick from_tick; Tick to_tick; } range; struct SingleEvent { int track; AudioEffect::Event event; }; SingleEvent single_event[SINGLE_EVENT_MAX]; int 
single_event_count; } playback; void _pre_capture_automations(); void _restore_automations(); public: bool update_process_order(); void set_process_buffer_size(int p_frames); void set_sampling_rate(int p_hz); void process_audio(AudioFrame *p_output, int p_frames); void pattern_set_beats_per_bar(int p_pattern, int p_beats_per_bar); int pattern_get_beats_per_bar(int p_pattern) const; void pattern_set_beats(int p_pattern, int p_beats); int pattern_get_beats(int p_pattern) const; void pattern_set_swing_beat_divisor(int p_pattern, SwingBeatDivisor p_divisor); SwingBeatDivisor pattern_get_swing_beat_divisor(int p_pattern) const; void order_set(int p_order, int p_pattern); int order_get(int p_order) const; int get_track_count() const; void add_track(Track *p_track); void remove_track(int p_idx); Track *get_track(int p_idx); void add_track_at_pos(Track *p_track, int p_pos); void swap_tracks(int p_which, int p_by_which); int get_event_column_count() const; void set_event(int p_pattern, int p_column, Tick p_pos, Track::Event p_event); Track::Event::Type get_event_column_type(int p_column) const; int get_event_column_track(int p_column) const; int get_event_column_event(int p_column) const; int get_event_column_note_column(int p_column) const; int get_event_column_command_column(int p_column) const; int get_event_column_automation(int p_column) const; Track::Event get_event(int p_pattern, int p_column, Tick p_pos) const; void get_events_in_range(int p_pattern, const Track::Pos &p_from, const Track::Pos &p_to, List<Track::PosEvent> *r_events) const; void set_bpm(float p_value); float get_bpm() const; void set_swing(float p_value); float get_swing() const; void set_name(String p_name); String get_name() const; void set_author(String p_author); String get_author() const; void set_description(String p_description); String get_description() const; void clear(); bool can_play() const; bool play(int p_from_order = 0, Tick p_from_tick = 0, bool p_can_loop = true); void play_pattern(int p_pattern, 
Tick p_from_tick = 0); void play_event_range(int p_pattern, int p_from_column, int p_to_column, Tick p_from_tick, Tick p_to_tick); void play_next_pattern(); void play_prev_pattern(); void play_single_event(int p_track, const AudioEffect::Event &p_event); void stop(); bool is_playing() const; int get_playing_order() const; int get_playing_pattern() const; Tick get_playing_tick() const; void set_main_volume_db(float p_db); float get_main_volume_db() const; float get_peak_volume_db_l() const; float get_peak_volume_db_r() const; ~Song(); Song(); }; #endif // SONG_H zytrax-master/engine/song_file.cpp000066400000000000000000000506471347722000700176440ustar00rootroot00000000000000#include "song_file.h" #include "engine/sound_driver_manager.h" #include "json_file.h" class AudioEffectDummy : public AudioEffect { //used to instantiate in place of plugins that were not found public: String unique_id; String provider_id; Vector ports; JSON::Node json; virtual bool has_secondary_input() const { return false; } virtual void process(const Event *p_events, int p_event_count, const AudioFrame *p_in, AudioFrame *p_out, bool p_prev_active) {} virtual void process_with_secondary(const Event *p_events, int p_event_count, const AudioFrame *p_in, const AudioFrame *p_secondary, AudioFrame *p_out, bool p_prev_active) {} //info virtual String get_name() const { return "DummyPlugin"; } virtual String get_unique_id() const { return unique_id; } virtual String get_provider_id() const { return provider_id; } virtual int get_control_port_count() const { return ports.size(); } virtual ControlPort *get_control_port(int p_port) { return &ports[p_port]; } virtual void set_process_block_size(int p_size) {} virtual void set_sampling_rate(int p_hz) {} virtual void reset() {} /* Load/Save */ virtual JSON::Node to_json() const { return json; } virtual Error from_json(const JSON::Node &node) { json = node; return OK; } AudioEffectDummy() {} }; Error SongFile::save(const String &p_path) { JSON::Node node = 
JSON::object(); node.add("software", VERSION_SOFTWARE_NAME); { JSON::Node version = JSON::object(); version.add("major", VERSION_MAJOR); version.add("minor", VERSION_MINOR); version.add("status", _MKSTR(VERSION_STATUS)); node.add("version", version); } { JSON::Node information = JSON::object(); information.add("name", song->get_name().utf8().get_data()); information.add("author", song->get_author().utf8().get_data()); information.add("description", song->get_description().utf8().get_data()); node.add("information", information); } { JSON::Node tempo = JSON::object(); tempo.add("bpm", song->get_bpm()); tempo.add("swing", int(song->get_swing() * 100)); node.add("tempo", tempo); } JSON::Node tracks = JSON::array(); for (int i = 0; i < song->get_track_count(); i++) { Track *t = song->get_track(i); JSON::Node track = JSON::object(); track.add("name", t->get_name().utf8().get_data()); track.add("volume", t->get_mix_volume_db()); track.add("muted", t->is_muted()); track.add("columns", t->get_column_count()); track.add("command_columns", t->get_command_column_count()); JSON::Node automations = JSON::array(); for (int j = 0; j < t->get_automation_count(); j++) { Automation *a = t->get_automation(j); JSON::Node automation = JSON::object(); int effect_idx = -1; String param_name; for (int k = 0; k < t->get_audio_effect_count(); k++) { AudioEffect *fx = t->get_audio_effect(k); if (a->get_owner() == fx) { for (int l = 0; l < fx->get_control_port_count(); l++) { if (a->get_control_port() == fx->get_control_port(l)) { param_name = fx->get_control_port(l)->get_identifier(); effect_idx = k; break; } } break; } } if (effect_idx == -1) { continue; } automation.add("effect_index", effect_idx); automation.add("parameter", param_name.utf8().get_data()); String edit_mode; switch (a->get_edit_mode()) { case Automation::EDIT_ROWS_DISCRETE: edit_mode = "discrete_numbers"; break; case Automation::EDIT_ENVELOPE_SMALL: edit_mode = "envelope_small"; break; case Automation::EDIT_ENVELOPE_LARGE: 
edit_mode = "envelope_large"; break; } automation.add("edit_mode", edit_mode.utf8().get_data()); automations.add(automation); } track.add("automations", automations); JSON::Node effects = JSON::array(); for (int j = 0; j < t->get_audio_effect_count(); j++) { AudioEffect *fx = t->get_audio_effect(j); JSON::Node effect = JSON::object(); effect.add("provider_id", fx->get_provider_id().utf8().get_data()); effect.add("id", fx->get_unique_id().utf8().get_data()); effect.add("skip", fx->is_skipped()); JSON::Node commands = JSON::object(); bool added_commands = false; for (int k = 0; k < fx->get_control_port_count(); k++) { char command = fx->get_control_port(k)->get_command(); if (command != 0) { const char s[2] = { command, 0 }; commands.add(fx->get_control_port(k)->get_identifier().utf8().get_data(), s); added_commands = true; } } if (added_commands) { effect.add("commands", commands); } effect.add("state", fx->to_json()); effects.add(effect); } track.add("effects", effects); JSON::Node sends = JSON::array(); for (int j = 0; j < t->get_send_count(); j++) { JSON::Node send = JSON::object(); send.add("to_track", t->get_send_track(j)); send.add("amount", t->get_send_amount(j)); sends.add(send); } track.add("sends", sends); tracks.add(track); } node.add("tracks", tracks); { //orders JSON::Node orders = JSON::array(); int max_order = -1; for (int i = 0; i <= Song::ORDER_MAX; i++) { if (song->order_get(i) != Song::ORDER_EMPTY) { max_order = i; } } for (int i = 0; i <= max_order; i++) { int order = song->order_get(i); if (order == Song::ORDER_SKIP) { order = -1; //more readable i guess } orders.add(order); } node.add("orderlist", orders); } { //patterns JSON::Node patterns = JSON::array(); for (int i = 0; i < Song::MAX_PATTERN; i++) { JSON::Node pattern = JSON::object(); bool pattern_valid = false; JSON::Node tracks = JSON::array(); for (int j = 0; j < song->get_track_count(); j++) { Track *t = song->get_track(j); JSON::Node track = JSON::object(); bool track_valid = false; 
int note_count = t->get_note_count(i); if (note_count) { JSON::Node notes = JSON::array(); for (int k = 0; k < note_count; k++) { JSON::Node note = JSON::object(); Track::Pos p = t->get_note_pos_by_index(i, k); note.add("tick", (int)p.tick); note.add("column", p.column); Track::Note n = t->get_note_by_index(i, k); if (n.note <= Track::Note::MAX_NOTE) { note.add("note", n.note); } else if (n.note == Track::Note::OFF) { note.add("note", "off"); } if (n.volume != Track::Note::EMPTY) { note.add("volume", n.volume); } notes.add(note); } track.add("notes", notes); track_valid = true; } int command_count = t->get_command_count(i); if (command_count) { JSON::Node commands = JSON::array(); for (int k = 0; k < command_count; k++) { JSON::Node command = JSON::object(); Track::Pos p = t->get_command_pos_by_index(i, k); command.add("tick", (int)p.tick); command.add("column", p.column); Track::Command n = t->get_command_by_index(i, k); if (n.command != Track::Command::EMPTY) { command.add("command", int(n.command)); } //there is always parameter command.add("parameter", n.parameter); commands.add(command); } track.add("commands", commands); track_valid = true; } bool automation_valid = false; JSON::Node automations = JSON::array(); for (int k = 0; k < t->get_automation_count(); k++) { Automation *a = t->get_automation(k); int point_count = a->get_point_count(i); if (point_count > 0) { JSON::Node automation = JSON::object(); automation.add("index", k); JSON::Node points = JSON::array(); for (int l = 0; l < point_count; l++) { JSON::Node point = JSON::object(); Tick t = a->get_point_tick_by_index(i, l); point.add("tick", (int)t); int value = a->get_point_by_index(i, l); point.add("value", value); points.add(point); } automation.add("points", points); automation_valid = true; automations.add(automation); } } if (automation_valid) { track.add("automations", automations); } if (track_valid || automation_valid) { track.add("index", j); pattern_valid = true; tracks.add(track); } } bool 
config_not_default = false; if (song->pattern_get_beats_per_bar(i) != Song::DEFAULT_BEATS_PER_BAR) { config_not_default = true; } if (song->pattern_get_beats(i) != Song::DEFAULT_PATTERN_BEATS) { config_not_default = true; } if (song->pattern_get_swing_beat_divisor(i) != Song::SWING_BEAT_DIVISOR_1) { config_not_default = true; } if (pattern_valid || config_not_default) { pattern.add("index", i); pattern.add("tracks", tracks); pattern.add("beats", song->pattern_get_beats(i)); pattern.add("beats_per_bar", song->pattern_get_beats_per_bar(i)); pattern.add("swing_divisor_index", song->pattern_get_swing_beat_divisor(i)); patterns.add(pattern); } } node.add("patterns", patterns); } return save_json(p_path, node); } Error SongFile::load(const String &p_path, List *r_missing_plugins) { JSON::Node node; Error err = load_json(p_path, node); if (err != OK) { return err; } if (!node.has("software") || node.get("software").toString() != VERSION_SOFTWARE_NAME) { return ERR_FILE_UNRECOGNIZED; } { JSON::Node version = node.get("version"); if (!version.has("major") || !version.has("minor")) { return ERR_FILE_CORRUPT; } int minor = version.get("minor").toInt(); int major = version.get("major").toInt(); if (VERSION_MAJOR * 1000 + VERSION_MINOR < major * 1000 + minor) { return ERR_FILE_TOO_NEW; } } if (!node.has("information")) { return ERR_FILE_CORRUPT; } song->clear(); { JSON::Node information = node.get("information"); String str; str.parse_utf8(information.get("name").toString().c_str()); song->set_name(str); str.parse_utf8(information.get("author").toString().c_str()); song->set_author(str); str.parse_utf8(information.get("description").toString().c_str()); song->set_description(str); } if (node.has("tempo")) { JSON::Node tempo = node.get("tempo"); song->set_bpm(tempo.get("bpm").toFloat()); //bpm is stored as float; toInt() would truncate fractional tempos song->set_swing(tempo.get("swing").toFloat() / 100.0); } JSON::Node tracks = node.get("tracks"); for (int i = 0; i < tracks.getCount(); i++) { JSON::Node track = tracks.get(i); Track *t = new
Track; String name; name.parse_utf8(track.get("name").toString().c_str()); t->set_name(name); t->set_mix_volume_db(track.get("volume").toFloat()); t->set_muted(track.get("muted").toBool()); int columns = track.get("columns").toInt(); t->set_columns(MAX(1, columns)); int command_columns = track.get("command_columns").toInt(); t->set_command_columns(MAX(0, command_columns)); JSON::Node effects = track.get("effects"); for (int j = 0; j < effects.getCount(); j++) { JSON::Node effect = effects.get(j); String provider_id; provider_id.parse_utf8(effect.get("provider_id").toString().c_str()); String id; id.parse_utf8(effect.get("id").toString().c_str()); int effect_index = -1; for (int k = 0; k < fx_factory->get_audio_effect_count(); k++) { if (fx_factory->get_audio_effect(k)->provider_id == provider_id && fx_factory->get_audio_effect(k)->unique_ID == id) { effect_index = k; } } AudioEffect *fx = NULL; if (effect_index != -1) { fx = fx_factory->instantiate_effect(effect_index); } if (!fx) { //plugin not found, report and add a dummy if (r_missing_plugins) { MissingPlugin missing; missing.provider = provider_id; missing.id = id; r_missing_plugins->push_back(missing); } //create a dummy one to hold data (but does no audio). 
AudioEffectDummy *dummy = new AudioEffectDummy; dummy->provider_id = provider_id; dummy->unique_id = id; //check automations and add relevant ports JSON::Node automations = track.get("automations"); for (int k = 0; k < automations.getCount(); k++) { JSON::Node automation = automations.get(k); int index = automation.get("effect_index").toInt(); if (index == j) { String parameter; parameter.parse_utf8(automation.get("parameter").toString().c_str()); ControlPortDefault cpdefault; cpdefault.identifier = parameter; cpdefault.name = parameter; dummy->ports.push_back(cpdefault); } } if (effect.has("commands")) { JSON::Node commands = effect.get("commands"); for (JSON::Node::iterator I = commands.begin(); I != commands.end(); I++) { String name; name.parse_utf8(I->first.c_str()); bool exists = false; for (int k = 0; k < dummy->ports.size(); k++) { if (dummy->ports[k].name == name) { exists = true; break; } } if (!exists) { ControlPortDefault cpdefault; cpdefault.identifier = name; cpdefault.name = name; dummy->ports.push_back(cpdefault); } } } fx = dummy; } if (effect.has("commands")) { JSON::Node commands = effect.get("commands"); for (int k = 0; k < fx->get_control_port_count(); k++) { std::string cpname = fx->get_control_port(k)->get_identifier().utf8().get_data(); if (commands.has(cpname)) { std::string cs = commands.get(cpname).toString(); if (cs.length() >= 1) { char c = cs[0]; fx->get_control_port(k)->set_command(c); } } } } fx->set_skip(effect.get("skip").toBool()); fx->from_json(effect.get("state")); t->add_audio_effect(fx); } JSON::Node automations = track.get("automations"); for (int j = 0; j < automations.getCount(); j++) { JSON::Node automation = automations.get(j); int effect_index = automation.get("effect_index").toInt(); String param_name; param_name.parse_utf8(automation.get("parameter").toString().c_str()); ERR_CONTINUE(effect_index < 0 || effect_index >= t->get_audio_effect_count()); AudioEffect *fx =
t->get_audio_effect(effect_index); ControlPort *control_port = NULL; for (int k = 0; k < fx->get_control_port_count(); k++) { if (fx->get_control_port(k)->get_identifier() == param_name) { control_port = fx->get_control_port(k); } } ERR_CONTINUE(!control_port); Automation *a = new Automation(control_port, fx); String edit_mode; edit_mode.parse_utf8(automation.get("edit_mode").toString().c_str()); if (edit_mode == "envelope_large") { a->set_edit_mode(Automation::EDIT_ENVELOPE_LARGE); } else if (edit_mode == "envelope_small") { a->set_edit_mode(Automation::EDIT_ENVELOPE_SMALL); } else { a->set_edit_mode(Automation::EDIT_ROWS_DISCRETE); } t->add_automation(a); } JSON::Node sends = track.get("sends"); for (int j = 0; j < sends.getCount(); j++) { JSON::Node send = sends.get(j); t->add_send(send.get("to_track").toInt(), send.get("amount").toFloat()); } song->add_track(t); } { //orders JSON::Node orders = node.get("orderlist"); for (int i = 0; i < orders.getCount(); i++) { int order = orders.get(i).toInt(); if (order < 0) { song->order_set(i, Song::ORDER_SKIP); } else { song->order_set(i, order); } } } if (node.has("patterns")) { //patterns JSON::Node patterns = node.get("patterns"); for (int i = 0; i < patterns.getCount(); i++) { JSON::Node pattern = patterns.get(i); ERR_CONTINUE(!pattern.has("index")); int index = pattern.get("index").toInt(); if (pattern.has("beats")) { song->pattern_set_beats(index, pattern.get("beats").toInt()); } if (pattern.has("beats_per_bar")) { song->pattern_set_beats_per_bar(index, pattern.get("beats_per_bar").toInt()); } if (pattern.has("swing_divisor_index")) { song->pattern_set_swing_beat_divisor(index, Song::SwingBeatDivisor(pattern.get("swing_divisor_index").toInt())); } JSON::Node tracks = pattern.get("tracks"); for (int j = 0; j < tracks.getCount(); j++) { JSON::Node track = tracks.get(j); int track_index = track.get("index").toInt(); ERR_CONTINUE(track_index < 0 || track_index >= song->get_track_count()); Track *t = 
song->get_track(track_index); if (track.has("notes")) { JSON::Node notes = track.get("notes"); for (int k = 0; k < notes.getCount(); k++) { JSON::Node note = notes.get(k); Track::Pos p; p.tick = note.get("tick").toInt(); p.column = note.get("column").toInt(); Track::Note n; if (note.has("volume")) { n.volume = note.get("volume").toInt(); } else { n.volume = Track::Note::EMPTY; } if (note.get("note").getType() == JSON::Node::T_STRING) { //note off n.note = Track::Note::OFF; } else { n.note = note.get("note").toInt(); } t->set_note(index, p, n); } } if (track.has("commands")) { JSON::Node commands = track.get("commands"); for (int k = 0; k < commands.getCount(); k++) { JSON::Node command = commands.get(k); Track::Pos p; p.tick = command.get("tick").toInt(); p.column = command.get("column").toInt(); Track::Command c; if (command.has("command")) { c.command = command.get("command").toInt(); } else { c.command = Track::Command::EMPTY; } c.parameter = command.get("parameter").toInt(); t->set_command(index, p, c); } } if (track.has("automations")) { JSON::Node automations = track.get("automations"); for (int k = 0; k < automations.getCount(); k++) { JSON::Node automation = automations.get(k); int auto_index = automation.get("index").toInt(); ERR_CONTINUE(auto_index < 0 || auto_index >= t->get_automation_count()); Automation *a = t->get_automation(auto_index); if (automation.has("points")) { JSON::Node points = automation.get("points"); for (int l = 0; l < points.getCount(); l++) { JSON::Node point = points.get(l); Tick t = point.get("tick").toInt(); int value = point.get("value").toInt(); a->set_point(index, t, value); } } } } } } } song->update_process_order(); return OK; } ////////////////////////////////////////////////////////////////////////////// Error SongFile::export_wav(const String &p_path, int p_export_hz, ExportWavPatternCallback p_callback, void *p_userdata) { #ifdef WINDOWS_ENABLED FILE *f = _wfopen(p_path.c_str(), L"wb"); #else FILE *f = 
fopen(p_path.utf8().get_data(), "wb"); #endif if (!f) { return ERR_CANT_OPEN; } fwrite("RIFF", 4, 1, f); uint32_t total_size = 4 /* WAVE */ + 8 /* fmt+size */ + 16 /* format */ + 8 /* data+size */; fwrite(&total_size, 4, 1, f); fwrite("WAVE", 4, 1, f); fwrite("fmt ", 4, 1, f); uint32_t format = 16; fwrite(&format, 4, 1, f); uint16_t compression = 1; //standard pcm fwrite(&compression, 2, 1, f); uint16_t channels = 2; //stereo fwrite(&channels, 2, 1, f); uint32_t sampling_rate = p_export_hz; fwrite(&sampling_rate, 4, 1, f); uint16_t bits_per_sample = 32; uint16_t blockalign = bits_per_sample / 8 * (2); uint32_t bytes_per_sec = sampling_rate * blockalign; fwrite(&bytes_per_sec, 4, 1, f); fwrite(&blockalign, 2, 1, f); fwrite(&bits_per_sample, 2, 1, f); fwrite("data", 4, 1, f); uint32_t data_size = 0; fwrite(&data_size, 4, 1, f); uint32_t data_from = ftell(f); //begin save data SoundDriverManager::lock_driver(); uint32_t block_size = 64; AudioFrame *block = new AudioFrame[block_size]; song->set_sampling_rate(sampling_rate); song->set_process_buffer_size(block_size); song->play(0, 0, false); int current_order = -1; while (song->is_playing()) { song->process_audio(block, block_size); for (int i = 0; i < block_size; i++) { int32_t l = int32_t(CLAMP(double(block[i].l), -1.0, 1.0) * double(2147483647.0)); int32_t r = int32_t(CLAMP(double(block[i].r), -1.0, 1.0) * double(2147483647.0)); fwrite(&l, 4, 1, f); fwrite(&r, 4, 1, f); } int order = song->get_playing_order(); if (current_order != order) { current_order = order; printf("order: %i\n", current_order); if (p_callback) { p_callback(current_order, p_userdata); } } } song->stop(); delete[] block; //restore song->set_sampling_rate(SoundDriverManager::get_mix_frequency_hz(SoundDriverManager::get_mix_frequency())); song->set_process_buffer_size(SoundDriverManager::get_buffer_size_frames(SoundDriverManager::get_step_buffer_size())); SoundDriverManager::unlock_driver(); //end save data data_size = ftell(f) - data_from; fseek(f, 
4, SEEK_SET); total_size += data_size; fwrite(&total_size, 4, 1, f); fseek(f, 0x28, SEEK_SET); fwrite(&data_size, 4, 1, f); fclose(f); return OK; } SongFile::SongFile(Song *p_song, AudioEffectFactory *p_fx_factory) { song = p_song; fx_factory = p_fx_factory; } zytrax-master/engine/song_file.h000066400000000000000000000011651347722000700173000ustar00rootroot00000000000000#ifndef SONG_FILE_H #define SONG_FILE_H #include "engine/audio_effect.h" #include "engine/song.h" class SongFile { Song *song; AudioEffectFactory *fx_factory; public: struct MissingPlugin { String provider; String id; }; Error save(const String &p_path); Error load(const String &p_path, List *r_missing_plugins = NULL); typedef void (*ExportWavPatternCallback)(int, void *); Error export_wav(const String &p_path, int p_export_hz = 96000, ExportWavPatternCallback p_callback = NULL, void *p_userdata = NULL); SongFile(Song *p_song, AudioEffectFactory *p_fx_factory); }; #endif // SONG_FILE_H zytrax-master/engine/sound_driver.cpp000066400000000000000000000007421347722000700203710ustar00rootroot00000000000000// // C++ Implementation: sound_driver // // Description: // // // Author: Juan Linietsky , (C) 2006 // // Copyright: See COPYING file that comes with this distribution // // #include "sound_driver.h" #include "sound_driver_manager.h" void SoundDriver::mix(AudioFrame *p_buffer, int p_frames) { if (SoundDriverManager::mix_callback) { SoundDriverManager::mix_callback(p_buffer, p_frames); } } SoundDriver::SoundDriver() { } SoundDriver::~SoundDriver() { } zytrax-master/engine/sound_driver.h000066400000000000000000000017421347722000700200370ustar00rootroot00000000000000// // C++ Interface: sound_driver // // Description: // // // Author: Juan Linietsky , (C) 2006 // // Copyright: See COPYING file that comes with this distribution // // #ifndef SOUND_DRIVER_H #define SOUND_DRIVER_H #include "dsp/frame.h" #include "globals/config.h" #include "rstring.h" /** @author Juan Linietsky */ class SoundDriver { 
protected: void mix(AudioFrame *p_buffer, int p_frames); public: virtual void lock() = 0; ///< Lock called from UI,game,etc (non-audio) thread, to access audio variables virtual void unlock() = 0; ///< UnLock called from UI,game,etc (non-audio) thread, to access audio variables virtual String get_name() const = 0; virtual String get_id() const = 0; virtual float get_max_level_l() = 0; virtual float get_max_level_r() = 0; virtual bool is_active() = 0; virtual bool init() = 0; virtual void finish() = 0; virtual int get_mix_rate() const = 0; SoundDriver(); virtual ~SoundDriver(); }; #endif zytrax-master/engine/sound_driver_manager.cpp000066400000000000000000000070611347722000700220640ustar00rootroot00000000000000// // C++ Implementation: sound_driver_manager // // Description: // // // Author: Juan Linietsky , (C) 2006 // // Copyright: See COPYING file that comes with this distribution // // #include "sound_driver_manager.h" #include "error_macros.h" int SoundDriverManager::buffer_size_frames[SoundDriverManager::BUFFER_SIZE_MAX] = { 64, 128, 256, 512, 1024, 2048, 4096 }; int SoundDriverManager::mix_frequenzy_hz[SoundDriverManager::MIX_FREQ_MAX] = { 22050, 44100, 48000, 96000, 192000 }; SoundDriver *SoundDriverManager::sound_drivers[MAX_SOUND_DRIVERS]; int SoundDriverManager::sound_driver_count = 0; int SoundDriverManager::current_driver = 0; SoundDriverManager::MixFrequency SoundDriverManager::mixing_hz = SoundDriverManager::MIX_FREQ_48000; SoundDriverManager::BufferSize SoundDriverManager::buffer_size = SoundDriverManager::BUFFER_SIZE_1024; SoundDriverManager::BufferSize SoundDriverManager::step_size = SoundDriverManager::BUFFER_SIZE_256; SoundDriverManager::MixCallback SoundDriverManager::mix_callback = NULL; int SoundDriverManager::get_current_driver_index() { return current_driver; } void SoundDriverManager::lock_driver() { if (sound_driver_count == 0) return; ERR_FAIL_INDEX(current_driver, sound_driver_count); sound_drivers[current_driver]->lock(); } void 
SoundDriverManager::unlock_driver() { if (sound_driver_count == 0) return; ERR_FAIL_INDEX(current_driver, sound_driver_count); sound_drivers[current_driver]->unlock(); } bool SoundDriverManager::is_driver_active() { ERR_FAIL_INDEX_V(current_driver, sound_driver_count, false); return sound_drivers[current_driver]->is_active(); } bool SoundDriverManager::init_driver(int p_driver) { if (p_driver == -1) p_driver = current_driver; ERR_FAIL_INDEX_V(p_driver, sound_driver_count, true); //init current driver, but current is invalid if (current_driver >= 0 && current_driver < sound_driver_count && sound_drivers[current_driver]->is_active()) sound_drivers[current_driver]->finish(); current_driver = p_driver; return sound_drivers[current_driver]->init(); } void SoundDriverManager::finish_driver() { ERR_FAIL_INDEX(current_driver, sound_driver_count); if (!sound_drivers[current_driver]->is_active()) return; sound_drivers[current_driver]->finish(); } int SoundDriverManager::get_driver_count() { return sound_driver_count; } SoundDriver *SoundDriverManager::get_driver(int p_which) { if (p_which == -1) p_which = current_driver; ERR_FAIL_INDEX_V(p_which, sound_driver_count, 0); return sound_drivers[p_which]; } void SoundDriverManager::set_mix_frequency(MixFrequency p_frequency) { mixing_hz = p_frequency; } SoundDriverManager::MixFrequency SoundDriverManager::get_mix_frequency() { return mixing_hz; } void SoundDriverManager::set_buffer_size(BufferSize p_size) { buffer_size = p_size; } SoundDriverManager::BufferSize SoundDriverManager::get_buffer_size() { return buffer_size; } void SoundDriverManager::set_step_buffer_size(BufferSize p_size) { step_size = p_size; } SoundDriverManager::BufferSize SoundDriverManager::get_step_buffer_size() { return step_size; } int SoundDriverManager::get_mix_frequency_hz(MixFrequency p_frequency) { return mix_frequenzy_hz[p_frequency]; } int SoundDriverManager::get_buffer_size_frames(BufferSize p_size) { return buffer_size_frames[p_size]; } void 
SoundDriverManager::set_mix_callback(MixCallback p_callback) { mix_callback = p_callback; } void SoundDriverManager::register_driver(SoundDriver *p_driver) { ERR_FAIL_COND(sound_driver_count >= MAX_SOUND_DRIVERS); sound_drivers[sound_driver_count++] = p_driver; } zytrax-master/engine/sound_driver_manager.h000066400000000000000000000042271347722000700215320ustar00rootroot00000000000000// // C++ Interface: sound_driver_manager // // Description: // // // Author: Juan Linietsky , (C) 2006 // // Copyright: See COPYING file that comes with this distribution // // #ifndef SOUND_DRIVER_MANAGER_H #define SOUND_DRIVER_MANAGER_H #include "dsp/frame.h" #include "engine/sound_driver.h" /** @author Juan Linietsky */ class SoundDriverManager { enum { MAX_SOUND_DRIVERS = 64 }; public: enum BufferSize { BUFFER_SIZE_64, BUFFER_SIZE_128, BUFFER_SIZE_256, BUFFER_SIZE_512, BUFFER_SIZE_1024, BUFFER_SIZE_2048, BUFFER_SIZE_4096, BUFFER_SIZE_MAX }; enum MixFrequency { MIX_FREQ_22050, MIX_FREQ_44100, MIX_FREQ_48000, MIX_FREQ_96000, MIX_FREQ_192000, MIX_FREQ_MAX }; typedef void (*MixCallback)(AudioFrame *p_buffer, int p_frames); private: static int buffer_size_frames[BUFFER_SIZE_MAX]; static int mix_frequenzy_hz[MIX_FREQ_MAX]; static SoundDriver *sound_drivers[MAX_SOUND_DRIVERS]; static int sound_driver_count; static int current_driver; static MixFrequency mixing_hz; static BufferSize buffer_size; static BufferSize step_size; friend class SoundDriver; static MixCallback mix_callback; public: static void lock_driver(); ///< Protect audio thread variables from ui,game,etc (non-audio) threads static void unlock_driver(); ///< Protect audio thread variables from ui,game,etc (non-audio) threads static bool init_driver(int p_driver = -1); ///< -1 is current static void finish_driver(); static bool is_driver_active(); static int get_driver_count(); static SoundDriver *get_driver(int p_which = -1); ///< -1 is current static int get_current_driver_index(); static void set_mix_frequency(MixFrequency 
p_frequency); static MixFrequency get_mix_frequency(); static void set_buffer_size(BufferSize p_size); static BufferSize get_buffer_size(); static void set_step_buffer_size(BufferSize p_size); static BufferSize get_step_buffer_size(); static void register_driver(SoundDriver *p_driver); static int get_mix_frequency_hz(MixFrequency p_frequency); static int get_buffer_size_frames(BufferSize p_size); static void set_mix_callback(MixCallback p_callback); }; #endif zytrax-master/engine/track.cpp000066400000000000000000000716771347722000700170110ustar00rootroot00000000000000#include "track.h" #include "dsp/db.h" #include "song.h" #include void Automation::_ui_changed_callbacks(void *p_ud) { Automation *automation = (Automation *)p_ud; automation->_ui_changed_callback(); } void Automation::_ui_changed_callback() { if (has_pre_play_capture) { pre_play_capture_value = port->get(); //update pre play capture because UI modified it } } void Automation::set_point(int p_pattern, Tick p_offset, uint8_t p_value) { _AUDIO_LOCK_ if (p_value == EMPTY) { remove_point(p_pattern, p_offset); return; } if (!data.has(p_pattern)) { data[p_pattern] = ValueStream(); } data[p_pattern].insert(p_offset, p_value); } bool Automation::has_point(int p_pattern, Tick p_offset) const { if (!data.has(p_pattern)) return false; return data[p_pattern].find_exact(p_offset) >= 0; } uint8_t Automation::get_point(int p_pattern, Tick p_offset) const { if (!data.has(p_pattern)) return EMPTY; int idx = data[p_pattern].find_exact(p_offset); if (idx < 0) return EMPTY; return data[p_pattern][idx]; } void Automation::remove_point(int p_pattern, Tick p_offset) { if (!data.has(p_pattern)) return; _AUDIO_LOCK_ int idx = data[p_pattern].find_exact(p_offset); if (idx < 0) return; data[p_pattern].erase(idx); if (data[p_pattern].size() == 0) data.erase(p_pattern); } Tick Automation::get_point_tick_by_index(int p_pattern, int p_index) const { // this is used super often when playing, so it should be more optimized const Map 
>::Element *E = data.find(p_pattern); ERR_FAIL_COND_V(!E, 0); ERR_FAIL_INDEX_V(p_index, E->get().size(), 0); return E->get().get_pos(p_index); } uint8_t Automation::get_point_by_index(int p_pattern, int p_index) const { // this is used super often when playing, so it should be more optimized const Map >::Element *E = data.find(p_pattern); ERR_FAIL_COND_V(!E, 0); ERR_FAIL_INDEX_V(p_index, E->get().size(), 0); return E->get()[p_index]; } int Automation::get_point_count(int p_pattern) const { const Map >::Element *E = data.find(p_pattern); if (!E) return 0; return E->get().size(); } float Automation::interpolate_offset(int p_pattern, Tick p_offset) const { const Map >::Element *E = data.find(p_pattern); if (!E) { return -1; } const ValueStream &vs = E->get(); int pos = vs.find(p_offset); int total = vs.size(); if (pos < 0 || pos >= total) return -1; int n = pos + 1; if (n >= total) return -1; float c = float(p_offset - vs.get_pos(pos)) / float(vs.get_pos(n) - vs.get_pos(pos)); float a = (vs[pos] / float(VALUE_MAX)); float b = (vs[n] / float(VALUE_MAX)); return b * c + a * (1.0 - c); } void Automation::get_points_in_range(int p_pattern, Tick p_from, Tick p_to, int &r_first, int &r_count) const { const Map >::Element *E = data.find(p_pattern); if (!E) { r_count = 0; return; } const ValueStream &vs = E->get(); if (vs.size() == 0) { r_count = 0; return; } int pos_beg = vs.find(p_from); int pos_end = vs.find(p_to); if (pos_end >= 0 && p_to == vs.get_pos(pos_end)) { pos_end--; } if (pos_end < 0) { r_count = 0; return; } if (pos_beg < 0 || vs.get_pos(pos_beg) < p_from) pos_beg++; if (pos_beg > pos_end) { r_count = 0; return; } r_first = pos_beg; r_count = pos_end - pos_beg + 1; } ControlPort *Automation::get_control_port() { return port; } AudioEffect *Automation::get_owner() { return owner; } void Automation::set_edit_mode(EditMode p_mode) { display_mode = p_mode; } Automation::EditMode Automation::get_edit_mode() const { return display_mode; } bool Automation::is_empty() 
const { return (display_mode == EDIT_ROWS_DISCRETE && data.empty()); } void Automation::pre_play_capture() { if (has_pre_play_capture) { return; //do not re-capture } pre_play_capture_value = port->get(); has_pre_play_capture = true; } void Automation::pre_play_restore() { if (has_pre_play_capture) { port->set(pre_play_capture_value); has_pre_play_capture = false; }; } void Automation::add_notify() { has_pre_play_capture = false; //should be false but just in case port->set_ui_changed_callback(&_ui_changed_callbacks, this); } void Automation::remove_notify() { pre_play_restore(); port->set_ui_changed_callback(NULL, NULL); } Automation::Automation(ControlPort *p_port, AudioEffect *p_owner) { port = p_port; owner = p_owner; display_mode = EDIT_ROWS_DISCRETE; has_pre_play_capture = false; pre_play_capture_value = 0; } /* audio effects */ void Track::set_name(String p_name) { name = p_name; } String Track::get_name() const { return name; } int Track::get_audio_effect_count() const { return effects.size(); } void Track::add_audio_effect(AudioEffect *p_effect, int p_pos) { _AUDIO_LOCK_ if (p_pos < 0) p_pos = effects.size(); ERR_FAIL_COND(p_pos > effects.size()); effects.insert(p_pos, p_effect); p_effect->set_process_block_size(process_buffer.size()); p_effect->set_sampling_rate(sampling_rate); } void Track::remove_audio_effect(int p_pos) { _AUDIO_LOCK_ ERR_FAIL_INDEX(p_pos, effects.size()); AudioEffect *fx = effects[p_pos]; for (int i = 0; i < automations.size(); i++) { if (automations[i]->get_owner() == fx) { automations.remove(i); i--; } } effects.remove(p_pos); } void Track::swap_audio_effects(int p_effect, int p_with_effect) { _AUDIO_LOCK_ ERR_FAIL_INDEX(p_effect, effects.size()); ERR_FAIL_INDEX(p_with_effect, effects.size()); SWAP(effects[p_effect], effects[p_with_effect]); } AudioEffect *Track::get_audio_effect(int p_pos) { ERR_FAIL_INDEX_V(p_pos, effects.size(), NULL); return effects[p_pos]; } /* automations */ int Track::get_automation_count() const { return 
automations.size(); } void Track::add_automation(Automation *p_automation, int p_pos) { _AUDIO_LOCK_ if (p_pos < 0) p_pos = automations.size(); ERR_FAIL_COND(p_pos > automations.size()); p_automation->add_notify(); automations.insert(p_pos, p_automation); } void Track::remove_automation(int p_pos) { _AUDIO_LOCK_ ERR_FAIL_INDEX(p_pos, automations.size()); automations[p_pos]->remove_notify(); automations.remove(p_pos); } Automation *Track::get_automation(int p_pos) const { ERR_FAIL_INDEX_V(p_pos, automations.size(), NULL); return automations[p_pos]; } void Track::swap_automations(int p_which, int p_by_which) { _AUDIO_LOCK_ SWAP(automations[p_which], automations[p_by_which]); } // disabled automations int Track::get_disabled_automation_count() const { return disabled_automations.size(); } void Track::add_disabled_automation(Automation *p_automation, int p_pos) { _AUDIO_LOCK_ if (p_pos < 0) { p_pos = disabled_automations.size(); } ERR_FAIL_COND(p_pos > disabled_automations.size()); disabled_automations.insert(p_pos, p_automation); } void Track::remove_disabled_automation(int p_pos) { _AUDIO_LOCK_ ERR_FAIL_INDEX(p_pos, disabled_automations.size()); disabled_automations.remove(p_pos); } Automation *Track::get_disabled_automation(int p_pos) const { ERR_FAIL_INDEX_V(p_pos, disabled_automations.size(), NULL); return disabled_automations[p_pos]; } //// void Track::set_columns(int p_columns) { _AUDIO_LOCK_ ERR_FAIL_COND(p_columns < 1); note_columns = p_columns; column_state.resize(note_columns); for (int i = 0; i < column_state.size(); i++) { column_state[i] = Note::EMPTY; } } int Track::get_column_count() const { return note_columns; } void Track::set_note(int p_pattern, Pos p_pos, Note p_note) { _AUDIO_LOCK_ if (!note_data.has(p_pattern)) note_data[p_pattern] = ValueStream(); if (p_note.is_empty()) { int idx = note_data[p_pattern].find_exact(p_pos); if (idx < 0) return; note_data[p_pattern].erase(idx); } else { note_data[p_pattern].insert(p_pos, p_note); } } Track::Note 
Track::get_note(int p_pattern, Pos p_pos) const { const Map >::Element *E = note_data.find(p_pattern); if (!E) return Note(); int idx = E->get().find_exact(p_pos); if (idx < 0) return Note(); else return E->get()[idx]; } void Track::get_notes_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, int &r_first, int &r_count) const { const Map >::Element *E = note_data.find(p_pattern); if (!E) { r_count = 0; return; } const ValueStream &vs = E->get(); if (vs.size() == 0) { r_count = 0; return; } int pos_beg = vs.find(p_from); int pos_end = vs.find(p_to); if (pos_end >= 0 && p_to == vs.get_pos(pos_end)) { pos_end--; } if (pos_end < 0) { r_count = 0; return; } if (pos_beg < 0 || vs.get_pos(pos_beg) < p_from) pos_beg++; if (pos_beg > pos_end) { r_count = 0; return; } r_first = pos_beg; r_count = pos_end - pos_beg + 1; } int Track::get_note_count(int p_pattern) const { const Map >::Element *E = note_data.find(p_pattern); if (!E) return 0; return E->get().size(); } Track::Note Track::get_note_by_index(int p_pattern, int p_index) const { const Map >::Element *E = note_data.find(p_pattern); if (!E) return Note(); const ValueStream &vs = E->get(); ERR_FAIL_INDEX_V(p_index, vs.size(), Note()); return vs[p_index]; } Track::Pos Track::get_note_pos_by_index(int p_pattern, int p_index) const { const Map >::Element *E = note_data.find(p_pattern); if (!E) return Pos(); const ValueStream &vs = E->get(); ERR_FAIL_INDEX_V(p_index, vs.size(), Pos()); return vs.get_pos(p_index); } void Track::get_notes_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, List *r_notes) const { Pos from = p_from; Pos to = p_to; if (from.column > to.column) { SWAP(from.column, to.column); } if (from.tick > to.tick) { SWAP(from.tick, to.tick); } int fromidx; int count; get_notes_in_range(p_pattern, from, to, fromidx, count); for (int i = 0; i < count; i++) { PosNote pn; pn.pos = get_note_pos_by_index(p_pattern, i + fromidx); if (pn.pos.column < from.column || pn.pos.column > to.column) 
continue; pn.note = get_note_by_index(p_pattern, i + fromidx); r_notes->push_back(pn); } } //// void Track::set_command_columns(int p_columns) { _AUDIO_LOCK_ ERR_FAIL_COND(p_columns < 0); command_columns = p_columns; for (int i = 0; i < column_state.size(); i++) { column_state[i] = Command::EMPTY; } } int Track::get_command_column_count() const { return command_columns; } void Track::set_command(int p_pattern, Pos p_pos, Command p_command) { _AUDIO_LOCK_ if (!command_data.has(p_pattern)) command_data[p_pattern] = ValueStream(); if (p_command.is_empty()) { int idx = command_data[p_pattern].find_exact(p_pos); if (idx < 0) return; command_data[p_pattern].erase(idx); } else { command_data[p_pattern].insert(p_pos, p_command); } } Track::Command Track::get_command(int p_pattern, Pos p_pos) const { const Map >::Element *E = command_data.find(p_pattern); if (!E) return Command(); int idx = E->get().find_exact(p_pos); if (idx < 0) return Command(); else return E->get()[idx]; } void Track::get_commands_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, int &r_first, int &r_count) const { const Map >::Element *E = command_data.find(p_pattern); if (!E) { r_count = 0; return; } const ValueStream &vs = E->get(); if (vs.size() == 0) { r_count = 0; return; } int pos_beg = vs.find(p_from); int pos_end = vs.find(p_to); if (pos_end >= 0 && p_to == vs.get_pos(pos_end)) { pos_end--; } if (pos_end < 0) { r_count = 0; return; } if (pos_beg < 0 || vs.get_pos(pos_beg) < p_from) pos_beg++; if (pos_beg > pos_end) { r_count = 0; return; } r_first = pos_beg; r_count = pos_end - pos_beg + 1; } int Track::get_command_count(int p_pattern) const { const Map >::Element *E = command_data.find(p_pattern); if (!E) return 0; return E->get().size(); } Track::Command Track::get_command_by_index(int p_pattern, int p_index) const { const Map >::Element *E = command_data.find(p_pattern); if (!E) return Command(); const ValueStream &vs = E->get(); ERR_FAIL_INDEX_V(p_index, vs.size(), Command()); 
return vs[p_index]; } Track::Pos Track::get_command_pos_by_index(int p_pattern, int p_index) const { const Map >::Element *E = command_data.find(p_pattern); if (!E) return Pos(); const ValueStream &vs = E->get(); ERR_FAIL_INDEX_V(p_index, vs.size(), Pos()); return vs.get_pos(p_index); } void Track::get_commands_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, List *r_commands) const { Pos from = p_from; Pos to = p_to; if (from.column > to.column) { SWAP(from.column, to.column); } if (from.tick > to.tick) { SWAP(from.tick, to.tick); } int fromidx; int count; get_commands_in_range(p_pattern, from, to, fromidx, count); for (int i = 0; i < count; i++) { PosCommand pn; pn.pos = get_command_pos_by_index(p_pattern, i + fromidx); if (pn.pos.column < from.column || pn.pos.column > to.column) continue; pn.command = get_command_by_index(p_pattern, i + fromidx); r_commands->push_back(pn); } } /// int Track::get_event_column_count() const { return note_columns + command_columns + automations.size(); } void Track::set_event(int p_pattern, int p_column, Tick p_pos, const Event &p_event) { _AUDIO_LOCK_ ERR_FAIL_INDEX(p_column, get_event_column_count()); if (p_column < note_columns) { //note ERR_FAIL_COND(p_event.type != Event::TYPE_NOTE); Pos p; p.column = p_column; p.tick = p_pos; set_note(p_pattern, p, p_event); return; } p_column -= note_columns; if (p_column < command_columns) { //command ERR_FAIL_COND(p_event.type != Event::TYPE_COMMAND); Pos p; p.column = p_column; p.tick = p_pos; set_command(p_pattern, p, p_event); return; } p_column -= command_columns; { ERR_FAIL_COND(p_event.type != Event::TYPE_AUTOMATION); for (int i = 0; i < automations.size(); i++) { if (p_column == 0) { get_automation(i)->set_point(p_pattern, p_pos, p_event.a); return; } p_column--; } } } Track::Event::Type Track::get_event_column_type(int p_column) const { if (p_column < note_columns) return Event::TYPE_NOTE; else if (p_column < note_columns + command_columns) return Event::TYPE_COMMAND; 
else return Event::TYPE_AUTOMATION; } Track::Event Track::get_event(int p_pattern, int p_column, Tick p_pos) const { ERR_FAIL_INDEX_V(p_column, get_event_column_count(), Event()); if (p_column < note_columns) { //note Pos p; p.column = p_column; p.tick = p_pos; return get_note(p_pattern, p); } p_column -= note_columns; if (p_column < command_columns) { //command Pos p; p.column = p_column; p.tick = p_pos; return get_command(p_pattern, p); } p_column -= command_columns; { for (int i = 0; i < automations.size(); i++) { if (p_column == 0) { return get_automation(i)->get_point(p_pattern, p_pos); } p_column--; } } ERR_FAIL_COND_V(true, Event()); } void Track::get_events_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, List *r_events) const { Map events; Pos events_from = p_from; Pos events_to = p_to; if (events_from.column < note_columns) { //has notes List pn; Pos end = events_to; if (end.column >= note_columns) end.column = note_columns - 1; get_notes_in_range(p_pattern, events_from, end, &pn); for (const List::Element *E = pn.front(); E; E = E->next()) { events.insert(E->get().pos, E->get().note); } } events_from.column -= note_columns; events_to.column -= note_columns; if (!(events_to.column < 0 || events_from.column >= command_columns)) { //has commands List pn; Pos from = events_from; from.column = MAX(0, from.column); Pos end = events_to; end.column = MIN(command_columns - 1, end.column); get_commands_in_range(p_pattern, from, end, &pn); for (const List::Element *E = pn.front(); E; E = E->next()) { Pos evpos; evpos = E->get().pos; evpos.column += note_columns; events.insert(evpos, E->get().command); } } events_from.column -= command_columns; events_to.column -= command_columns; if (!(events_to.column < 0 || events_from.column >= automations.size())) { //has commands int begin = MAX(0, events_from.column); int end = MIN(automations.size() - 1, events_to.column); for (int i = begin; i <= end; i++) { int f, c; 
automations[i]->get_points_in_range(p_pattern, events_from.tick, events_to.tick, f, c); for (int j = 0; j < c; j++) { uint8_t v = automations[i]->get_point_by_index(p_pattern, j + f); Tick t = automations[i]->get_point_tick_by_index(p_pattern, j + f); Pos p; p.column = i + note_columns + command_columns; p.tick = t; events.insert(p, v); } } } //add everything beautifully ordered for (Map::Element *E = events.front(); E; E = E->next()) { PosEvent pe; pe.pos = E->key(); pe.event = E->get(); r_events->push_back(pe); } } void Track::set_muted(bool p_mute) { _AUDIO_LOCK_ if (muted == p_mute) { return; } muted = p_mute; first_mix = true; //clear memories when unmuted } bool Track::is_muted() const { return muted; } void Track::set_mix_volume_db(float p_db) { mix_volume = p_db; } float Track::get_mix_volume_db() const { return mix_volume; } float Track::get_peak_volume_db_l() const { float pvolume = linear2db(peak_volume_l); peak_volume_l = 0; return pvolume; } float Track::get_peak_volume_db_r() const { float pvolume = linear2db(peak_volume_r); peak_volume_r = 0; return pvolume; } void Track::add_send(int p_track, int p_pos) { _AUDIO_LOCK_ for (int i = 0; i < sends.size(); i++) { ERR_FAIL_COND(sends[i].track == p_track); } Send send; send.amount = 1.0; send.track = p_track; send.mute = false; if (p_pos < 0 || p_pos >= sends.size()) { sends.push_back(send); } else { sends.insert(p_pos, send); } } void Track::set_send_amount(int p_send, float p_amount) { ERR_FAIL_INDEX(p_send, sends.size()); sends[p_send].amount = p_amount; } void Track::set_send_track(int p_send, int p_track) { _AUDIO_LOCK_ ERR_FAIL_INDEX(p_send, sends.size()); //cant validate, unfortunately #if 0 for (int i = 0; i < sends.size(); i++) { if (sends[i].track == p_send) { continue; } ERR_FAIL_COND(sends[i].track == p_track); } #endif sends[p_send].track = p_track; } void Track::set_send_mute(int p_send, bool p_mute) { ERR_FAIL_INDEX(p_send, sends.size()); sends[p_send].mute = p_mute; } int 
Track::get_send_track(int p_send) const { ERR_FAIL_INDEX_V(p_send, sends.size(), SEND_SPEAKERS); return sends[p_send].track; } float Track::get_send_amount(int p_send) const { ERR_FAIL_INDEX_V(p_send, sends.size(), SEND_SPEAKERS); return sends[p_send].amount; } bool Track::is_send_muted(int p_send) const { ERR_FAIL_INDEX_V(p_send, sends.size(), false); return sends[p_send].mute; } int Track::get_send_count() { return sends.size(); } void Track::remove_send(int p_send) { _AUDIO_LOCK_ ERR_FAIL_INDEX(p_send, sends.size()); sends.remove(p_send); } void Track::swap_sends(int p_send, int p_with_send) { _AUDIO_LOCK_ ERR_FAIL_INDEX(p_send, sends.size()); ERR_FAIL_INDEX(p_with_send, sends.size()); SWAP(sends[p_send], sends[p_with_send]); } bool Track::has_send(int p_send) const { for (int i = 0; i < sends.size(); i++) { if (sends[i].track == p_send) { return true; } } return false; } void Track::set_process_buffer_size(int p_frames) { _AUDIO_LOCK_ if (input_buffer.size() == p_frames) { return; } input_buffer.resize(p_frames); process_buffer.resize(p_frames); process_buffer2.resize(p_frames); for (int i = 0; i < effects.size(); i++) { effects[i]->set_process_block_size(p_frames); } } void Track::set_sampling_rate(int p_hz) { if (sampling_rate == p_hz) { return; } sampling_rate = p_hz; for (int i = 0; i < effects.size(); i++) { effects[i]->set_sampling_rate(p_hz); } } Tick Track::_get_swinged_tick(Tick p_tick, int p_swing_divisor, float p_swing) { if (p_swing_divisor < 0 || p_swing_divisor >= Song::SWING_BEAT_DIVISOR_MAX) { return p_tick; } static const int divisors[Song::SWING_BEAT_DIVISOR_MAX] = { 1, 2, 3, 4, 6, 8 }; Tick tick_frac_size = TICKS_PER_BEAT / divisors[p_swing_divisor]; Tick tick_frac = p_tick % tick_frac_size; Tick tick_debased = p_tick - tick_frac; Tick swing_split = int(((1.0 + p_swing) * (double)tick_frac_size) / 2.0); if (tick_frac <= swing_split) { tick_frac = tick_frac * (tick_frac_size / 2) / swing_split; } else { tick_frac = tick_frac_size - tick_frac; 
//invert Tick diff = tick_frac_size - swing_split; if (diff == 0) tick_frac = tick_frac_size; else tick_frac = tick_frac * (tick_frac_size / 2) / diff; tick_frac = tick_frac_size - tick_frac; //revert } return tick_frac + tick_debased; } void Track::add_single_event(const AudioEffect::Event &p_event) { if (event_buffer_size == EVENT_BUFFER_MAX) { return; } event_buffer[event_buffer_size] = p_event; event_buffer_size++; } void Track::process_events(int p_pattern, Tick p_offset, Tick p_from_tick, Tick p_to_tick, int p_bpm, int p_swing_divisor, float p_swing, int p_from, int p_to) { if (event_buffer_size == EVENT_BUFFER_MAX) { return; } if (p_offset == 0) { //add the bpm event_buffer[event_buffer_size].type = AudioEffect::Event::TYPE_BPM; event_buffer[event_buffer_size].param8 = p_bpm; event_buffer[event_buffer_size].paramf = 0; event_buffer_size++; } p_from_tick = _get_swinged_tick(p_from_tick, p_swing_divisor, p_swing); p_to_tick = _get_swinged_tick(p_to_tick, p_swing_divisor, p_swing); int first, count; get_notes_in_range(p_pattern, p_from_tick, p_to_tick, first, count); double tick_to_frame = ((60.0 / double(p_bpm)) / double(TICKS_PER_BEAT)) * double(sampling_rate); for (int i = 0; i < count; i++) { if (event_buffer_size == EVENT_BUFFER_MAX) { return; } Note note = get_note_by_index(p_pattern, first + i); Pos pos = get_note_pos_by_index(p_pattern, first + i); if (pos.column >= note_columns) { continue; } if (p_from != -1 && p_from > pos.column) { continue; } if (p_to != -1 && p_to < pos.column) { continue; } int sample_offset = int(double(p_offset + (pos.tick - p_from_tick)) * tick_to_frame); if (note.note != Note::EMPTY && column_state[pos.column] != Note::EMPTY) { //note or note off, and note was playing in column: must turn off existing note event_buffer[event_buffer_size].type = AudioEffect::Event::TYPE_NOTE_OFF; event_buffer[event_buffer_size].param8 = column_state[pos.column]; if (note.volume == Note::EMPTY || note.note == Note::OFF) { //new note, note-off 
quickly event_buffer[event_buffer_size].paramf = 1.0; } else { event_buffer[event_buffer_size].paramf = note.volume / 99.0; // map 0 .. 1 } event_buffer[event_buffer_size].offset = sample_offset; event_buffer_size++; column_state[pos.column] = Note::EMPTY; //clear } if (note.note <= Note::MAX_NOTE) { //note on event_buffer[event_buffer_size].type = AudioEffect::Event::TYPE_NOTE; event_buffer[event_buffer_size].param8 = note.note; if (note.volume == Note::EMPTY) { event_buffer[event_buffer_size].paramf = 1.0; } else { event_buffer[event_buffer_size].paramf = note.volume / 99.0; // map 0 .. 1 } event_buffer[event_buffer_size].offset = sample_offset; column_state[pos.column] = note.note; //clear event_buffer_size++; } else if (note.note == Note::EMPTY && note.volume != Note::EMPTY && column_state[pos.column] != Note::EMPTY) { //send volume change alone (may be supported via midi note pressure for regular MIDI) event_buffer[event_buffer_size].type = AudioEffect::Event::TYPE_AFTERTOUCH; event_buffer[event_buffer_size].param8 = column_state[pos.column]; event_buffer[event_buffer_size].paramf = note.volume / 99.0; // map 0 .. 1 event_buffer[event_buffer_size].offset = sample_offset; event_buffer_size++; } } int from_ofs = column_state.size(); //process events get_commands_in_range(p_pattern, p_from_tick, p_to_tick, first, count); for (int i = 0; i < count; i++) { Command command = get_command_by_index(p_pattern, first + i); if (command.command == Command::EMPTY) { //well, do nothing. continue; } Pos pos = get_command_pos_by_index(p_pattern, first + i); if (pos.column >= command_columns) { //may be hidden continue; } if (p_from != -1 && p_from > pos.column + from_ofs) { continue; } if (p_to != -1 && p_to < pos.column + from_ofs) { continue; } //go on a quest to find the control ports and send commands, this should be optimized, though not as many commands //are processed, so its not too bad. 
float param_value = float(command.parameter) / float(Command::MAX_PARAM); for (int j = 0; j < effects.size(); j++) { AudioEffect *fx = effects[j]; for (int k = 0; k < fx->get_control_port_count(); k++) { ControlPort *control_port = fx->get_control_port(k); if (control_port->get_command() == command.command) { control_port->set_normalized(param_value); } } } } from_ofs += command_columns; for (int j = 0; j < automations.size(); j++) { if (p_from != -1 && p_from > (j + from_ofs)) { continue; } if (p_to != -1 && p_to < (j + from_ofs)) { continue; } Automation *a = automations[j]; if (a->get_edit_mode() == Automation::EDIT_ROWS_DISCRETE) { //set the row, without interpolation a->get_points_in_range(p_pattern, p_from_tick, p_to_tick, first, count); for (int i = 0; i < count; i++) { int value = a->get_point_by_index(p_pattern, first + i); float valuef = float(value) / 99.0; a->get_control_port()->set_normalized(valuef); } } else { //interpolate float value = a->interpolate_offset(p_pattern, p_to_tick); if (value == -1) { //if to is empty, use the value in from value = a->interpolate_offset(p_pattern, p_from_tick); } if (value != -1) { //must be something in there a->get_control_port()->set_normalized(value); } } } } const AudioFrame *Track::process_audio_step() { //see if any of the effects uses the secondary input int effect_count = effects.size(); AudioEffect **effects_ptr = effect_count ? 
&effects[0] : NULL; bool has_side_input = false; for (int i = 0; i < effect_count; i++) { if (effects_ptr[i]->has_secondary_input() && !effects_ptr[i]->is_skipped()) { has_side_input = true; } } int buffer_len = input_buffer.size(); const AudioFrame *input_buffer_ptr = &input_buffer[0]; AudioFrame *process_buffer_src_ptr = &process_buffer[0]; AudioFrame *process_buffer_dst_ptr = &process_buffer2[0]; if (!has_side_input || muted) { //no side input, just copy input to process for (int i = 0; i < buffer_len; i++) { process_buffer_src_ptr[i] = input_buffer_ptr[i]; } } else { //has side input, which will eventually be used. //zero the buffer for (int i = 0; i < buffer_len; i++) { process_buffer_src_ptr[i] = AudioFrame(0, 0); } } if (!muted) { for (int i = 0; i < effect_count; i++) { if (effects_ptr[i]->is_skipped()) { continue; } if (effects_ptr[i]->has_secondary_input()) { effects_ptr[i]->process_with_secondary(event_buffer, event_buffer_size, process_buffer_src_ptr, input_buffer_ptr, process_buffer_dst_ptr, first_mix); } else { effects_ptr[i]->process(event_buffer, event_buffer_size, process_buffer_src_ptr, process_buffer_dst_ptr, first_mix); } SWAP(process_buffer_src_ptr, process_buffer_dst_ptr); } } //apply volume { float volume = db2linear(mix_volume); for (int i = 0; i < buffer_len; i++) { process_buffer_src_ptr[i] *= volume; float energy_l = ABS(process_buffer_src_ptr[i].l); if (energy_l > peak_volume_l) { peak_volume_l = energy_l; } float energy_r = ABS(process_buffer_src_ptr[i].r); if (energy_r > peak_volume_r) { peak_volume_r = energy_r; } } } //clear mix and event count first_mix = false; event_buffer_size = 0; return process_buffer_src_ptr; } void Track::stop() { for (int i = 0; i < effects.size(); i++) { AudioEffect *fx = effects[i]; fx->reset(); } for (int i = 0; i < column_state.size(); i++) { column_state[i] = Note::EMPTY; } event_buffer_size = 0; first_mix = true; } void Track::automations_pre_play_capture() { for (int i = 0; i < automations.size(); 
i++) { automations[i]->pre_play_capture(); } } void Track::automations_pre_play_restore() { for (int i = 0; i < automations.size(); i++) { automations[i]->pre_play_restore(); } } Track::Track() { name = "New Track"; note_columns = 1; command_columns = 0; sampling_rate = 44100; mix_volume = 0; event_buffer_size = 0; muted = false; first_mix = true; column_state.resize(1); column_state[0] = Note::EMPTY; peak_volume_l = peak_volume_r = -900; } Track::~Track() { for (int i = 0; i < effects.size(); i++) { delete effects[i]; } for (int i = 0; i < automations.size(); i++) { delete automations[i]; } for (int i = 0; i < disabled_automations.size(); i++) { delete disabled_automations[i]; } } zytrax-master/engine/track.h000066400000000000000000000207231347722000700164400ustar00rootroot00000000000000#ifndef TRACK_H #define TRACK_H #include "audio_effect.h" #include "audio_lock.h" #include "list.h" #include "map.h" #include "value_stream.h" #include "vector.h" #define TICKS_PER_BEAT 192 typedef uint64_t Tick; class Automation { public: enum EditMode { EDIT_ROWS_DISCRETE, EDIT_ENVELOPE_SMALL, EDIT_ENVELOPE_LARGE }; enum { VALUE_MAX = 99, EMPTY = 255 }; private: AudioEffect *owner; ControlPort *port; EditMode display_mode; Map > data; bool has_pre_play_capture; float pre_play_capture_value; static void _ui_changed_callbacks(void *p_ud); void _ui_changed_callback(); public: void set_point(int p_pattern, Tick p_offset, uint8_t p_value); bool has_point(int p_pattern, Tick p_offset) const; uint8_t get_point(int p_pattern, Tick p_offset) const; void remove_point(int p_pattern, Tick p_offset); Tick get_point_tick_by_index(int p_pattern, int p_index) const; uint8_t get_point_by_index(int p_pattern, int p_index) const; int get_point_count(int p_pattern) const; void get_points_in_range(int p_pattern, Tick p_from, Tick p_to, int &r_first, int &r_count) const; float interpolate_offset(int p_pattern, Tick p_offset) const; ControlPort *get_control_port(); AudioEffect *get_owner(); void 
set_edit_mode(EditMode p_mode); EditMode get_edit_mode() const; void pre_play_capture(); void pre_play_restore(); bool is_empty() const; void add_notify(); void remove_notify(); Automation(ControlPort *p_port, AudioEffect *p_owner = NULL); }; class Track { public: struct Note { enum { EMPTY = 0xFF, OFF = 0xFE, MAX_VOLUME = 99, MAX_NOTE = 119 // 10 octaves }; uint8_t note; uint8_t volume; inline bool is_empty() const { return (note == EMPTY && volume == EMPTY); } bool operator==(Note p_note) const { return note == p_note.note && volume == p_note.volume; } Note(uint8_t p_note = EMPTY, uint8_t p_volume = EMPTY) { note = p_note; volume = p_volume; } }; struct Command { enum { EMPTY = 0xFF, MAX_PARAM = 99, }; uint8_t command; uint8_t parameter; inline bool is_empty() const { return (command == EMPTY && parameter == 0); } bool operator==(Command p_command) const { return command == p_command.command && parameter == p_command.parameter; } Command(uint8_t p_command = EMPTY, uint8_t p_parameter = 0) { command = p_command; parameter = p_parameter; } }; struct Pos { Tick tick; int column; bool operator<(const Pos &p_rval) const { return (tick == p_rval.tick) ? (column < p_rval.column) : (tick < p_rval.tick); } bool operator>(const Pos &p_rval) const { return (tick == p_rval.tick) ? (column > p_rval.column) : (tick > p_rval.tick); } bool operator==(const Pos &p_rval) const { return (tick == p_rval.tick) && (column == p_rval.column); } Pos(Tick p_tick = 0, int p_column = 0) { tick = p_tick; column = p_column; } }; //generic event? 
struct Event { enum Type { TYPE_NOTE, TYPE_COMMAND, TYPE_AUTOMATION }; Type type; uint8_t a; uint8_t b; operator uint8_t() const { // to automation if (type != TYPE_AUTOMATION) return Automation::EMPTY; else return a; } operator Note() const { if (type != TYPE_NOTE) return Note(); else { Note n; n.note = a; n.volume = b; return n; } } operator Command() const { if (type != TYPE_COMMAND) return Command(); else { Command c; c.command = a; c.parameter = b; return c; } } Event(const Note &p_note) { type = TYPE_NOTE; a = p_note.note; b = p_note.volume; } Event(const Command &p_command) { type = TYPE_COMMAND; a = p_command.command; b = p_command.parameter; } Event(const uint8_t p_autoval) { type = TYPE_AUTOMATION; a = p_autoval; b = 0; } static Event make_empty(Type p_type) { Event ev; ev.type = p_type; ev.a = Note::EMPTY; ev.b = ev.type == TYPE_COMMAND ? 0 : Note::EMPTY; return ev; } Event() { type = TYPE_NOTE; a = Note::EMPTY; b = Note::EMPTY; } }; struct PosNote { Pos pos; Note note; }; struct PosCommand { Pos pos; Command command; }; struct PosEvent { Pos pos; Event event; }; enum { SEND_SPEAKERS = -1, EVENT_BUFFER_MAX = 8192 }; private: Map > note_data; int note_columns; Vector column_state; Map > command_data; int command_columns; bool muted; Vector effects; Vector automations; Vector disabled_automations; struct Send { int track; float amount; bool mute; }; Vector sends; float mix_volume; String name; int sampling_rate; friend class Song; Vector input_buffer; Vector process_buffer; Vector process_buffer2; AudioEffect::Event event_buffer[EVENT_BUFFER_MAX]; int event_buffer_size; void process_events(int p_pattern, Tick p_offset, Tick p_from_tick, Tick p_to_tick, int p_bpm, int p_swing_divisor, float p_swing, int p_from = -1, int p_to = -1); void add_single_event(const AudioEffect::Event &p_event); const AudioFrame *process_audio_step(); bool first_mix; mutable float peak_volume_l; mutable float peak_volume_r; _FORCE_INLINE_ Tick _get_swinged_tick(Tick p_tick, int 
p_swing_divisor, float p_swing); public: void set_name(String p_name); String get_name() const; /* audio effects */ int get_audio_effect_count() const; void add_audio_effect(AudioEffect *p_effect, int p_pos = -1); void remove_audio_effect(int p_pos); AudioEffect *get_audio_effect(int p_pos); void swap_audio_effects(int p_effect, int p_with_effect); /* automations */ int get_automation_count() const; void add_automation(Automation *p_automation, int p_pos = -1); void remove_automation(int p_pos); Automation *get_automation(int p_pos) const; void swap_automations(int p_which, int p_by_which); /* disabled automations (user may want to keep it, even if it was disabled)*/ int get_disabled_automation_count() const; void add_disabled_automation(Automation *p_automation, int p_pos = -1); void remove_disabled_automation(int p_pos); Automation *get_disabled_automation(int p_pos) const; /* notes */ void set_columns(int p_columns); int get_column_count() const; void set_note(int p_pattern, Pos p_pos, Note p_note); Note get_note(int p_pattern, Pos p_pos) const; int get_note_count(int p_pattern) const; Note get_note_by_index(int p_pattern, int p_index) const; Pos get_note_pos_by_index(int p_pattern, int p_index) const; void get_notes_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, int &r_first, int &r_count) const; void get_notes_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, List *r_notes) const; void set_command_columns(int p_columns); int get_command_column_count() const; void set_command(int p_pattern, Pos p_pos, Command p_command); Command get_command(int p_pattern, Pos p_pos) const; int get_command_count(int p_pattern) const; Command get_command_by_index(int p_pattern, int p_index) const; Pos get_command_pos_by_index(int p_pattern, int p_index) const; void get_commands_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, int &r_first, int &r_count) const; void get_commands_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, List 
*r_commands) const; void set_muted(bool p_mute); bool is_muted() const; int get_event_column_count() const; Track::Event::Type get_event_column_type(int p_column) const; void set_event(int p_pattern, int p_column, Tick p_pos, const Event &p_event); Event get_event(int p_pattern, int p_column, Tick p_pos) const; void get_events_in_range(int p_pattern, const Pos &p_from, const Pos &p_to, List *r_events) const; void set_mix_volume_db(float p_db); float get_mix_volume_db() const; float get_peak_volume_db_l() const; float get_peak_volume_db_r() const; void add_send(int p_track, int p_pos = -1); void set_send_amount(int p_send, float p_amount); void set_send_track(int p_send, int p_track); void set_send_mute(int p_send, bool p_mute); int get_send_track(int p_send) const; float get_send_amount(int p_send) const; bool is_send_muted(int p_send) const; int get_send_count(); void remove_send(int p_send); void swap_sends(int p_send, int p_with_send); bool has_send(int p_send) const; void set_process_buffer_size(int p_frames); void set_sampling_rate(int p_hz); void stop(); void automations_pre_play_capture(); void automations_pre_play_restore(); Track(); ~Track(); }; #endif // TRACK_H zytrax-master/engine/undo_redo.cpp000066400000000000000000000061761347722000700176530ustar00rootroot00000000000000// // C++ Implementation: undo_redo // // Description: // // // Author: Juan Linietsky , (C) 2008 // // Copyright: See COPYING file that comes with this distribution // // #include "undo_redo.h" void UndoRedo::_delete_group(Group *p_group, bool p_do, bool p_undo) { for (List::Element *E = p_group->undo_method_list.front(); E; E = E->next()) { delete E->get(); } for (List::Element *E = p_group->do_method_list.front(); E; E = E->next()) { delete E->get(); } if (p_undo) { for (List::Element *E = p_group->undo_data.front(); E; E = E->next()) { E->get()->free(); } } if (p_do) { for (List::Element *E = p_group->do_data.front(); E; E = E->next()) { E->get()->free(); } } delete p_group; } void 
UndoRedo::begin_action(String p_name, bool p_mergeable) {
	if (group_rc > 0) {
		group_rc++;
		return;
	}
	if (group_list.size() > current_group) {
		//delete redo history
		for (int i = current_group; i < group_list.size(); i++) {
			_delete_group(group_list[i], true, false);
		}
		group_list.resize(current_group);
	}
	Group *g = new Group;
	g->name = p_name;
	group_list.push_back(g);
	g->merge_backward = false;
	g->merge_forward = false;
	if (p_mergeable && current_group > 0 && group_list[current_group - 1]->name == p_name) {
		//mergeable command: append to the previous group
		group_list[current_group - 1]->merge_forward = true;
		g->merge_backward = true;
	}
	group_rc++;
	if (action_callback) {
		action_callback(group_list[current_group]->name, action_callback_userdata);
	}
}

void UndoRedo::commit_action() {
	ERR_FAIL_COND(group_rc <= 0);
	group_rc--;
	if (group_rc > 0) {
		return;
	}
	redo();
}

void UndoRedo::undo() {
	if (current_group > group_list.size() || current_group == 0)
		return;
	do {
		current_group--;
		for (List<CommandBase *>::Element *E = group_list[current_group]->undo_method_list.front(); E; E = E->next()) {
			E->get()->call();
		}
		if (action_callback) {
			action_callback(group_list[current_group]->name, action_callback_userdata);
		}
	} while (current_group > 0 && group_list[current_group]->merge_backward);
}

void UndoRedo::redo() {
	if (current_group >= group_list.size())
		return;
	do {
		for (List<CommandBase *>::Element *E = group_list[current_group]->do_method_list.front(); E; E = E->next()) {
			E->get()->call();
		}
		current_group++;
		if (action_callback) {
			action_callback(group_list[current_group - 1]->name, action_callback_userdata);
		}
	} while (group_list[current_group - 1]->merge_forward);
}

int UndoRedo::get_current_version() {
	return current_group;
}

void UndoRedo::clean() {
	for (int i = 0; i < group_list.size(); i++) {
		_delete_group(group_list[i], false, true);
	}
	group_list.clear();
	current_group = 0;
	group_rc = 0;
}

void UndoRedo::set_action_callback(ActionCallback p_action_callback, void
*p_action_callback_userdata) { action_callback = p_action_callback; action_callback_userdata = p_action_callback_userdata; } UndoRedo::UndoRedo() { current_group = 0; group_rc = 0; action_callback = NULL; action_callback_userdata = NULL; } UndoRedo::~UndoRedo() { clean(); } zytrax-master/engine/undo_redo.h000066400000000000000000000174511347722000700173160ustar00rootroot00000000000000// // C++ Interface: undo_redo // // Description: // // // Author: Juan Linietsky , (C) 2008 // // Copyright: See COPYING file that comes with this distribution // // #ifndef UNDO_REDO_H #define UNDO_REDO_H #include "list.h" #include "rstring.h" #include "vector.h" //#include "simple_type.h" /** @author Juan Linietsky */ class UndoRedo { public: typedef void (*ActionCallback)(const String &, void *); protected: struct CommandDataBase { virtual ~CommandDataBase() {} }; template struct CommandData : public CommandDataBase { T *data; ~CommandData() { delete data; } CommandData(T *p_data) { data = p_data; } }; struct CommandBase { List command_data; public: template CommandBase *with_data(T *p_data) { command_data.push_back(new CommandData(p_data)); return this; } virtual void call() = 0; virtual ~CommandBase() {} }; template struct Command0 : public CommandBase { typedef void (T::*Method)(); T *instance; Method method; virtual void call() { (instance->*method)(); } Command0(T *p_instance, Method p_method) { instance = p_instance; method = p_method; } }; template struct Command1 : public CommandBase { typedef void (T::*Method)(P1); T *instance; Method method; P1 p1; virtual void call() { (instance->*method)(p1); } Command1(T *p_instance, Method p_method, P1 p_p1) { instance = p_instance; method = p_method; p1 = p_p1; } }; /**/ template struct Command2 : public CommandBase { typedef void (T::*Method)(P1, P2); T *instance; Method method; P1 p1; P2 p2; virtual void call() { (instance->*method)(p1, p2); } Command2(T *p_instance, Method p_method, P1 p_p1, P2 p_p2) { instance = p_instance; 
method = p_method; p1 = p_p1; p2 = p_p2; } }; /**/ template struct Command3 : public CommandBase { typedef void (T::*Method)(P1, P2, P3); T *instance; Method method; P1 p1; P2 p2; P3 p3; virtual void call() { (instance->*method)(p1, p2, p3); } Command3(T *p_instance, Method p_method, P1 p_p1, P2 p_p2, P3 p_p3) { instance = p_instance; method = p_method; p1 = p_p1; p2 = p_p2; p3 = p_p3; } }; /**/ template struct Command4 : public CommandBase { typedef void (T::*Method)(P1, P2, P3, P4); T *instance; Method method; P1 p1; P2 p2; P3 p3; P4 p4; virtual void call() { (instance->*method)(p1, p2, p3, p4); } Command4(T *p_instance, Method p_method, P1 p_p1, P2 p_p2, P3 p_p3, P4 p_p4) { instance = p_instance; method = p_method; p1 = p_p1; p2 = p_p2; p3 = p_p3; p4 = p_p4; } }; /**/ template struct Command5 : public CommandBase { typedef void (T::*Method)(P1, P2, P3, P4, P5); T *instance; Method method; P1 p1; P2 p2; P3 p3; P4 p4; P5 p5; virtual void call() { (instance->*method)(p1, p2, p3, p4, p5); } Command5(T *p_instance, Method p_method, P1 p_p1, P2 p_p2, P3 p_p3, P4 p_p4, P5 p_p5) { instance = p_instance; method = p_method; p1 = p_p1; p2 = p_p2; p3 = p_p3; p4 = p_p4; p5 = p_p5; } }; /* methods */ template Command1 *command(T *p_instance, M p_method, P1 p1) { return new Command1(p_instance, p_method, p1); } template Command2 *command(T *p_instance, M p_method, P1 p1, P2 p2) { return new Command2(p_instance, p_method, p1, p2); } template Command3 *command(T *p_instance, M p_method, P1 p1, P2 p2, P3 p3) { return new Command3(p_instance, p_method, p1, p2, p3); } template Command4 *command(T *p_instance, M p_method, P1 p1, P2 p2, P3 p3, P4 p4) { return new Command4(p_instance, p_method, p1, p2, p3, p4); } template Command5 *command(T *p_instance, M p_method, P1 p1, P2 p2, P3 p3, P4 p4, P5 p5) { return new Command5(p_instance, p_method, p1, p2, p3, p4, p5); } /*****/ class Data { public: virtual void free() = 0; virtual ~Data() {} }; template class DataPtr : public Data { T 
*ptr; public: virtual void free() { delete ptr; } DataPtr(T *p_ptr) { ptr = p_ptr; } ~DataPtr() {} }; private: struct Group { List do_method_list; List undo_method_list; List do_data; List undo_data; String name; bool merge_forward; bool merge_backward; }; void _delete_group(Group *p_group, bool p_do, bool p_undo); Vector group_list; int current_group; int group_rc; ActionCallback action_callback; void *action_callback_userdata; public: void begin_action(String p_name, bool p_mergeable = false); template void do_method(T *p_instance, M p_method) { group_list[current_group]->do_method_list.push_back(new Command0(p_instance, p_method)); } template void do_method(T *p_instance, M p_method, P1 p1) { group_list[current_group]->do_method_list.push_back(new Command1(p_instance, p_method, p1)); } template void do_method(T *p_instance, M p_method, P1 p1, P2 p2) { group_list[current_group]->do_method_list.push_back(new Command2(p_instance, p_method, p1, p2)); } template void do_method(T *p_instance, M p_method, P1 p1, P2 p2, P3 p3) { group_list[current_group]->do_method_list.push_back(new Command3(p_instance, p_method, p1, p2, p3)); } template void do_method(T *p_instance, M p_method, P1 p1, P2 p2, P3 p3, P4 p4) { group_list[current_group]->do_method_list.push_back(new Command4(p_instance, p_method, p1, p2, p3, p4)); } template void undo_method(T *p_instance, M p_method) { group_list[current_group]->undo_method_list.push_back(new Command0(p_instance, p_method)); } template void undo_method(T *p_instance, M p_method, P1 p1) { group_list[current_group]->undo_method_list.push_back(new Command1(p_instance, p_method, p1)); } template void undo_method(T *p_instance, M p_method, P1 p1, P2 p2) { group_list[current_group]->undo_method_list.push_back(new Command2(p_instance, p_method, p1, p2)); } template void undo_method(T *p_instance, M p_method, P1 p1, P2 p2, P3 p3) { group_list[current_group]->undo_method_list.push_back(new Command3(p_instance, p_method, p1, p2, p3)); } template 
void undo_method(T *p_instance, M p_method, P1 p1, P2 p2, P3 p3, P4 p4) { group_list[current_group]->undo_method_list.push_back(new Command4(p_instance, p_method, p1, p2, p3, p4)); } template void do_data(T *p_data) { group_list[current_group]->do_data.push_back(new DataPtr(p_data)); } template void undo_data(T *p_data) { group_list[current_group]->undo_data.push_back(new DataPtr(p_data)); } void commit_action(); void undo(); void redo(); void clean(); int get_current_version(); void set_action_callback(ActionCallback p_action_callback, void *p_action_callback_userdata); UndoRedo(); ~UndoRedo(); }; #endif zytrax-master/globals/000077500000000000000000000000001347722000700153355ustar00rootroot00000000000000zytrax-master/globals/SCsub000066400000000000000000000001721347722000700162770ustar00rootroot00000000000000Import('env'); Export('env'); targets=[] env.add_sources(targets,"*.cpp") env.libs += env.Library('globals', targets); zytrax-master/globals/base64.cpp000066400000000000000000000050631347722000700171310ustar00rootroot00000000000000#include "base64.h" #include static const std::string base64_chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "abcdefghijklmnopqrstuvwxyz" "0123456789+/"; static inline bool is_base64(uint8_t c) { return (isalnum(c) || (c == '+') || (c == '/')); } std::string base64_encode(const Vector &p_buffer) { const uint8_t *buf = &p_buffer[0]; int bufLen = p_buffer.size(); std::string ret; int i = 0; int j = 0; uint8_t char_array_3[3]; uint8_t char_array_4[4]; while (bufLen--) { char_array_3[i++] = *(buf++); if (i == 3) { char_array_4[0] = (char_array_3[0] & 0xfc) >> 2; char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4); char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6); char_array_4[3] = char_array_3[2] & 0x3f; for (i = 0; (i < 4); i++) ret += base64_chars[char_array_4[i]]; i = 0; } } if (i) { for (j = i; j < 3; j++) char_array_3[j] = '\0'; char_array_4[0] = (char_array_3[0] & 0xfc) >> 2; 
		char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
		char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
		char_array_4[3] = char_array_3[2] & 0x3f;
		for (j = 0; (j < i + 1); j++)
			ret += base64_chars[char_array_4[j]];
		while ((i++ < 3))
			ret += '=';
	}
	return ret;
}

Vector<uint8_t> base64_decode(std::string const &encoded_string) {
	int in_len = encoded_string.size();
	int i = 0;
	int j = 0;
	int in_ = 0;
	uint8_t char_array_4[4], char_array_3[3];
	Vector<uint8_t> ret;

	while (in_len-- && (encoded_string[in_] != '=') && is_base64(encoded_string[in_])) {
		char_array_4[i++] = encoded_string[in_];
		in_++;
		if (i == 4) {
			for (i = 0; i < 4; i++)
				char_array_4[i] = base64_chars.find(char_array_4[i]);
			char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
			char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
			char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];
			for (i = 0; (i < 3); i++)
				ret.push_back(char_array_3[i]);
			i = 0;
		}
	}

	if (i) {
		for (j = i; j < 4; j++)
			char_array_4[j] = 0;
		for (j = 0; j < 4; j++)
			char_array_4[j] = base64_chars.find(char_array_4[j]);
		char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
		char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
		char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];
		for (j = 0; (j < i - 1); j++)
			ret.push_back(char_array_3[j]);
	}
	return ret;
}

zytrax-master/globals/base64.h

#ifndef _BASE64_H_
#define _BASE64_H_

#include "vector.h"
#include <string> // the original include target was lost in extraction

std::string base64_encode(const Vector<uint8_t> &p_buffer);
Vector<uint8_t> base64_decode(std::string const &);

#endif // _BASE64_H_

zytrax-master/globals/config.h

//
// C++ Interface: config
//
// Description:
//
//
// Author: Juan Linietsky , (C) 2004
//
// Copyright: See COPYING file that comes with this distribution
//
//

#ifndef CONFIG_H
#define CONFIG_H

// (two #include directives here had their targets lost in extraction)

#ifndef MIN
#define MIN(m_left, m_right) ((m_left) < (m_right) ? (m_left) : (m_right))
#endif

#ifndef MAX
#define MAX(m_left, m_right) ((m_left) > (m_right) ? (m_left) : (m_right))
#endif

#endif /* CONFIG_H */

zytrax-master/globals/error_list.h

#ifndef ERROR_LIST_H
#define ERROR_LIST_H

/** Error List. Please never compare an error against FAILED.
 * Either do result != OK, or !result. This way, Error fail
 * values can be more detailed in the future.
 *
 * This is a generic error list, mainly for organizing a language of returning errors.
 */
enum Error {
	OK,
	FAILED, ///< Generic fail error
	ERR_UNAVAILABLE, ///< What is requested is unsupported/unavailable
	ERR_UNCONFIGURED, ///< The object being used hasn't been properly set up yet
	ERR_UNAUTHORIZED, ///< Missing credentials for requested resource
	ERR_PARAMETER_RANGE_ERROR, ///< Parameter given out of range
	ERR_OUT_OF_MEMORY, ///< Out of memory
	ERR_FILE_NOT_FOUND,
	ERR_FILE_BAD_DRIVE,
	ERR_FILE_BAD_PATH,
	ERR_FILE_NO_PERMISSION,
	ERR_FILE_ALREADY_IN_USE,
	ERR_FILE_CANT_OPEN,
	ERR_FILE_CANT_WRITE,
	ERR_FILE_CANT_READ,
	ERR_FILE_UNRECOGNIZED,
	ERR_FILE_TOO_NEW,
	ERR_FILE_CORRUPT,
	ERR_FILE_EOF,
	ERR_CANT_OPEN, ///< Can't open a resource/socket/file
	ERROR_QUERY_FAILED,
	ERR_ALREADY_IN_USE,
	ERR_LOCKED, ///< resource is locked
	ERR_TIMEOUT,
	ERR_CANT_AQUIRE_RESOURCE,
	ERR_INVALID_DATA, ///< Data passed is invalid
	ERR_INVALID_PARAMETER, ///< Parameter passed is invalid
	ERR_ALREADY_EXISTS, ///< When adding, item already exists
	ERR_DOES_NOT_EXIST, ///< When retrieving/erasing, the item does not exist
	ERR_DATABASE_CANT_READ, ///< database can't be read
	ERR_DATABASE_CANT_WRITE, ///< database can't be written
	ERR_COMPILATION_FAILED,
	ERR_LINK_FAILED,
	ERR_VERSION_MISMATCH,
	ERR_BUG, ///< a bug in the software certainly happened, due to a double check failing or unexpected behavior.
	ERR_OMFG_THIS_IS_VERY_VERY_BAD, ///< shit happens, has never been used, though
	ERR_WTF = ERR_OMFG_THIS_IS_VERY_VERY_BAD ///< short version of the above
};

#endif // ERROR_LIST_H

zytrax-master/globals/error_macros.cpp

#include "error_macros.h"

void _error_break_function() {
	//just to break
}

zytrax-master/globals/error_macros.h

#ifndef ERROR_MACROS_H
#define ERROR_MACROS_H

#include <iostream> // for std::cout; the original include target was lost in extraction
#include <stdlib.h> // for getenv/abort; the original include target was lost in extraction

#if 0

#ifdef WINDOWS_ENABLED
#define RAISE_SIGNAL
#endif

#ifdef POSIX_ENABLED
#define RAISE_SIGNAL \
	if (getenv("ABORT_ON_ERROR")) abort();
#endif

#endif

#define RAISE_SIGNAL

void _error_break_function();

#define ERR_FAIL_INDEX(m_index, m_size) \
	{ \
		if ((m_index) < 0 || (m_index) >= (m_size)) { \
			std::cout << " *** ERROR *** " << __FILE__ << ":" << __LINE__ << " - " \
					  << "index out of size: " << m_index << "(" << m_size << ")" << std::endl; \
			_error_break_function(); \
			RAISE_SIGNAL return; \
		} \
	}

#define ERR_FAIL_INDEX_V(m_index, m_size, m_retval) \
	{ \
		if ((m_index) < 0 || (m_index) >= (m_size)) { \
			std::cout << " *** ERROR *** " << __FILE__ << ":" << __LINE__ << " - " \
					  << "index out of size: " << m_index << "(" << m_size << ")" << std::endl; \
			_error_break_function(); \
			RAISE_SIGNAL return (m_retval); \
		} \
	}

#define ERR_FAIL_COND(m_cond) \
	{ \
		if (m_cond) { \
			std::cout << " *** ERROR *** " << __FILE__ << ":" << __LINE__ << " - " << #m_cond " failed." << std::endl; \
			_error_break_function(); \
			RAISE_SIGNAL return; \
		} \
	}

#define ERR_FAIL_COND_V(m_cond, m_retval) \
	{ \
		if (m_cond) { \
			std::cout << " *** ERROR *** " << __FILE__ << ":" << __LINE__ << " - " << #m_cond " failed." << std::endl; \
			_error_break_function(); \
			RAISE_SIGNAL return m_retval; \
		} \
	}

#define ERR_CONTINUE(m_cond) \
	{ \
		if (m_cond) { \
			std::cout << " *** ERROR *** " << __FILE__ << ":" << __LINE__ << " - " << #m_cond " failed." << std::endl; \
			_error_break_function(); \
			RAISE_SIGNAL continue; \
		} \
	}

#define ERR_PRINT(m_string) \
	{ \
		std::cout << " *** ERROR *** " << __FILE__ << ":" << __LINE__ << " - " << m_string << std::endl; \
		_error_break_function(); \
		RAISE_SIGNAL \
	}

#define WARN_PRINT(m_string) \
	{ \
		std::cout << " *** WARNING *** " << __FILE__ << ":" << __LINE__ << " - " << m_string << std::endl; \
		RAISE_SIGNAL \
	}

#endif // ERROR_MACROS_H

zytrax-master/globals/json.cpp

/*
Copyright (c) 2015 Johannes Häggqvist

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/ #ifdef JZON_DLL #if defined _WIN32 || defined __CYGWIN__ #define JZON_API __declspec(dllexport) #define JZON_STL_EXTERN #endif #endif #include "json.h" #include #include #include #include #include namespace JSON { namespace { inline bool isWhitespace(char c) { return (c == '\n' || c == ' ' || c == '\t' || c == '\r' || c == '\f'); } const char charsUnescaped[] = { '\\', '/', '\"', '\n', '\t', '\b', '\f', '\r' }; const char *charsEscaped[] = { "\\\\", "\\/", "\\\"", "\\n", "\\t", "\\b", "\\f", "\\r" }; const unsigned int numEscapeChars = 8; const char nullUnescaped = '\0'; const char *nullEscaped = "\0\0"; const char *getEscaped(const char c) { for (unsigned int i = 0; i < numEscapeChars; ++i) { const char &ue = charsUnescaped[i]; if (c == ue) { return charsEscaped[i]; } } return nullEscaped; } char getUnescaped(const char c1, const char c2) { for (unsigned int i = 0; i < numEscapeChars; ++i) { const char *e = charsEscaped[i]; if (c1 == e[0] && c2 == e[1]) { return charsUnescaped[i]; } } return nullUnescaped; } } // namespace Node::Node() : data(NULL) { } Node::Node(Type type) : data(NULL) { if (type != T_INVALID) { data = new Data(type); } } Node::Node(const Node &other) : data(other.data) { if (data != NULL) { data->addRef(); } } Node::Node(Type type, const std::string &value) : data(new Data(T_NULL)) { set(type, value); } Node::Node(const std::string &value) : data(new Data(T_STRING)) { set(value); } Node::Node(const char *value) : data(new Data(T_STRING)) { set(value); } Node::Node(int value) : data(new Data(T_NUMBER)) { set(value); } Node::Node(unsigned int value) : data(new Data(T_NUMBER)) { set(value); } Node::Node(long long value) : data(new Data(T_NUMBER)) { set(value); } Node::Node(unsigned long long value) : data(new Data(T_NUMBER)) { set(value); } Node::Node(float value) : data(new Data(T_NUMBER)) { set(value); } Node::Node(double value) : data(new Data(T_NUMBER)) { set(value); } Node::Node(bool value) : data(new Data(T_BOOL)) { set(value); } 
Node::~Node() { if (data != NULL && data->release()) { delete data; data = NULL; } } void Node::detach() { if (data != NULL && data->refCount > 1) { Data *newData = new Data(*data); if (data->release()) { delete data; } data = newData; } } std::string Node::toString(const std::string &def) const { if (isValue()) { if (isNull()) { return std::string("null"); } else { return data->valueStr; } } else { return def; } } #define GET_NUMBER(T) \ if (isNumber()) { \ std::stringstream sstr(data->valueStr); \ T val; \ sstr >> val; \ return val; \ } else { \ return def; \ } int Node::toInt(int def) const { GET_NUMBER(int) } float Node::toFloat(float def) const { GET_NUMBER(float) } double Node::toDouble(double def) const { GET_NUMBER(double) } #undef GET_NUMBER bool Node::toBool(bool def) const { if (isBool()) { return (data->valueStr == "true"); } else { return def; } } void Node::setNull() { if (isValue()) { detach(); data->type = T_NULL; data->valueStr.clear(); } } void Node::set(Type type, const std::string &value) { if (isValue() && (type == T_NULL || type == T_STRING || type == T_NUMBER || type == T_BOOL)) { detach(); data->type = type; if (type == T_STRING) { data->valueStr = unescapeString(value); } else { data->valueStr = value; } } } void Node::set(const std::string &value) { if (isValue()) { detach(); data->type = T_STRING; data->valueStr = unescapeString(value); } } void Node::set(const char *value) { if (isValue()) { detach(); data->type = T_STRING; data->valueStr = unescapeString(std::string(value)); } } #define SET_NUMBER \ if (isValue()) { \ detach(); \ data->type = T_NUMBER; \ std::stringstream sstr; \ sstr << value; \ data->valueStr = sstr.str(); \ } void Node::set(int value) { SET_NUMBER } void Node::set(unsigned int value) { SET_NUMBER } void Node::set(long long value) { SET_NUMBER } void Node::set(unsigned long long value) { SET_NUMBER } void Node::set(float value) { SET_NUMBER } void Node::set(double value) { SET_NUMBER } #undef SET_NUMBER void 
Node::set(bool value) { if (isValue()) { detach(); data->type = T_BOOL; data->valueStr = (value ? "true" : "false"); } } Node &Node::operator=(const Node &rhs) { if (this != &rhs) { if (data != NULL && data->release()) { delete data; } data = rhs.data; if (data != NULL) { data->addRef(); } } return *this; } Node &Node::operator=(const std::string &rhs) { set(rhs); return *this; } Node &Node::operator=(const char *rhs) { set(rhs); return *this; } Node &Node::operator=(int rhs) { set(rhs); return *this; } Node &Node::operator=(unsigned int rhs) { set(rhs); return *this; } Node &Node::operator=(long long rhs) { set(rhs); return *this; } Node &Node::operator=(unsigned long long rhs) { set(rhs); return *this; } Node &Node::operator=(float rhs) { set(rhs); return *this; } Node &Node::operator=(double rhs) { set(rhs); return *this; } Node &Node::operator=(bool rhs) { set(rhs); return *this; } void Node::add(const Node &node) { if (isArray()) { detach(); data->children.push_back(std::make_pair(std::string(), node)); } } void Node::add(const std::string &name, const Node &node) { if (isObject()) { detach(); data->children.push_back(std::make_pair(name, node)); } } void Node::append(const Node &node) { if ((isObject() && node.isObject()) || (isArray() && node.isArray())) { detach(); data->children.insert(data->children.end(), node.data->children.begin(), node.data->children.end()); } } void Node::remove(size_t index) { if (isContainer() && index < data->children.size()) { detach(); NamedNodeList::iterator it = data->children.begin() + index; data->children.erase(it); } } void Node::remove(const std::string &name) { if (isObject()) { detach(); NamedNodeList &children = data->children; for (NamedNodeList::iterator it = children.begin(); it != children.end(); ++it) { if ((*it).first == name) { children.erase(it); break; } } } } void Node::clear() { if (data != NULL && !data->children.empty()) { detach(); data->children.clear(); } } bool Node::has(const std::string &name) const 
{ if (isObject()) { NamedNodeList &children = data->children; for (NamedNodeList::const_iterator it = children.begin(); it != children.end(); ++it) { if ((*it).first == name) { return true; } } } return false; } size_t Node::getCount() const { return data != NULL ? data->children.size() : 0; } Node Node::get(const std::string &name) const { if (isObject()) { NamedNodeList &children = data->children; for (NamedNodeList::const_iterator it = children.begin(); it != children.end(); ++it) { if ((*it).first == name) { return (*it).second; } } } return Node(T_INVALID); } Node Node::get(size_t index) const { if (isContainer() && index < data->children.size()) { return data->children.at(index).second; } return Node(T_INVALID); } Node::iterator Node::begin() { if (data != NULL && !data->children.empty()) return Node::iterator(&data->children.front()); else return Node::iterator(NULL); } Node::const_iterator Node::begin() const { if (data != NULL && !data->children.empty()) return Node::const_iterator(&data->children.front()); else return Node::const_iterator(NULL); } Node::iterator Node::end() { if (data != NULL && !data->children.empty()) return Node::iterator(&data->children.back() + 1); else return Node::iterator(NULL); } Node::const_iterator Node::end() const { if (data != NULL && !data->children.empty()) return Node::const_iterator(&data->children.back() + 1); else return Node::const_iterator(NULL); } bool Node::operator==(const Node &other) const { return ( (data == other.data) || (isValue() && (data->type == other.data->type) && (data->valueStr == other.data->valueStr))); } bool Node::operator!=(const Node &other) const { return !(*this == other); } Node::Data::Data(Type type) : refCount(1), type(type) { } Node::Data::Data(const Data &other) : refCount(1), type(other.type), valueStr(other.valueStr), children(other.children) { } Node::Data::~Data() { assert(refCount == 0); } void Node::Data::addRef() { ++refCount; } bool Node::Data::release() { return (--refCount == 
0); } std::string escapeString(const std::string &value) { std::string escaped; escaped.reserve(value.length()); for (std::string::const_iterator it = value.begin(); it != value.end(); ++it) { const char &c = (*it); const char *a = getEscaped(c); if (a[0] != '\0') { escaped += a[0]; escaped += a[1]; } else { escaped += c; } } return escaped; } std::string unescapeString(const std::string &value) { std::string unescaped; for (std::string::const_iterator it = value.begin(); it != value.end(); ++it) { const char c = (*it); char c2 = '\0'; if (it + 1 != value.end()) c2 = *(it + 1); const char a = getUnescaped(c, c2); if (a != '\0') { unescaped += a; if (it + 1 != value.end()) ++it; } else { unescaped += c; } } return unescaped; } Node invalid() { return Node(Node::T_INVALID); } Node null() { return Node(Node::T_NULL); } Node object() { return Node(Node::T_OBJECT); } Node array() { return Node(Node::T_ARRAY); } Writer::Writer(const Format &format) { setFormat(format); } Writer::~Writer() { } void Writer::setFormat(const Format &format) { this->format = format; indentationChar = (format.useTabs ? '\t' : ' '); spacing = (format.spacing ? " " : ""); newline = (format.newline ? 
"\n" : spacing); } void Writer::writeStream(const Node &node, std::ostream &stream) const { writeNode(node, 0, stream); } void Writer::writeString(const Node &node, std::string &json) const { std::ostringstream stream(json); writeStream(node, stream); json = stream.str(); } void Writer::writeFile(const Node &node, const std::string &filename) const { std::ofstream stream(filename.c_str(), std::ios::out | std::ios::trunc); writeStream(node, stream); } void Writer::writeNode(const Node &node, unsigned int level, std::ostream &stream) const { switch (node.getType()) { case Node::T_INVALID: break; case Node::T_OBJECT: writeObject(node, level, stream); break; case Node::T_ARRAY: writeArray(node, level, stream); break; case Node::T_NULL: // Fallthrough case Node::T_STRING: // Fallthrough case Node::T_NUMBER: // Fallthrough case Node::T_BOOL: writeValue(node, stream); break; } } void Writer::writeObject(const Node &node, unsigned int level, std::ostream &stream) const { stream << "{" << newline; for (Node::const_iterator it = node.begin(); it != node.end(); ++it) { const std::string &name = (*it).first; const Node &value = (*it).second; if (it != node.begin()) stream << "," << newline; stream << getIndentation(level + 1) << "\"" << name << "\"" << ":" << spacing; writeNode(value, level + 1, stream); } stream << newline << getIndentation(level) << "}"; } void Writer::writeArray(const Node &node, unsigned int level, std::ostream &stream) const { stream << "[" << newline; for (Node::const_iterator it = node.begin(); it != node.end(); ++it) { const Node &value = (*it).second; if (it != node.begin()) stream << "," << newline; stream << getIndentation(level + 1); writeNode(value, level + 1, stream); } stream << newline << getIndentation(level) << "]"; } void Writer::writeValue(const Node &node, std::ostream &stream) const { if (node.isString()) { stream << "\"" << escapeString(node.toString()) << "\""; } else { stream << node.toString(); } } std::string 
Writer::getIndentation(unsigned int level) const { if (!format.newline) { return ""; } else { return std::string(format.indentSize * level, indentationChar); } } Parser::Parser() { } Parser::~Parser() { } Node Parser::parseStream(std::istream &stream) { TokenQueue tokens; DataQueue data; tokenize(stream, tokens, data); Node node = assemble(tokens, data); return node; } Node Parser::parseString(const std::string &json) { std::istringstream stream(json); return parseStream(stream); } Node Parser::parseFile(const std::string &filename) { std::ifstream stream(filename.c_str(), std::ios::in); return parseStream(stream); } const std::string &Parser::getError() const { return error; } void Parser::tokenize(std::istream &stream, TokenQueue &tokens, DataQueue &data) { Token token = T_UNKNOWN; std::string valueBuffer; bool saveBuffer; char c = '\0'; while (stream.peek() != std::char_traits::eof()) { stream.get(c); if (isWhitespace(c)) continue; saveBuffer = true; switch (c) { case '{': { token = T_OBJ_BEGIN; break; } case '}': { token = T_OBJ_END; break; } case '[': { token = T_ARRAY_BEGIN; break; } case ']': { token = T_ARRAY_END; break; } case ',': { token = T_SEPARATOR_NODE; break; } case ':': { token = T_SEPARATOR_NAME; break; } case '"': { token = T_VALUE; readString(stream, data); break; } case '/': { char p = static_cast(stream.peek()); if (p == '*') { jumpToCommentEnd(stream); saveBuffer = false; break; } else if (p == '/') { jumpToNext('\n', stream); saveBuffer = false; break; } // Intentional fallthrough } default: { valueBuffer += c; saveBuffer = false; break; } } if ((saveBuffer || stream.peek() == std::char_traits::eof()) && (!valueBuffer.empty())) // Always save buffer on the last character { if (interpretValue(valueBuffer, data)) { tokens.push(T_VALUE); } else { // Store the unknown token, so we can show it to the user data.push(std::make_pair(Node::T_STRING, valueBuffer)); tokens.push(T_UNKNOWN); } valueBuffer.clear(); } // Push the token last so that any 
data // will get pushed first from above. // If saveBuffer is false, it means that // we are in the middle of a value, so we // don't want to push any tokens now. if (saveBuffer) { tokens.push(token); } } } Node Parser::assemble(TokenQueue &tokens, DataQueue &data) { std::stack nodeStack; Node root(Node::T_INVALID); std::string nextName = ""; Token token; while (!tokens.empty()) { token = tokens.front(); tokens.pop(); switch (token) { case T_UNKNOWN: { const std::string &unknownToken = data.front().second; error = "Unknown token: " + unknownToken; data.pop(); return Node(Node::T_INVALID); } case T_OBJ_BEGIN: { nodeStack.push(std::make_pair(nextName, object())); nextName.clear(); break; } case T_ARRAY_BEGIN: { nodeStack.push(std::make_pair(nextName, array())); nextName.clear(); break; } case T_OBJ_END: case T_ARRAY_END: { if (nodeStack.empty()) { error = "Found end of object or array without beginning"; return Node(Node::T_INVALID); } if (token == T_OBJ_END && !nodeStack.top().second.isObject()) { error = "Mismatched end and beginning of object"; return Node(Node::T_INVALID); } if (token == T_ARRAY_END && !nodeStack.top().second.isArray()) { error = "Mismatched end and beginning of array"; return Node(Node::T_INVALID); } std::string nodeName = nodeStack.top().first; Node node = nodeStack.top().second; nodeStack.pop(); if (!nodeStack.empty()) { Node &stackTop = nodeStack.top().second; if (stackTop.isObject()) { stackTop.add(nodeName, node); } else if (stackTop.isArray()) { stackTop.add(node); } else { error = "Can only add elements to objects and arrays"; return Node(Node::T_INVALID); } } else { root = node; } break; } case T_VALUE: { if (data.empty()) { error = "Missing data for value"; return Node(Node::T_INVALID); } const std::pair &dataPair = data.front(); if (!tokens.empty() && tokens.front() == T_SEPARATOR_NAME) { tokens.pop(); if (dataPair.first != Node::T_STRING) { error = "A name has to be a string"; return Node(Node::T_INVALID); } else { nextName = 
dataPair.second; data.pop(); } } else { Node node(dataPair.first, dataPair.second); data.pop(); if (!nodeStack.empty()) { Node &stackTop = nodeStack.top().second; if (stackTop.isObject()) stackTop.add(nextName, node); else if (stackTop.isArray()) stackTop.add(node); nextName.clear(); } else { error = "Outermost node must be an object or array"; return Node(Node::T_INVALID); } } break; } case T_SEPARATOR_NAME: break; case T_SEPARATOR_NODE: { if (!tokens.empty() && tokens.front() == T_ARRAY_END) { error = "Extra comma in array"; return Node(Node::T_INVALID); } break; } } } return root; } void Parser::jumpToNext(char c, std::istream &stream) { while (!stream.eof() && static_cast(stream.get()) != c) ; stream.unget(); } void Parser::jumpToCommentEnd(std::istream &stream) { stream.ignore(1); char c1 = '\0', c2 = '\0'; while (stream.peek() != std::char_traits::eof()) { stream.get(c2); if (c1 == '*' && c2 == '/') break; c1 = c2; } } void Parser::readString(std::istream &stream, DataQueue &data) { std::string str; char c1 = '\0', c2 = '\0'; while (stream.peek() != std::char_traits::eof()) { stream.get(c2); if (c1 != '\\' && c2 == '"') { break; } str += c2; c1 = c2; } data.push(std::make_pair(Node::T_STRING, str)); } bool Parser::interpretValue(const std::string &value, DataQueue &data) { std::string upperValue(value.size(), '\0'); std::transform(value.begin(), value.end(), upperValue.begin(), toupper); if (upperValue == "NULL") { data.push(std::make_pair(Node::T_NULL, std::string())); } else if (upperValue == "TRUE") { data.push(std::make_pair(Node::T_BOOL, std::string("true"))); } else if (upperValue == "FALSE") { data.push(std::make_pair(Node::T_BOOL, std::string("false"))); } else { bool number = true; bool negative = false; bool fraction = false; bool scientific = false; bool scientificSign = false; bool scientificNumber = false; for (std::string::const_iterator it = upperValue.begin(); number && it != upperValue.end(); ++it) { char c = (*it); switch (c) { case '-': { 
if (scientific) { if (scientificSign) // Only one - allowed after E number = false; else scientificSign = true; } else { if (negative) // Only one - allowed before E number = false; else negative = true; } break; } case '+': { if (!scientific || scientificSign) number = false; else scientificSign = true; break; } case '.': { if (fraction) // Only one . allowed number = false; else fraction = true; break; } case 'E': { if (scientific) number = false; else scientific = true; break; } default: { if (c >= '0' && c <= '9') { if (scientific) scientificNumber = true; } else { number = false; } break; } } } if (scientific && !scientificNumber) number = false; if (number) { data.push(std::make_pair(Node::T_NUMBER, value)); } else { return false; } } return true; } } // namespace JSON zytrax-master/globals/json.h000066400000000000000000000166321347722000700164670ustar00rootroot00000000000000/* Copyright (c) 2015 Johannes Häggqvist Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
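Parser::interpretValue above classifies a numeric literal with a single pass, tracking one flag per lexical part (leading sign, fraction dot, exponent, exponent sign, exponent digits). A standalone sketch of the same checks, as a hypothetical free function that is not part of the library (it assumes the input is already uppercased, as interpretValue does before the scan):

```cpp
#include <string>

// Sketch of the single-pass number scan used by Parser::interpretValue.
// Assumes the caller has uppercased the input, so the exponent marker is 'E'.
static bool isJsonNumber(const std::string &upperValue) {
	bool negative = false, fraction = false;
	bool scientific = false, sign = false, expDigit = false;
	for (std::string::size_type i = 0; i < upperValue.size(); ++i) {
		char c = upperValue[i];
		if (c == '-') {
			bool &seen = scientific ? sign : negative; // one '-' per part
			if (seen) return false;
			seen = true;
		} else if (c == '+') {
			if (!scientific || sign) return false; // '+' only right after 'E'
			sign = true;
		} else if (c == '.') {
			if (fraction) return false; // only one '.'
			fraction = true;
		} else if (c == 'E') {
			if (scientific) return false; // only one exponent
			scientific = true;
		} else if (c >= '0' && c <= '9') {
			if (scientific) expDigit = true;
		} else {
			return false;
		}
	}
	return !scientific || expDigit; // 'E' must be followed by digits
}
```

Like the original, this counts occurrences rather than enforcing positions, so it is deliberately looser than the JSON grammar.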
*/
#ifndef JSON_h__
#define JSON_h__

#include <istream>
#include <iterator>
#include <ostream>
#include <queue>
#include <stack>
#include <string>
#include <vector>

namespace JSON {
namespace Version {
const int MAJOR = 2;
const int MINOR = 1;
} // namespace Version

class Node;
typedef std::pair<std::string, Node> NamedNode;

class Node {
public:
	class iterator : public std::iterator<std::forward_iterator_tag, NamedNode> {
	public:
		iterator() : p(0) {}
		iterator(NamedNode *o) : p(o) {}
		iterator(const iterator &it) : p(it.p) {}
		iterator &operator++() { ++p; return *this; }
		iterator operator++(int) { iterator tmp(*this); operator++(); return tmp; }
		bool operator==(const iterator &rhs) { return p == rhs.p; }
		bool operator!=(const iterator &rhs) { return p != rhs.p; }
		NamedNode &operator*() { return *p; }
		NamedNode *operator->() { return p; }
	private:
		NamedNode *p;
	};
	class const_iterator : public std::iterator<std::forward_iterator_tag, const NamedNode> {
	public:
		const_iterator() : p(0) {}
		const_iterator(const NamedNode *o) : p(o) {}
		const_iterator(const const_iterator &it) : p(it.p) {}
		const_iterator &operator++() { ++p; return *this; }
		const_iterator operator++(int) { const_iterator tmp(*this); operator++(); return tmp; }
		bool operator==(const const_iterator &rhs) { return p == rhs.p; }
		bool operator!=(const const_iterator &rhs) { return p != rhs.p; }
		const NamedNode &operator*() { return *p; }
		const NamedNode *operator->() { return p; }
	private:
		const NamedNode *p;
	};

	enum Type { T_INVALID, T_OBJECT, T_ARRAY, T_NULL, T_STRING, T_NUMBER, T_BOOL };

	Node();
	explicit Node(Type type);
	Node(const Node &other);
	Node(Type type, const std::string &value);
	Node(const std::string &value);
	Node(const char *value);
	Node(int value);
	Node(unsigned int value);
	Node(long long value);
	Node(unsigned long long value);
	Node(float value);
	Node(double value);
	Node(bool value);
	~Node();

	void detach();

	inline Type getType() const { return (data == NULL ?
T_INVALID : data->type); }; inline bool isValid() const { return (getType() != T_INVALID); } inline bool isObject() const { return (getType() == T_OBJECT); } inline bool isArray() const { return (getType() == T_ARRAY); } inline bool isNull() const { return (getType() == T_NULL); } inline bool isString() const { return (getType() == T_STRING); } inline bool isNumber() const { return (getType() == T_NUMBER); } inline bool isBool() const { return (getType() == T_BOOL); } inline bool isContainer() const { return (isObject() || isArray()); } inline bool isValue() const { return (isNull() || isString() || isNumber() || isBool()); } std::string toString(const std::string &def = std::string()) const; int toInt(int def = 0) const; float toFloat(float def = 0.f) const; double toDouble(double def = 0.0) const; bool toBool(bool def = false) const; void setNull(); void set(Type type, const std::string &value); void set(const std::string &value); void set(const char *value); void set(int value); void set(unsigned int value); void set(long long value); void set(unsigned long long value); void set(float value); void set(double value); void set(bool value); Node &operator=(const Node &rhs); Node &operator=(const std::string &rhs); Node &operator=(const char *rhs); Node &operator=(int rhs); Node &operator=(unsigned int rhs); Node &operator=(long long rhs); Node &operator=(unsigned long long rhs); Node &operator=(float rhs); Node &operator=(double rhs); Node &operator=(bool rhs); void add(const Node &node); void add(const std::string &name, const Node &node); void append(const Node &node); void remove(size_t index); void remove(const std::string &name); void clear(); bool has(const std::string &name) const; size_t getCount() const; Node get(const std::string &name) const; Node get(size_t index) const; iterator begin(); const_iterator begin() const; iterator end(); const_iterator end() const; bool operator==(const Node &other) const; bool operator!=(const Node &other) const; inline 
operator bool() const { return isValid(); } private: typedef std::vector NamedNodeList; struct Data { explicit Data(Type type); Data(const Data &other); ~Data(); void addRef(); bool release(); int refCount; Type type; std::string valueStr; NamedNodeList children; } * data; }; std::string escapeString(const std::string &value); std::string unescapeString(const std::string &value); Node invalid(); Node null(); Node object(); Node array(); struct Format { bool newline; bool spacing; bool useTabs; unsigned int indentSize; }; const Format StandardFormat = { true, true, true, 1 }; const Format NoFormat = { false, false, false, 0 }; class Writer { public: explicit Writer(const Format &format = NoFormat); ~Writer(); void setFormat(const Format &format); void writeStream(const Node &node, std::ostream &stream) const; void writeString(const Node &node, std::string &json) const; void writeFile(const Node &node, const std::string &filename) const; private: void writeNode(const Node &node, unsigned int level, std::ostream &stream) const; void writeObject(const Node &node, unsigned int level, std::ostream &stream) const; void writeArray(const Node &node, unsigned int level, std::ostream &stream) const; void writeValue(const Node &node, std::ostream &stream) const; std::string getIndentation(unsigned int level) const; Format format; char indentationChar; const char *newline; const char *spacing; }; class Parser { public: Parser(); ~Parser(); Node parseStream(std::istream &stream); Node parseString(const std::string &json); Node parseFile(const std::string &filename); const std::string &getError() const; private: enum Token { T_UNKNOWN, T_OBJ_BEGIN, T_OBJ_END, T_ARRAY_BEGIN, T_ARRAY_END, T_SEPARATOR_NODE, T_SEPARATOR_NAME, T_VALUE }; typedef std::queue TokenQueue; typedef std::queue > DataQueue; void tokenize(std::istream &stream, TokenQueue &tokens, DataQueue &data); Node assemble(TokenQueue &tokens, DataQueue &data); void jumpToNext(char c, std::istream &stream); void 
jumpToCommentEnd(std::istream &stream);
	void readString(std::istream &stream, DataQueue &data);
	bool interpretValue(const std::string &value, DataQueue &data);

	std::string error;
};
} // namespace JSON
#endif // JSON_h__

// ==== zytrax-master/globals/json_file.cpp ====

#include "json_file.h"

Error save_json(const String &p_path, const JSON::Node &p_node) {
	JSON::Writer w;
	std::string str;
	w.setFormat(JSON::StandardFormat);
	w.writeString(p_node, str);
#ifdef WINDOWS_ENABLED
	FILE *f = _wfopen(p_path.c_str(), L"wb");
#else
	FILE *f = fopen(p_path.utf8().get_data(), "wb");
#endif
	ERR_FAIL_COND_V(!f, ERR_FILE_CANT_OPEN);
	fwrite(&str[0], str.length(), 1, f);
	fclose(f);
	return OK;
}

Error load_json(const String &p_path, JSON::Node &p_node) {
#ifdef WINDOWS_ENABLED
	FILE *f = _wfopen(p_path.c_str(), L"rb");
#else
	FILE *f = fopen(p_path.utf8().get_data(), "rb");
#endif
	ERR_FAIL_COND_V(!f, ERR_FILE_CANT_OPEN);
	std::string str;
	fseek(f, 0, SEEK_END);
	size_t pos = ftell(f);
	str.resize(pos);
	fseek(f, 0, SEEK_SET);
	fread(&str[0], pos, 1, f);
	JSON::Parser p;
	p_node = p.parseString(str);
	fclose(f);
	return OK;
}

// ==== zytrax-master/globals/json_file.h ====

#ifndef JSON_FILE_H
#define JSON_FILE_H

#include "error_list.h"
#include "error_macros.h"
#include "json.h"
#include "rstring.h"

Error save_json(const String &p_path, const JSON::Node &p_node);
Error load_json(const String &p_path, JSON::Node &p_node);

#endif // JSON_FILE_H

// ==== zytrax-master/globals/list.h ====

#ifndef GLOBALS_LIST_H
#define GLOBALS_LIST_H

#include "typedefs.h"

/**
 * Generic Templatized Linked List Implementation.
 * The implementation differs from the STL one because
 * a compatible preallocated linked list can be written
 * using the same API, and it supports features such as
 * erasing an element directly through its iterator.
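json_file.cpp reads the whole file in one shot with the fseek/ftell/fread pattern. A standalone sketch of that same pattern with every step checked (hypothetical `read_entire_file` helper, not part of zytrax):

```cpp
#include <cstdio>
#include <string>

// Read a whole binary file into `out`, the way load_json does,
// but reporting failure at each step instead of assuming success.
static bool read_entire_file(const char *path, std::string &out) {
	FILE *f = std::fopen(path, "rb");
	if (!f) return false;
	std::fseek(f, 0, SEEK_END);
	long size = std::ftell(f);
	if (size < 0) { std::fclose(f); return false; }
	out.resize((size_t)size);
	std::fseek(f, 0, SEEK_SET);
	size_t got = size ? std::fread(&out[0], 1, (size_t)size, f) : 0;
	std::fclose(f);
	return got == (size_t)size; // short read means failure
}
```

The `size ? ... : 0` guard avoids taking `&out[0]` on an empty string, which is undefined before C++11.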
*/ template class List { struct _Data; public: class Comparer { public: inline bool operator()(const T& p_l, const T& p_r) const { return p_l < p_r; }; }; class Element { public: friend class List; T value; Element* next_ptr; Element* prev_ptr; _Data *data; public: /** * Get NEXT Element iterator, for constant lists. */ const Element* next() const { return next_ptr; }; /** * Get NEXT Element iterator, */ Element* next() { return next_ptr; }; /** * Get PREV Element iterator, for constant lists. */ const Element* prev() const { return prev_ptr; }; /** * Get PREV Element iterator, */ Element* prev() { return prev_ptr; }; /** * * operator, for using as *iterator, when iterators are defined on stack. */ const T& operator *() const { return value; }; /** * operator->, for using as iterator->, when iterators are defined on stack, for constant lists. */ const T* operator->() const { return &value; }; /** * * operator, for using as *iterator, when iterators are defined on stack, */ T& operator *() { return value; }; /** * operator->, for using as iterator->, when iterators are defined on stack, for constant lists. */ T* operator->() { return &value; }; /** * get the value stored in this element. */ T& get() { return value; }; /** * get the value stored in this element, for constant lists */ const T& get() const { return value; }; /** * set the value stored in this element. 
*/ void set(const T& p_value) { value = (T&)p_value; }; void erase() { data->erase(this); } Element() { next_ptr = 0; prev_ptr = 0; data=NULL; }; }; private: struct _Data { Element* first; Element* last; int size_cache; bool erase(const Element* p_I) { if (!p_I) return false; if (p_I->data!=this) return false; // does not belong if (first==p_I) { first=p_I->next_ptr; }; if (last==p_I) last=p_I->prev_ptr; if (p_I->prev_ptr) p_I->prev_ptr->next_ptr=p_I->next_ptr; if (p_I->next_ptr) p_I->next_ptr->prev_ptr=p_I->prev_ptr; delete const_cast(p_I); size_cache--; return true; } }; _Data *_data; public: /** * return an const iterator to the begining of the list. */ const Element* begin() const { return _data?_data->first:0; }; /** * return an iterator to the begining of the list. */ Element* front() { return _data?_data->first:0; }; /** * return an const iterator to the last member of the list. */ const Element* back() const { return _data?_data->last:0; }; /** * return an iterator to the last member of the list. 
*/ Element* back() { return _data?_data->last:0; }; /** * store a new element at the end of the list */ void push_back(const T& value) { if (!_data) { _data=new _Data; _data->first=NULL; _data->last=NULL; _data->size_cache=0; } Element* n = new Element; n->value = (T&)value; n->prev_ptr=_data->last; n->next_ptr=0; n->data=_data; if (_data->last) { _data->last->next_ptr=n; } _data->last = n; if (!_data->first) _data->first=n; _data->size_cache++; }; void pop_back() { if (_data && _data->last) erase(_data->last); } /** * store a new element at the begining of the list */ void push_front(const T& value) { if (!_data) { _data=new _Data; _data->first=NULL; _data->last=NULL; _data->size_cache=0; } Element* n = new Element; n->value = (T&)value; n->prev_ptr = 0; n->next_ptr = _data->first; n->data=_data; if (_data->first) { _data->first->prev_ptr=n; } _data->first = n; if (!_data->last) _data->last=n; _data->size_cache++; }; void pop_front() { if (_data && _data->first) erase(_data->first); } /** * find an element in the list, */ template Element* find(const T_v& p_val) { Element* it = begin(); while (it) { if (it->value == p_val) return it; it = it->next(); }; return NULL; }; /** * erase an element in the list, by iterator pointing to it. Return true if it was found/erased. 
*/ bool erase(const Element* p_I) { if (_data) { bool ret = _data->erase(p_I); if (_data->size_cache==0) { delete _data; _data=NULL; } return ret; } return false; }; /** * erase the first element in the list, that contains value */ bool erase(const T& value) { Element* I = find(value); return erase(I); }; /** * return wether the list is empty */ bool empty() const { return (!_data || !_data->size_cache); } /** * clear the list */ void clear() { while (begin()) { erase(begin()); }; }; int size() const { return _data?_data->size_cache:0; } /** * copy the list */ void operator=(const List& p_list) { clear(); const Element *it=p_list.begin(); while (it) { push_back( it->get() ); it=it->next(); } } T& operator[](int p_index) { if (p_index<0 || p_index>=size()) { T& aux=*((T*)0); //nullreturn ERR_FAIL_COND_V(p_index<0 || p_index>=size(),aux); } Element *I=begin(); int c=0; while(I) { if (c==p_index) { return I->get(); } I=I->next(); c++; } ERR_FAIL_V( *((T*)0) ); // bug!! } const T& operator[](int p_index) const { if (p_index<0 || p_index>=size()) { T& aux=*((T*)0); //nullreturn ERR_FAIL_COND_V(p_index<0 || p_index>=size(),aux); } const Element *I=begin(); int c=0; while(I) { if (c==p_index) { return I->get(); } I=I->next(); c++; } ERR_FAIL_V( *((T*)0) ); // bug! 
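The List<T> API above (push_back/push_front, find, erase by value, sort) deliberately mirrors the standard library's list. The intended call pattern, shown with std::list as a stand-in since the zytrax header is not self-contained here; List<T> additionally offers erase-from-Element and (slow) index access:

```cpp
#include <list>

// The usage List<T> is designed for, demonstrated with std::list.
std::list<int> list_demo() {
	std::list<int> l;
	l.push_back(3);  // [3]
	l.push_back(1);  // [3, 1]
	l.push_front(2); // [2, 3, 1]
	l.remove(3);     // erase by value, like List::erase(const T &) -> [2, 1]
	l.sort();        // like List::sort() -> [1, 2]
	return l;
}
```

Note that `operator[]` on List walks the chain from the front each call, so indexed loops over a List are O(n²); iterate with Element pointers instead.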
} void move_before(Element* value, Element* where) { if (value->prev_ptr) { value->prev_ptr->next_ptr = value->next_ptr; }; if (value->next_ptr) { value->next_ptr->prev_ptr = value->prev_ptr; }; value->next_ptr = where; if (!where) { value->prev_ptr = _data->last; _data->last = value; return; }; value->prev_ptr = where->prev_ptr; if (where->prev_ptr) { where->prev_ptr->next_ptr = value; } else { _data->first = value; }; where->prev_ptr = value; }; /** * quick sort */ void sort() { Comparer c; sort(c); } template void sort(const TC& comparer, Element* start = 0, Element* end = 0) { if (!start) start = begin(); if (!end) end = back(); if (start == end || start->next_ptr == end) return; Element *I = start->next(); Element* pivot = start; Element* last = pivot; Element* start_next = pivot; Element* next = I->next(); do { I = next; next = I->next(); if (comparer((const T&)I->get(), (const T&)pivot->get())) { // insert before start_next move_before(I, start_next); start_next = I; } else { last = I; }; } while (I != end); if (start_next != pivot) { sort(comparer, start_next, pivot->prev_ptr); }; if (last != pivot) { sort(comparer, pivot->next_ptr, last); }; }; /** * copy constructor for the list */ List(const List& p_list) { _data=NULL; const Element *it=p_list.begin(); while (it) { push_back( it->get() ); it=it->next(); } } List() { _data=NULL; }; ~List() { clear(); if (_data) { ERR_FAIL_COND(_data->size_cache); delete _data; } }; }; #endif zytrax-master/globals/map.h000066400000000000000000000272541347722000700162750ustar00rootroot00000000000000 // // C++ Interface: map // // Description: // // // Author: Juan Linietsky , (C) 2009 // // Copyright: See COPYING file that comes with this distribution // // #ifndef MAP_H #define MAP_H #include "typedefs.h" /** @author Juan Linietsky */ // based on the very nice implementation of rb-trees by: // http://web.mit.edu/~emin/www/source_code/red_black_tree/index.html template class Map { enum Color { RED, BLACK }; struct _Data; 
public: class Element { private: friend class Map; Color color; K _key; V _value; Element* right; Element* left; Element* parent; Element* _next; Element* _prev; //_Data *data; public: const Element *next() const { return _next; } Element *next() { return _next; } const Element *prev() const { return _prev; } Element *prev() { return _prev; } const K& key() const { return _key; }; V& value() { return _value; }; const V& value() const { return _value; }; V& get() { return _value; }; const V& get() const { return _value; }; Element() { color=RED; right=NULL; left=NULL; parent=NULL; _next=NULL; _prev=NULL; }; }; private: struct _Data { Element* _root; Element* _nil; int size_cache; _Data() { _nil = new Element; _nil->parent=_nil->left=_nil->right=_nil; _nil->color=BLACK; _root = new Element; _root->parent=_root->left=_root->right=_nil; _root->color=BLACK; size_cache=0; } ~_Data() { delete _nil; delete _root; } }; _Data _data; inline void _set_color(Element *p_node, Color p_color) { ERR_FAIL_COND( p_node == _data._nil && p_color == RED ); p_node->color=p_color; } inline void _rotate_left(Element *p_node) { Element *r=p_node->right; p_node->right=r->left; if (r->left != _data._nil ) r->left->parent=p_node; r->parent=p_node->parent; if (p_node==p_node->parent->left) p_node->parent->left=r; else p_node->parent->right=r; r->left=p_node; p_node->parent=r; } inline void _rotate_right(Element *p_node) { Element *l=p_node->left; p_node->left=l->right; if (l->right != _data._nil) l->right->parent=p_node; l->parent=p_node->parent; if (p_node==p_node->parent->right) p_node->parent->right=l; else p_node->parent->left=l; l->right=p_node; p_node->parent=l; } inline Element* _successor(Element *p_node) const { Element *node=p_node; if (node->right != _data._nil) { node=node->right; while(node->left != _data._nil) { /* returns the minium of the right subtree of node */ node=node->left; } return node; } else { while(node == node->parent->right) { node=node->parent; } if (node->parent 
== _data._root) return NULL; return node->parent; } } inline Element* _predecessor(Element *p_node) const { Element *node=p_node; if (node->left != _data._nil) { node=node->left; while(node->right != _data._nil) { /* returns the minium of the left subtree of node */ node=node->right; } return node; } else { while(node == node->parent->left) { if (node->parent == _data._root) return NULL; node=node->parent; } return node->parent; } } Element *_find(const K& p_key) const { Element *node = _data._root->left; while(node!=_data._nil) { if ((p_key_key)) node=node->left; else if ((node->_keyright; else break; // found } return (node!=_data._nil)?node:NULL; } Element *_find_closest(const K& p_key) const { Element *node = _data._root->left; Element *prev = NULL; while(node!=_data._nil) { prev=node; if ((p_key_key)) node=node->left; else if ((node->_keyright; else break; // found } if (node==_data._nil) { if (prev==NULL) return NULL; if ((p_key_key)) { prev=prev->_prev; } return prev; } else return node; } Element *_insert(const K& p_key, bool& r_exists) { Element *new_parent=_data._root; Element *node = _data._root->left; while (node!=_data._nil) { new_parent=node; if ((p_key_key)) node=node->left; else if ((node->_keyright; else { r_exists=true; return node; } } Element *new_node = new Element; new_node->parent=new_parent; new_node->right=_data._nil; new_node->left=_data._nil; new_node->_key=p_key; //new_node->data=_data; if (new_parent==_data._root || (p_key_key)) { new_parent->left=new_node; } else { new_parent->right=new_node; } r_exists=false; new_node->_next=_successor(new_node); new_node->_prev=_predecessor(new_node); if (new_node->_next) new_node->_next->_prev=new_node; if (new_node->_prev) new_node->_prev->_next=new_node; return new_node; } Element * _insert_rb(const K& p_key, const V& p_value) { bool exists=false; Element *new_node = _insert(p_key,exists); if (new_node) { new_node->_value=p_value; } if (exists) return new_node; Element *node=new_node; 
_data.size_cache++; while(node->parent->color==RED) { if (node->parent == node->parent->parent->left) { Element *aux=node->parent->parent->right; if (aux->color==RED) { _set_color(node->parent,BLACK); _set_color(aux,BLACK); _set_color(node->parent->parent,RED); node=node->parent->parent; } else { if (node == node->parent->right) { node=node->parent; _rotate_left(node); } _set_color(node->parent,BLACK); _set_color(node->parent->parent,RED); _rotate_right(node->parent->parent); } } else { Element *aux=node->parent->parent->left; if (aux->color==RED) { _set_color(node->parent,BLACK); _set_color(aux,BLACK); _set_color(node->parent->parent,RED); node=node->parent->parent; } else { if (node == node->parent->left) { node=node->parent; _rotate_right(node); } _set_color(node->parent,BLACK); _set_color(node->parent->parent,RED); _rotate_left(node->parent->parent); } } } _set_color(_data._root->left,BLACK); return new_node; } void _erase_fix(Element *p_node) { Element *root = _data._root->left; Element *node=p_node; while( (node->color==BLACK) && (root != node)) { if (node == node->parent->left) { Element *aux=node->parent->right; if (aux->color==RED) { _set_color(aux,BLACK); _set_color(node->parent,RED); _rotate_left(node->parent); aux=node->parent->right; } if ( (aux->right->color==BLACK) && (aux->left->color==BLACK) ) { _set_color(aux,RED); node=node->parent; } else { if (aux->right->color==BLACK) { _set_color(aux->left,BLACK); _set_color(aux,RED); _rotate_right(aux); aux=node->parent->right; } _set_color(aux,node->parent->color); _set_color(node->parent,BLACK); _set_color(aux->right,BLACK); _rotate_left(node->parent); node=root; /* this is to exit while loop */ } } else { /* the code below is has left and right switched from above */ Element *aux=node->parent->left; if (aux->color==RED) { _set_color(aux,BLACK); _set_color(node->parent,RED);; _rotate_right(node->parent); aux=node->parent->left; } if ( (aux->right->color==BLACK) && (aux->left->color==BLACK) ) { 
_set_color(aux,RED); node=node->parent; } else { if (aux->left->color==BLACK) { _set_color(aux->right,BLACK); _set_color(aux,RED); _rotate_left(aux); aux=node->parent->left; } _set_color(aux,node->parent->color); _set_color(node->parent,BLACK); _set_color(aux->left,BLACK); _rotate_right(node->parent); node=root; } } } _set_color(node,BLACK); ERR_FAIL_COND(_data._nil->color!=BLACK); } void _erase(Element *p_node) { Element *rp= ((p_node->left == _data._nil) || (p_node->right == _data._nil)) ? p_node : _successor(p_node); if (!rp) rp=_data._nil; Element *node= (rp->left == _data._nil) ? rp->right : rp->left; if (_data._root == (node->parent=rp->parent) ) { _data._root->left=node; } else { if (rp == rp->parent->left) { rp->parent->left=node; } else { rp->parent->right=node; } } if (rp != p_node) { ERR_FAIL_COND( rp == _data._nil ); if (rp->color==BLACK) _erase_fix(node); rp->left=p_node->left; rp->right=p_node->right; rp->parent=p_node->parent; rp->color=p_node->color; p_node->left->parent=rp; p_node->right->parent=rp; if (p_node == p_node->parent->left) { p_node->parent->left=rp; } else { p_node->parent->right=rp; } } else { if (p_node->color==BLACK) _erase_fix(node); } if (p_node->_next) p_node->_next->_prev=p_node->_prev; if (p_node->_prev) p_node->_prev->_next=p_node->_next; delete p_node; _data.size_cache--; ERR_FAIL_COND( _data._nil->color==RED ); } void _calculate_depth(Element *p_element,int &max_d,int d) const { if (p_element==_data._nil) { return; } _calculate_depth(p_element->left,max_d,d+1); _calculate_depth(p_element->right,max_d,d+1); if (d>max_d) max_d=d; } void _cleanup_tree(Element *p_element) { if (p_element==_data._nil) return; _cleanup_tree(p_element->left); _cleanup_tree(p_element->right); delete p_element; } void _copy_from( const Map& p_map) { clear(); // not the fastest way, but safeset to write. 
for(Element *I=p_map.front();I;I=I->next()) { insert(I->key(),I->value()); } } public: const Element *find(const K& p_key) const { const Element *res=_find(p_key); return res; } Element *find(const K& p_key) { Element *res=_find(p_key); return res; } const Element *find_closest(const K& p_key) const { const Element *res=_find_closest(p_key); return res; } Element *find_closest(const K& p_key) { Element *res=_find_closest(p_key); return res; } Element *insert(const K& p_key,const V& p_value) { return _insert_rb(p_key,p_value); } void erase(Element* p_element) { _erase(p_element); } bool erase(const K& p_key) { Element *e=find(p_key); if (!e) return false; _erase(e); return true; } bool has(const K& p_key) const { return find(p_key) != NULL; } const V& operator[](const K& p_key) const { const Element *e=find(p_key); ERR_FAIL_COND_V(!e, *(V*)NULL); // crash on purpose return e->_value; } V& operator[](const K& p_key) { Element *e=find(p_key); if (!e) e=insert(p_key,V()); ERR_FAIL_COND_V(!e, *(V*)NULL); // crash on purpose return e->_value; } Element *front() const { Element *e=_data._root->left; if (e==_data._nil) return NULL; while(e->left!=_data._nil) e=e->left; return e; } Element *back() const { Element *e=_data._root->left; if (e==_data._nil) return NULL; while(e->right!=_data._nil) e=e->right; return e; } inline bool empty() const { return _data.size_cache==0; } inline int size() const { return _data.size_cache; } int calculate_depth() const { // used for debug mostly int max_d=0; _calculate_depth(_data._root->left,max_d,0); return max_d; } void clear() { _cleanup_tree(_data._root->left); _data._root->left=_data._nil; _data.size_cache=0; _data._nil->parent=_data._nil; } void operator=(const Map& p_map) { _copy_from( p_map ); } Map(const Map& p_map) { _copy_from( p_map ); } _FORCE_INLINE_ Map() { } ~Map() { clear(); } }; #endif zytrax-master/globals/rstring.cpp000066400000000000000000000477331347722000700175470ustar00rootroot00000000000000// // C++ 
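Map::find_closest above returns the entry whose key is the greatest one not exceeding the probe (a "floor" lookup): on an exact miss it steps back with `_prev` when the last-visited node's key overshoots. The same lookup expressed against std::map, which the red-black tree here is functionally equivalent to (hypothetical helper, not part of zytrax):

```cpp
#include <map>
#include <string>

// Floor lookup, like Map::find_closest: greatest key <= k.
// upper_bound yields the first key > k; one step back is the floor entry.
// Returns m.end() when every key is greater than k.
std::map<int, std::string>::const_iterator
find_closest(const std::map<int, std::string> &m, int k) {
	std::map<int, std::string>::const_iterator it = m.upper_bound(k);
	if (it == m.begin()) return m.end(); // no key <= k exists
	--it;
	return it;
}
```

This is handy for range-coded data (e.g. "the setting in effect at tick k") where an exact key rarely exists.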
Implementation: rstring // // Description: // // // Author: Juan Linietsky , (C) 2005 // // Copyright: See COPYING file that comes with this distribution // // #include "rstring.h" #include "error_macros.h" #include "ucaps.h" #include #include #include #include #define MAX_DIGITS 6 #define UPPERCASE(m_c) (((m_c) >= 'a' && (m_c) <= 'z') ? ((m_c) - ('a' - 'A')) : (m_c)) void CharString::free_shared() { if (shared == NULL) return; if (shared->refcount == 1) { free(shared->data); free(shared); } else { shared->refcount--; } shared = NULL; } void CharString::take_shared(Shared *p_shared) { if (shared != NULL) free_shared(); shared = p_shared; shared->refcount++; } const char *CharString::get_data() { if (!shared) return ""; return shared->data; } const CharString &CharString::operator=(CharString &p_str) { take_shared(p_str.shared); return *this; } CharString::CharString() { shared = NULL; } CharString::CharString(const CharString &p_src) { CharString src = p_src; shared = NULL; take_shared(src.shared); } CharString::CharString(char *p_data) { if (p_data == NULL) shared = NULL; else { shared = (Shared *)malloc(sizeof(Shared)); shared->data = p_data; shared->refcount = 1; } } CharString::~CharString() { free_shared(); } /** STRING **/ void String::copy_on_write() { if (shared->refcount == 1) return; //no need to copy on write! 
Shared *old = shared; shared = NULL; create_shared(); resize_shared(old->len); for (int i = 0; i < shared->len; i++) { shared->data[i] = old->data[i]; } old->refcount--; } void String::resize_shared(int p_newsize) { shared->data = (CharType *)realloc(shared->data, (p_newsize + 1) * sizeof(CharType)); shared->len = p_newsize; shared->data[shared->len] = 0; //append 0 at the end so it's compatible to a cstring } void String::create_shared(int p_length) { if (shared != NULL) { ERR_PRINT(" Shared != NULL "); return; } shared = new Shared; shared->len = p_length; shared->data = (CharType *)malloc(sizeof(CharType) * (p_length + 1)); shared->data[p_length] = 0; //append 0 at the end so it's compatible to a cstring shared->refcount = 1; } void String::free_shared() { if (shared->refcount == 1) { //only us using it free(shared->data); delete shared; } else { shared->refcount--; } shared = NULL; } void String::copy_from(String &p_string) { if (p_string.shared == shared) return; //nothing to copy free_shared(); //if we have data, free it shared = p_string.shared; // copy the shared data shared->refcount++; // Increase refcount in shared } void String::copy_from(const char *p_cstr) { int len = 0; const char *ptr = p_cstr; while (*(ptr++) != 0) len++; if (shared != NULL) free_shared(); create_shared(len); for (int i = 0; i < len; i++) { shared->data[i] = p_cstr[i]; } shared->len = len; } void String::copy_from(const CharType *p_cstr, int p_clip_to_len) { int len = 0; const CharType *ptr = p_cstr; while (*(ptr++) != 0) len++; if (p_clip_to_len >= 0 && p_clip_to_len < len) { len = p_clip_to_len; } if (shared != NULL) free_shared(); create_shared(len); for (int i = 0; i < len; i++) { shared->data[i] = p_cstr[i]; } shared->len = len; } void String::copy_from(const CharType &p_char) { if (shared != NULL) free_shared(); create_shared(1); //@TODO@ create in a certain size must be allowed shared->data[0] = p_char; } bool String::operator=(String p_str) { copy_from(p_str); return 
length() > 0; //true if not empty } bool String::operator==(String p_str) const { /* speed up comparison */ if (shared == p_str.shared) return true; //no doubt if (shared->len != p_str.shared->len) return false; // no need for weird tests /* Compare char by char */ for (int i = 0; i < shared->len; i++) { if (shared->data[i] != p_str.shared->data[i]) return false; } return true; } bool String::operator!=(String p_str) const { return !(*this == p_str); } const String::CharType &String::operator[](int p_idx) const { static CharType errv = 0; if (p_idx < 0 || p_idx >= shared->len) { ERR_PRINT("p_idx <0 || p_idx>=shared->len"); return errv; }; return shared->data[p_idx]; } String::CharType &String::operator[](int p_idx) { static CharType errv = 0; errv = 0; //dont change it, damnit if (p_idx < 0 || p_idx >= shared->len) { ERR_PRINT("p_idx <0 || p_idx>=shared->len"); return errv; }; copy_on_write(); return shared->data[p_idx]; } String String::operator+(const String &p_str) const { String res = *this; res += p_str; return res; } String String::operator+(CharType p_chr) const { String res = *this; res += p_chr; return res; } String &String::operator+=(const String &p_str) { if (p_str.empty()) { return *this; } copy_on_write(); /* DATA IS MODIFIED, COPY ON WRITE! */ int old_len = shared->len; resize_shared(p_str.shared->len + shared->len); for (int i = 0; i < p_str.shared->len; i++) { shared->data[old_len + i] = p_str.shared->data[i]; } return *this; } String &String::operator+=(const CharType *p_str) { *this += String(p_str); return *this; } String &String::operator+=(CharType p_char) { copy_on_write(); ///< DATA IS MODIFIED, COPY ON WRITE! resize_shared(shared->len + 1); shared->data[shared->len - 1] = p_char; return *this; } String &String::operator+=(const char *p_str) { if (p_str[0] == 0) return *this; copy_on_write(); ///< DATA IS MODIFIED, COPY ON WRITE! 
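String shares one heap buffer between copies and only clones it right before a mutation; copy_on_write above is a no-op while `refcount == 1`. The bookkeeping reduced to its minimum, as a hypothetical `Cow` class (not the zytrax types, which also track length and use realloc):

```cpp
#include <cstring>

// Minimal copy-on-write buffer: copies share one Shared block; a writer
// detaches (clones the data) only when someone else still references it.
struct Cow {
	struct Shared { int refcount; char *data; };
	Shared *s;
	explicit Cow(const char *str) {
		s = new Shared;
		s->refcount = 1;
		s->data = new char[std::strlen(str) + 1];
		std::strcpy(s->data, str);
	}
	Cow(const Cow &o) : s(o.s) { s->refcount++; } // copying is O(1): just share
	~Cow() {
		if (--s->refcount == 0) { delete[] s->data; delete s; }
	}
	void set(int i, char c) {
		if (s->refcount > 1) { // another owner sees this buffer: detach first
			Shared *old = s;
			s = new Shared;
			s->refcount = 1;
			s->data = new char[std::strlen(old->data) + 1];
			std::strcpy(s->data, old->data);
			old->refcount--;
		}
		s->data[i] = c;
	}
private:
	Cow &operator=(const Cow &); // assignment omitted in this sketch
};
```

As in String, the cost model is: copies and reads are cheap; the first write after a copy pays one full clone.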
int src_len = 0; const char *ptr = p_str; while (*(ptr++) != 0) src_len++; int old_len = shared->len; resize_shared(src_len + shared->len); for (int i = 0; i < src_len; i++) { shared->data[old_len + i] = p_str[i]; } return *this; } void String::operator=(const char *p_str) { copy_from(p_str); } void String::operator=(const CharType *p_str) { copy_from(p_str); } bool String::operator=(CharType p_chr) { copy_from(p_chr); return !empty(); } bool String::operator==(const char *p_str) const { int len = 0; const char *aux = p_str; while (*(aux++) != 0) len++; if (len != shared->len) return false; for (int i = 0; i < len; i++) if (p_str[i] != shared->data[i]) return false; return true; } bool String::operator==(const CharType *p_str) const { int len = 0; const CharType *aux = p_str; while (*(aux++) != 0) len++; if (len != shared->len) return false; for (int i = 0; i < len; i++) if (p_str[i] != shared->data[i]) return false; return true; } bool String::operator!=(const char *p_str) const { return (!(*this == p_str)); } bool String::operator!=(const CharType *p_str) const { return (!(*this == p_str)); } bool String::operator<(const CharType *p_str) const { const CharType *this_str = c_str(); while (true) { if (*p_str == 0 && *this_str == 0) return false; //this can't be equal, sadly else if (*this_str == 0) return true; //if this is empty, and the other one is not, then we're less.. I think? else if (*p_str == 0) return false; //otherwise the other one is smaller.. else if (*this_str < *p_str) //more than return true; else if (*this_str > *p_str) //less than return false; this_str++; p_str++; } return false; //should never reach here anyway } bool String::operator<(const char *p_str) const { const CharType *this_str = c_str(); while (true) { if (*p_str == 0 && *this_str == 0) return false; //this can't be equal, sadly else if (*this_str == 0) return true; //if this is empty, and the other one is not, then we're less.. I think? 
else if (*p_str == 0)
			return false; //otherwise the other one is smaller..
		else if (*this_str < *p_str) //less than
			return true;
		else if (*this_str > *p_str) //greater than
			return false;

		this_str++;
		p_str++;
	}

	return false; //should never reach here anyway
}

bool String::operator<(String p_str) const {

	return operator<(p_str.c_str());
}

signed char String::nocasecmp_to(String p_str) const {
	//strcmp-like: <0 if we are less, 0 if equal, >0 if we are greater

	const CharType *that_str = p_str.c_str();
	const CharType *this_str = c_str();

	while (true) {

		if (*that_str == 0 && *this_str == 0)
			return 0; //we're equal
		else if (*this_str == 0)
			return -1; //if this is empty, and the other one is not, then we're less
		else if (*that_str == 0)
			return 1; //otherwise the other one is smaller
		else if (UPPERCASE(*this_str) < UPPERCASE(*that_str)) //less than
			return -1;
		else if (UPPERCASE(*this_str) > UPPERCASE(*that_str)) //greater than
			return 1;

		this_str++;
		that_str++;
	}

	return 0; //should never reach anyway
}

void String::erase(int p_pos, int p_chars) {

	*this = left(p_pos) + substr(p_pos + p_chars, length() - (p_pos + p_chars));
}

int String::get_slice_count(String p_splitter) {

	int pos = 0;
	int slices = 1;

	while ((pos = find(p_splitter, pos)) >= 0) {

		slices++;
		pos += p_splitter.length();
	}

	return slices;
}

String String::get_slice(String p_splitter, int p_slice) {

	int pos = 0;
	int prev_pos = 0;

	if (p_slice < 0)
		return "";
	if (find(p_splitter) == -1)
		return *this;

	int i = 0;
	while (true) {

		pos = find(p_splitter, pos);
		if (pos == -1)
			pos = length(); //reached end

		int from = prev_pos;

		if (p_slice == i) {
			return substr(from, pos - from);
		}

		if (pos == length()) //reached end and no find
			break;

		pos += p_splitter.length();
		prev_pos = pos;
		i++;
	}

	return ""; //no find!
} String String::to_upper() { String upper = *this; for (int i = 0; i < upper.size(); i++) { upper[i] = UPPERCASE(upper[i]); } return upper; } int String::length() const { return shared->len; } int String::size() const { return shared->len; } bool String::empty() const { return shared->len == 0; } const String::CharType *String::c_str() const { return shared->data; } String String::num(double p_num, int p_digits) { String s; String sd; /* integer part */ bool neg = p_num < 0; p_num = fabs(p_num); int intn = (int)p_num; /* decimal part */ if (p_digits > 0 || (p_digits == -1 && (int)p_num != p_num)) { double dec = p_num - floor(p_num); int digit = 0; if (p_digits > MAX_DIGITS) p_digits = MAX_DIGITS; int dec_int = 0; int dec_max = 0; while (true) { dec *= 10.0; dec_int = dec_int * 10 + (int)dec % 10; dec_max = dec_max * 10 + 9; digit++; if (p_digits == -1) { if (digit == MAX_DIGITS) //no point in going to infinite break; if ((dec - floor(dec)) < 1e-6) break; } if (digit == p_digits) break; } dec *= 10; int last = (int)dec % 10; if (last > 5) { if (dec_int == dec_max) { dec_int = 0; intn++; } else { dec_int++; } } String decimal; for (int i = 0; i < digit; i++) { decimal = String('0' + dec_int % 10) + decimal; dec_int /= 10; } sd = '.' + decimal; } if (intn == 0) s = "0"; else { while (intn) { CharType num = '0' + (intn % 10); intn /= 10; s = num + s; } } s = s + sd; if (neg) s = "-" + s; return s; } CharString String::ascii(bool p_allow_extended) const { if (!length()) return CharString(); char *ascii = (char *)malloc(length() + 1); for (int i = 0; i < length(); i++) { int max = p_allow_extended ? 
0xFF : 0x7F; if ((*this)[i] > max) ascii[i] = '?'; else { unsigned char uc = (unsigned char)((*this)[i]); signed char *sc = (signed char *)&uc; ascii[i] = *sc; } } ascii[length()] = 0; return CharString(ascii); } static int parse_utf8_char(const char *p_utf8, unsigned int *p_ucs4, int p_left) { //return len int len = 0; /* Determine the number of characters in sequence */ if ((*p_utf8 & 0x80) == 0) len = 1; else if ((*p_utf8 & 0xE0) == 0xC0) len = 2; else if ((*p_utf8 & 0xF0) == 0xE0) len = 3; else if ((*p_utf8 & 0xF8) == 0xF0) len = 4; else if ((*p_utf8 & 0xFC) == 0xF8) len = 5; else if ((*p_utf8 & 0xFE) == 0xFC) len = 6; else return -1; //invalid UTF8 if (len > p_left) return -1; //not enough space if (len == 2 && (*p_utf8 & 0x1E) == 0) return -1; //reject overlong /* Convert the first character */ unsigned int unichar = 0; if (len == 1) unichar = *p_utf8; else { unichar = (0xFF >> (len + 1)) & *p_utf8; ; for (int i = 1; i < len; i++) { if ((p_utf8[i] & 0xC0) != 0x80) return -1; //invalid utf8 if (unichar == 0 && i == 2 && ((p_utf8[i] & 0x7F) >> (7 - len)) == 0) return -1; //no overlong unichar = (unichar << 6) | (p_utf8[i] & 0x3F); } } *p_ucs4 = unichar; return len; } bool String::parse_utf8(const char *p_utf8) { if (!p_utf8) { free_shared(); return false; } String aux; int cstr_size = 0; while (p_utf8[cstr_size]) cstr_size++; // printf("Parsing %s\n",p_utf8); while (cstr_size) { unsigned int unichar; int len = parse_utf8_char(p_utf8, &unichar, cstr_size); if (len < 0) return true; // printf("char %i, len %i\n",unichar,len); if (sizeof(wchar_t) == 2) { //windows most certainly if (unichar <= 0xFFFF) { //windows can't eat this aux += unichar; } } else { aux += unichar; } cstr_size -= len; p_utf8 += len; } *this = aux; return false; } CharString String::utf8() const { if (!length()) return CharString(); String utf8s; for (int i = 0; i < length(); i++) { CharType c = (*this)[i]; if (c <= 0x7f) // 7 bits. 
utf8s += c; else if (c <= 0x7ff) { // 11 bits utf8s += CharType(0xc0 | ((c >> 6) & 0x1f)); // Top 5 bits. utf8s += CharType(0x80 | (c & 0x3f)); // Bottom 6 bits. } else if (c <= 0xffff) { // 16 bits utf8s += CharType(0xe0 | ((c >> 12) & 0x0f)); // Top 4 bits. utf8s += CharType(0x80 | ((c >> 6) & 0x3f)); // Middle 6 bits. utf8s += CharType(0x80 | (c & 0x3f)); // Bottom 6 bits. } else if (c <= 0x001fffff) { // 21 bits utf8s += CharType(0xf0 | ((c >> 18) & 0x07)); // Top 3 bits. utf8s += CharType(0x80 | ((c >> 12) & 0x3f)); // Upper middle 6 bits. utf8s += CharType(0x80 | ((c >> 6) & 0x3f)); // Lower middle 6 bits. utf8s += CharType(0x80 | (c & 0x3f)); // Bottom 6 bits. } else if (c <= 0x03ffffff) { // 26 bits utf8s += CharType(0xf8 | ((c >> 24) & 0x03)); // Top 2 bits. utf8s += CharType(0x80 | ((c >> 18) & 0x3f)); // Upper middle 6 bits. utf8s += CharType(0x80 | ((c >> 12) & 0x3f)); // middle 6 bits. utf8s += CharType(0x80 | ((c >> 6) & 0x3f)); // Lower middle 6 bits. utf8s += CharType(0x80 | (c & 0x3f)); // Bottom 6 bits. } else if (c <= 0x7fffffff) { // 31 bits utf8s += CharType(0xfc | ((c >> 30) & 0x01)); // Top 1 bit. utf8s += CharType(0x80 | ((c >> 24) & 0x3f)); // Upper upper middle 6 bits. utf8s += CharType(0x80 | ((c >> 18) & 0x3f)); // Lower upper middle 6 bits. utf8s += CharType(0x80 | ((c >> 12) & 0x3f)); // Upper lower middle 6 bits. utf8s += CharType(0x80 | ((c >> 6) & 0x3f)); // Lower lower middle 6 bits. utf8s += CharType(0x80 | (c & 0x3f)); // Bottom 6 bits. 
} } return utf8s.ascii(true); //allow extended } String::String(CharType p_char) { shared = NULL; copy_from(p_char); } String::String(const String &p_string) { shared = NULL; create_shared(); String &str = (String &)p_string; /* remove Const-ness */ copy_from(str); } String::String(const char *p_str) { shared = NULL; copy_from(p_str); } String::String(const CharType *p_str, int p_clip_to_len) { shared = NULL; copy_from(p_str, p_clip_to_len); } String::String() { shared = NULL; create_shared(); } int String::to_int() const { if (length() == 0) return 0; int to = (find(".") >= 0) ? find(".") : length(); int integer = 0; for (int i = 0; i < to; i++) { CharType c = operator[](i); if (c >= '0' && c <= '9') { integer *= 10; integer += c - '0'; } } return integer; } double String::to_double() const { if (length() == 0) return 0; int dot = find("."); if (dot < 0) dot = length(); int integer = to_int(); double decimal = 0; if (dot < length()) { //has decimal part? double multiplier = 0.1; for (int i = (dot + 1); i < length(); i++) { CharType c = operator[](i); if (c >= '0' && c <= '9') { decimal += (double)(c - '0') * multiplier; multiplier *= 0.1; } } } return (double)integer + decimal; } String::~String() { free_shared(); } bool operator==(const char *p_chr, const String &p_str) { return p_str == p_chr; } String operator+(const char *p_chr, const String &p_str) { String tmp = p_chr; tmp += p_str; return tmp; } String operator+(String::CharType p_chr, const String &p_str) { String tmp(p_chr); tmp += p_str; return tmp; } void String::insert(int p_at_pos, String p_string) { if (p_at_pos < 0) return; if (p_at_pos > length()) p_at_pos = length(); String pre; if (p_at_pos > 0) pre = substr(0, p_at_pos); String post; if (p_at_pos < length()) post = substr(p_at_pos, length() - p_at_pos); *this = pre + p_string + post; } String String::substr(int p_from, int p_chars) const { if (p_from < 0 || p_from >= length() || p_chars <= 0) return ""; if ((p_from + p_chars) > length()) { 
p_chars = length() - p_from; } return String(&shared->data[p_from], p_chars); } int String::find(String p_str, int p_from) const { if (p_from < 0) return -1; int src_len = p_str.length(); if (src_len == 0 || length() == 0) return -1; //wont find anything! for (int i = p_from; i <= (length() - src_len); i++) { bool found = true; for (int j = 0; j < src_len; j++) { int read_pos = i + j; if (read_pos >= length()) { ERR_PRINT("read_pos>=length()"); return -1; }; if (shared->data[read_pos] != p_str[j]) { found = false; break; } } if (found) return i; } return -1; } int String::find_last(String p_str) const { int idx = find(p_str); if (idx == -1) return -1; while (true) { int res = find(p_str, idx + 1); if (res == -1) return idx; idx = res; } } int String::findn(String p_str, int p_from) const { if (p_from < 0) return -1; int src_len = p_str.length(); if (src_len == 0 || length() == 0) return -1; //wont find anything! for (int i = p_from; i <= (length() - src_len); i++) { bool found = true; for (int j = 0; j < src_len; j++) { int read_pos = i + j; if (read_pos >= length()) { ERR_PRINT("read_pos>=length()"); return -1; }; CharType src = shared->data[read_pos]; CharType dst = p_str[j]; if (src >= 'a' && src <= 'z') src -= 'a' - 'A'; if (dst >= 'a' && dst <= 'z') dst -= 'a' - 'A'; if (src != dst) { found = false; break; } } if (found) return i; } return -1; } void String::replace(String p_key, String p_with) { String new_string; int search_from = 0; int result = 0; while ((result = find(p_key, search_from)) >= 0) { new_string += substr(search_from, result - search_from); new_string += p_with; search_from = result + p_key.length(); } new_string += substr(search_from, length() - search_from); *this = new_string; } String String::left(int p_chars) { if (p_chars <= 0) return ""; if (p_chars >= length()) return *this; return substr(0, p_chars); } String String::right(int p_chars) { int from = (int)length() - p_chars; if (from < 0) return ""; int len = p_chars; return 
substr(from, p_chars);
}

String String::strip_edges() {

	int beg = 0, end = length();

	for (int i = 0; i < length(); i++) {

		if (operator[](i) <= 32)
			beg++;
		else
			break;
	}

	for (int i = (int)(length() - 1); i >= 0; i--) {

		if (operator[](i) <= 32)
			end--;
		else
			break;
	}

	return substr(beg, end - beg);
}

String String::get_extension() const {

	int pos = find_last(".");
	if (pos < 0 || pos < MAX(find_last("/"), find_last("\\")))
		return "";

	return substr(pos + 1, length());
}

String String::to_upper() const {

	String upper = *this;

	for (int i = 0; i < upper.size(); i++) {

		const CharType s = upper[i];
		const CharType t = _find_upper(s);
		if (s != t) // avoid copy on write
			upper[i] = t;
	}

	return upper;
}

String String::to_lower() const {

	String lower = *this;

	for (int i = 0; i < lower.size(); i++) {

		const CharType s = lower[i];
		const CharType t = _find_lower(s);
		if (s != t) // avoid copy on write
			lower[i] = t;
	}

	return lower;
}
zytrax-master/globals/rstring.h000066400000000000000000000105131347722000700171760ustar00rootroot00000000000000//
// C++ Interface: rstring
//
// Description:
//
//
// Author: Juan Linietsky , (C) 2005
//
// Copyright: See COPYING file that comes with this distribution
//
//
#ifndef RESHAKEDSTRING_H
#define RESHAKEDSTRING_H

#include "globals/config.h"

class CharString {

	struct Shared {

		int refcount;
		char *data;
	};

	Shared *shared;
	void free_shared();
	void take_shared(Shared *p_shared);
	friend class String;
	CharString(char *p_data);

public:
	const CharString &operator=(CharString &p_str);
	const char *get_data();

	CharString();
	CharString(const CharString &p_src);
	~CharString();
};

class String {
public:
	typedef wchar_t CharType; // -- standard
	// typedef unsigned short CharType; // ucs16

private:
	struct Shared {

		CharType *data;
		int len;
		int refcount;
	};

	Shared *shared;

	/**
	 * Creates the shared data, to a fixed length in case it's needed, and a zero always at the end.
	 * By default creates empty
	 */
	void create_shared(int p_length = 0);

	/**
	 * Free shared data.
If still being used by other strings, it decreases the refcount, otherwise it deletes the data */ void free_shared(); /** * Copy the data from another string. It shares the data with the other string */ void copy_from(String &p_string); /** * Copy the data from something else, the data is not shared as it is not possible */ void copy_from(const char *p_cstr); void copy_from(const CharType *p_cstr, int p_clip_to_len = -1); void copy_from(const CharType &p_char); /** * Resize the data, copy on write should often be used before */ void resize_shared(int p_newsize); /** * Copy on write: * If the current string is going to be EDITED (not replaced) * this class creates a local copy of the shared string data (in case it is used by many strings, otherwise it's not touched) */ void copy_on_write(); //copy on write public: /* Regular Operators */ bool operator=(String p_str); bool operator=(CharType p_chr); bool operator==(String p_str) const; bool operator!=(String p_str) const; String operator+(const String &) const; String operator+(CharType p_char) const; String &operator+=(const String &); String &operator+=(CharType p_str); String &operator+=(const char *p_str); String &operator+=(const CharType *p_str); /* Compatibility Operators */ void operator=(const char *p_str); void operator=(const CharType *p_str); bool operator==(const char *p_str) const; bool operator==(const CharType *p_str) const; bool operator!=(const char *p_str) const; bool operator!=(const CharType *p_str) const; bool operator<(const CharType *p_str) const; bool operator<(const char *p_str) const; bool operator<(String p_str) const; signed char nocasecmp_to(String p_str) const; ////strcmp like, <0 for less 0 for equal > 0 for greater /* [] op */ const CharType &operator[](int p_idx) const; //constref CharType &operator[](int p_idx); //assignment const CharType *c_str() const; /* standard size stuff */ int length() const; int size() const; bool empty() const; /* complex helpers */ String substr(int 
p_from, int p_chars) const; int find(String p_str, int p_from = 0) const; ///< return <0 if failed int find_last(String p_str) const; ///< return <0 if failed int findn(String p_str, int p_from = 0) const; ///< return <0 if failed, case insensitive void replace(String p_key, String p_with); void insert(int p_at_pos, String p_string); static String num(double p_num, int p_digits = -1); double to_double() const; int to_int() const; int get_slice_count(String p_splitter); String get_slice(String p_splitter, int p_slice); String to_upper(); String left(int p_chars); String right(int p_chars); void erase(int p_pos, int p_chars); CharString ascii(bool p_allow_extended = false) const; CharString utf8() const; bool parse_utf8(const char *p_utf8); //return true on error /** * The constructors must not depend on other overloads */ String to_upper() const; String to_lower() const; String get_extension() const; String strip_edges(); String(); String(CharType p_char); String(const char *p_str); String(const CharType *p_str, int p_clip_to_len = -1); String(const String &p_string); ~String(); }; bool operator==(const char *p_chr, const String &p_str); String operator+(const char *p_chr, const String &p_str); String operator+(String::CharType p_chr, const String &p_str); #endif zytrax-master/globals/typedefs.h000066400000000000000000000114201347722000700173270ustar00rootroot00000000000000 #ifndef TYPEDEFS_H #define TYPEDEFS_H #include /** * Basic definitions and simple functions to be used everywhere.. */ #ifndef _STR #define _STR(m_x) #m_x #define _MKSTR(m_x) _STR(m_x) #endif #define VERSION_MAJOR 1 #define VERSION_MINOR 0 #define VERSION_STATUS alpha #define VERSION_SOFTWARE_NAME "ZyTrax" #define VERSION_COPYRIGHT "(c) 2019 Juan Linietsky" #define VERSION_MKSTRING \ _MKSTR(VERSION_MAJOR) \ "." _MKSTR(VERSION_MINOR) "-" _MKSTR(VERSION_STATUS) #define VERSION_WITH_COPYRIGHT VERSION_SOFTWARE_NAME " v" _MKSTR(VERSION_MAJOR) "." 
_MKSTR(VERSION_MINOR) "-" _MKSTR(VERSION_STATUS) " " VERSION_COPYRIGHT

#if defined(__GNUC__) && (__GNUC__ >= 4)
#define _FORCE_INLINE_ __attribute__((always_inline)) inline
#define _FORCE_ALIGN_ __attribute__((aligned(16)))
#elif defined(_MSC_VER)
#define _FORCE_INLINE_ __forceinline
#error no idea how to align in MSVC, find out
#else
#define _FORCE_INLINE_ inline
#define _FORCE_ALIGN_
#endif

//custom, gcc-safe offsetof, because gcc complains a lot.
template <class T>
T *_nullptr() {
	T *t = NULL;
	return t;
}

#define OFFSET_OF(st, m) \
	((size_t)((char *)&(_nullptr<st>()->m) - (char *)0))

/**
 * Some platforms (devices) do not define NULL.
 */

/**
 * Windows defines a lot of bad stuff we'll never ever use. Undefine it.
 */

#ifdef _WIN32

#undef min // override standard definition
#undef max // override standard definition
#undef ERROR // override (really stupid) wingdi.h standard definition
#undef DELETE // override (another really stupid) winnt.h standard definition
#undef MessageBox // override winuser.h standard definition
#undef MIN // override standard definition
#undef MAX // override standard definition
#undef CLAMP // override standard definition
#undef Error
#undef OK

#endif

#include "error_list.h"
#include "error_macros.h"

/**
 * Types defined for portability.
 * libSDL uses the same convention, so if libSDL is in use, we just use SDL ones.
*/

#ifdef _MSC_VER

/* Microsoft Visual C doesn't support the C99/C++0x standard types, so redefine them */

typedef signed __int8 int8_t;
typedef unsigned __int8 uint8_t;
typedef signed __int16 int16_t;
typedef unsigned __int16 uint16_t;
typedef signed __int32 int32_t;
typedef unsigned __int32 uint32_t;
typedef signed __int64 int64_t;
typedef unsigned __int64 uint64_t;

#else

#ifdef NO_STDINT_H
typedef unsigned char uint8_t;
typedef signed char int8_t;
typedef unsigned short uint16_t;
typedef signed short int16_t;
typedef unsigned int uint32_t;
typedef signed int int32_t;
typedef long long int64_t;
typedef unsigned long long uint64_t;
#else
#include <stdint.h>
#endif

#endif

/** Generic ABS function */

#ifndef ABS
#define ABS(m_v) ((m_v < 0) ? (-(m_v)) : (m_v))
#endif

#ifndef SIGN
#define SIGN(m_v) ((m_v < 0) ? (-1) : (1))
#endif

#ifndef MIN
#define MIN(m_a, m_b) (((m_a) < (m_b)) ? (m_a) : (m_b))
#endif

#ifndef MAX
#define MAX(m_a, m_b) (((m_a) > (m_b)) ? (m_a) : (m_b))
#endif

#ifndef CLAMP
#define CLAMP(m_a, m_min, m_max) (((m_a) < (m_min)) ? (m_min) : (((m_a) > (m_max)) ? m_max : m_a))
#endif

/** Generic swap template */

#ifndef SWAP
#define SWAP(m_x, m_y) __swap_tmpl(m_x, m_y)
template <class T>
inline void __swap_tmpl(T &x, T &y) {

	T aux = x;
	x = y;
	y = aux;
}
#endif //swap

#define HEX2CHR(m_hex) ((m_hex >= '0' && m_hex <= '9') ? (m_hex - '0') : \
		((m_hex >= 'A' && m_hex <= 'F') ? (10 + m_hex - 'A') : \
				((m_hex >= 'a' && m_hex <= 'f') ? (10 + m_hex - 'a') : 0)))

/** Function to find the nearest (bigger) power of 2 to an integer */

static inline unsigned int nearest_power_of_2(unsigned int p_number) {

	for (int i = 30; i >= 0; i--) {

		if (p_number & (1 << i))
			return ((p_number == (unsigned int)(1 << i)) ?
p_number : (1 << (i + 1))); } return 0; } /** Function to find the nearest (bigger) power of 2 to an integer */ static inline unsigned int nearest_shift(unsigned int p_number) { for (int i = 30; i >= 0; i--) { if (p_number & (1 << i)) return i + 1; } return 0; } /** get a shift value from a power of 2 */ static inline int get_shift_from_power_of_2(unsigned int p_pixel) { // return a GL_TEXTURE_SIZE_ENUM for (unsigned int i = 0; i < 32; i++) { if (p_pixel == (unsigned int)(1 << i)) return i; } return -1; } /** Swap 32 bits value for endianness */ static inline uint32_t BSWAP32(uint32_t x) { return ((x << 24) | ((x << 8) & 0x00FF0000) | ((x >> 8) & 0x0000FF00) | (x >> 24)); } /** When compiling with RTTI, we can add an "extra" * layer of safeness in many operations, so dynamic_cast * is used besides casting by enum. */ template struct Comparator { inline bool operator()(const T &p_a, const T &p_b) const { return (p_a < p_b); } }; #define __STRX(m_index) #m_index #define __STR(m_index) __STRX(m_index) #endif /* typedefs.h */ zytrax-master/globals/ucaps.h000066400000000000000000000704711347722000700166320ustar00rootroot00000000000000#ifndef UCAPS_H #define UCAPS_H //satan invented unicode? 
#define CAPS_LEN 666 static const int caps_table[CAPS_LEN][2] = { { 0x0061, 0x0041 }, { 0x0062, 0x0042 }, { 0x0063, 0x0043 }, { 0x0064, 0x0044 }, { 0x0065, 0x0045 }, { 0x0066, 0x0046 }, { 0x0067, 0x0047 }, { 0x0068, 0x0048 }, { 0x0069, 0x0049 }, { 0x006A, 0x004A }, { 0x006B, 0x004B }, { 0x006C, 0x004C }, { 0x006D, 0x004D }, { 0x006E, 0x004E }, { 0x006F, 0x004F }, { 0x0070, 0x0050 }, { 0x0071, 0x0051 }, { 0x0072, 0x0052 }, { 0x0073, 0x0053 }, { 0x0074, 0x0054 }, { 0x0075, 0x0055 }, { 0x0076, 0x0056 }, { 0x0077, 0x0057 }, { 0x0078, 0x0058 }, { 0x0079, 0x0059 }, { 0x007A, 0x005A }, { 0x00E0, 0x00C0 }, { 0x00E1, 0x00C1 }, { 0x00E2, 0x00C2 }, { 0x00E3, 0x00C3 }, { 0x00E4, 0x00C4 }, { 0x00E5, 0x00C5 }, { 0x00E6, 0x00C6 }, { 0x00E7, 0x00C7 }, { 0x00E8, 0x00C8 }, { 0x00E9, 0x00C9 }, { 0x00EA, 0x00CA }, { 0x00EB, 0x00CB }, { 0x00EC, 0x00CC }, { 0x00ED, 0x00CD }, { 0x00EE, 0x00CE }, { 0x00EF, 0x00CF }, { 0x00F0, 0x00D0 }, { 0x00F1, 0x00D1 }, { 0x00F2, 0x00D2 }, { 0x00F3, 0x00D3 }, { 0x00F4, 0x00D4 }, { 0x00F5, 0x00D5 }, { 0x00F6, 0x00D6 }, { 0x00F8, 0x00D8 }, { 0x00F9, 0x00D9 }, { 0x00FA, 0x00DA }, { 0x00FB, 0x00DB }, { 0x00FC, 0x00DC }, { 0x00FD, 0x00DD }, { 0x00FE, 0x00DE }, { 0x00FF, 0x0178 }, { 0x0101, 0x0100 }, { 0x0103, 0x0102 }, { 0x0105, 0x0104 }, { 0x0107, 0x0106 }, { 0x0109, 0x0108 }, { 0x010B, 0x010A }, { 0x010D, 0x010C }, { 0x010F, 0x010E }, { 0x0111, 0x0110 }, { 0x0113, 0x0112 }, { 0x0115, 0x0114 }, { 0x0117, 0x0116 }, { 0x0119, 0x0118 }, { 0x011B, 0x011A }, { 0x011D, 0x011C }, { 0x011F, 0x011E }, { 0x0121, 0x0120 }, { 0x0123, 0x0122 }, { 0x0125, 0x0124 }, { 0x0127, 0x0126 }, { 0x0129, 0x0128 }, { 0x012B, 0x012A }, { 0x012D, 0x012C }, { 0x012F, 0x012E }, { 0x0131, 0x0049 }, { 0x0133, 0x0132 }, { 0x0135, 0x0134 }, { 0x0137, 0x0136 }, { 0x013A, 0x0139 }, { 0x013C, 0x013B }, { 0x013E, 0x013D }, { 0x0140, 0x013F }, { 0x0142, 0x0141 }, { 0x0144, 0x0143 }, { 0x0146, 0x0145 }, { 0x0148, 0x0147 }, { 0x014B, 0x014A }, { 0x014D, 0x014C }, { 0x014F, 0x014E }, { 0x0151, 
0x0150 }, { 0x0153, 0x0152 }, { 0x0155, 0x0154 }, { 0x0157, 0x0156 }, { 0x0159, 0x0158 }, { 0x015B, 0x015A }, { 0x015D, 0x015C }, { 0x015F, 0x015E }, { 0x0161, 0x0160 }, { 0x0163, 0x0162 }, { 0x0165, 0x0164 }, { 0x0167, 0x0166 }, { 0x0169, 0x0168 }, { 0x016B, 0x016A }, { 0x016D, 0x016C }, { 0x016F, 0x016E }, { 0x0171, 0x0170 }, { 0x0173, 0x0172 }, { 0x0175, 0x0174 }, { 0x0177, 0x0176 }, { 0x017A, 0x0179 }, { 0x017C, 0x017B }, { 0x017E, 0x017D }, { 0x0183, 0x0182 }, { 0x0185, 0x0184 }, { 0x0188, 0x0187 }, { 0x018C, 0x018B }, { 0x0192, 0x0191 }, { 0x0199, 0x0198 }, { 0x01A1, 0x01A0 }, { 0x01A3, 0x01A2 }, { 0x01A5, 0x01A4 }, { 0x01A8, 0x01A7 }, { 0x01AD, 0x01AC }, { 0x01B0, 0x01AF }, { 0x01B4, 0x01B3 }, { 0x01B6, 0x01B5 }, { 0x01B9, 0x01B8 }, { 0x01BD, 0x01BC }, { 0x01C6, 0x01C4 }, { 0x01C9, 0x01C7 }, { 0x01CC, 0x01CA }, { 0x01CE, 0x01CD }, { 0x01D0, 0x01CF }, { 0x01D2, 0x01D1 }, { 0x01D4, 0x01D3 }, { 0x01D6, 0x01D5 }, { 0x01D8, 0x01D7 }, { 0x01DA, 0x01D9 }, { 0x01DC, 0x01DB }, { 0x01DF, 0x01DE }, { 0x01E1, 0x01E0 }, { 0x01E3, 0x01E2 }, { 0x01E5, 0x01E4 }, { 0x01E7, 0x01E6 }, { 0x01E9, 0x01E8 }, { 0x01EB, 0x01EA }, { 0x01ED, 0x01EC }, { 0x01EF, 0x01EE }, { 0x01F3, 0x01F1 }, { 0x01F5, 0x01F4 }, { 0x01FB, 0x01FA }, { 0x01FD, 0x01FC }, { 0x01FF, 0x01FE }, { 0x0201, 0x0200 }, { 0x0203, 0x0202 }, { 0x0205, 0x0204 }, { 0x0207, 0x0206 }, { 0x0209, 0x0208 }, { 0x020B, 0x020A }, { 0x020D, 0x020C }, { 0x020F, 0x020E }, { 0x0211, 0x0210 }, { 0x0213, 0x0212 }, { 0x0215, 0x0214 }, { 0x0217, 0x0216 }, { 0x0253, 0x0181 }, { 0x0254, 0x0186 }, { 0x0257, 0x018A }, { 0x0258, 0x018E }, { 0x0259, 0x018F }, { 0x025B, 0x0190 }, { 0x0260, 0x0193 }, { 0x0263, 0x0194 }, { 0x0268, 0x0197 }, { 0x0269, 0x0196 }, { 0x026F, 0x019C }, { 0x0272, 0x019D }, { 0x0275, 0x019F }, { 0x0283, 0x01A9 }, { 0x0288, 0x01AE }, { 0x028A, 0x01B1 }, { 0x028B, 0x01B2 }, { 0x0292, 0x01B7 }, { 0x03AC, 0x0386 }, { 0x03AD, 0x0388 }, { 0x03AE, 0x0389 }, { 0x03AF, 0x038A }, { 0x03B1, 0x0391 }, { 0x03B2, 0x0392 }, { 0x03B3, 
0x0393 }, { 0x03B4, 0x0394 }, { 0x03B5, 0x0395 }, { 0x03B6, 0x0396 }, { 0x03B7, 0x0397 }, { 0x03B8, 0x0398 }, { 0x03B9, 0x0399 }, { 0x03BA, 0x039A }, { 0x03BB, 0x039B }, { 0x03BC, 0x039C }, { 0x03BD, 0x039D }, { 0x03BE, 0x039E }, { 0x03BF, 0x039F }, { 0x03C0, 0x03A0 }, { 0x03C1, 0x03A1 }, { 0x03C3, 0x03A3 }, { 0x03C4, 0x03A4 }, { 0x03C5, 0x03A5 }, { 0x03C6, 0x03A6 }, { 0x03C7, 0x03A7 }, { 0x03C8, 0x03A8 }, { 0x03C9, 0x03A9 }, { 0x03CA, 0x03AA }, { 0x03CB, 0x03AB }, { 0x03CC, 0x038C }, { 0x03CD, 0x038E }, { 0x03CE, 0x038F }, { 0x03E3, 0x03E2 }, { 0x03E5, 0x03E4 }, { 0x03E7, 0x03E6 }, { 0x03E9, 0x03E8 }, { 0x03EB, 0x03EA }, { 0x03ED, 0x03EC }, { 0x03EF, 0x03EE }, { 0x0430, 0x0410 }, { 0x0431, 0x0411 }, { 0x0432, 0x0412 }, { 0x0433, 0x0413 }, { 0x0434, 0x0414 }, { 0x0435, 0x0415 }, { 0x0436, 0x0416 }, { 0x0437, 0x0417 }, { 0x0438, 0x0418 }, { 0x0439, 0x0419 }, { 0x043A, 0x041A }, { 0x043B, 0x041B }, { 0x043C, 0x041C }, { 0x043D, 0x041D }, { 0x043E, 0x041E }, { 0x043F, 0x041F }, { 0x0440, 0x0420 }, { 0x0441, 0x0421 }, { 0x0442, 0x0422 }, { 0x0443, 0x0423 }, { 0x0444, 0x0424 }, { 0x0445, 0x0425 }, { 0x0446, 0x0426 }, { 0x0447, 0x0427 }, { 0x0448, 0x0428 }, { 0x0449, 0x0429 }, { 0x044A, 0x042A }, { 0x044B, 0x042B }, { 0x044C, 0x042C }, { 0x044D, 0x042D }, { 0x044E, 0x042E }, { 0x044F, 0x042F }, { 0x0451, 0x0401 }, { 0x0452, 0x0402 }, { 0x0453, 0x0403 }, { 0x0454, 0x0404 }, { 0x0455, 0x0405 }, { 0x0456, 0x0406 }, { 0x0457, 0x0407 }, { 0x0458, 0x0408 }, { 0x0459, 0x0409 }, { 0x045A, 0x040A }, { 0x045B, 0x040B }, { 0x045C, 0x040C }, { 0x045E, 0x040E }, { 0x045F, 0x040F }, { 0x0461, 0x0460 }, { 0x0463, 0x0462 }, { 0x0465, 0x0464 }, { 0x0467, 0x0466 }, { 0x0469, 0x0468 }, { 0x046B, 0x046A }, { 0x046D, 0x046C }, { 0x046F, 0x046E }, { 0x0471, 0x0470 }, { 0x0473, 0x0472 }, { 0x0475, 0x0474 }, { 0x0477, 0x0476 }, { 0x0479, 0x0478 }, { 0x047B, 0x047A }, { 0x047D, 0x047C }, { 0x047F, 0x047E }, { 0x0481, 0x0480 }, { 0x0491, 0x0490 }, { 0x0493, 0x0492 }, { 0x0495, 0x0494 }, { 0x0497, 
0x0496 }, { 0x0499, 0x0498 }, { 0x049B, 0x049A }, { 0x049D, 0x049C }, { 0x049F, 0x049E }, { 0x04A1, 0x04A0 }, { 0x04A3, 0x04A2 }, { 0x04A5, 0x04A4 }, { 0x04A7, 0x04A6 }, { 0x04A9, 0x04A8 }, { 0x04AB, 0x04AA }, { 0x04AD, 0x04AC }, { 0x04AF, 0x04AE }, { 0x04B1, 0x04B0 }, { 0x04B3, 0x04B2 }, { 0x04B5, 0x04B4 }, { 0x04B7, 0x04B6 }, { 0x04B9, 0x04B8 }, { 0x04BB, 0x04BA }, { 0x04BD, 0x04BC }, { 0x04BF, 0x04BE }, { 0x04C2, 0x04C1 }, { 0x04C4, 0x04C3 }, { 0x04C8, 0x04C7 }, { 0x04CC, 0x04CB }, { 0x04D1, 0x04D0 }, { 0x04D3, 0x04D2 }, { 0x04D5, 0x04D4 }, { 0x04D7, 0x04D6 }, { 0x04D9, 0x04D8 }, { 0x04DB, 0x04DA }, { 0x04DD, 0x04DC }, { 0x04DF, 0x04DE }, { 0x04E1, 0x04E0 }, { 0x04E3, 0x04E2 }, { 0x04E5, 0x04E4 }, { 0x04E7, 0x04E6 }, { 0x04E9, 0x04E8 }, { 0x04EB, 0x04EA }, { 0x04EF, 0x04EE }, { 0x04F1, 0x04F0 }, { 0x04F3, 0x04F2 }, { 0x04F5, 0x04F4 }, { 0x04F9, 0x04F8 }, { 0x0561, 0x0531 }, { 0x0562, 0x0532 }, { 0x0563, 0x0533 }, { 0x0564, 0x0534 }, { 0x0565, 0x0535 }, { 0x0566, 0x0536 }, { 0x0567, 0x0537 }, { 0x0568, 0x0538 }, { 0x0569, 0x0539 }, { 0x056A, 0x053A }, { 0x056B, 0x053B }, { 0x056C, 0x053C }, { 0x056D, 0x053D }, { 0x056E, 0x053E }, { 0x056F, 0x053F }, { 0x0570, 0x0540 }, { 0x0571, 0x0541 }, { 0x0572, 0x0542 }, { 0x0573, 0x0543 }, { 0x0574, 0x0544 }, { 0x0575, 0x0545 }, { 0x0576, 0x0546 }, { 0x0577, 0x0547 }, { 0x0578, 0x0548 }, { 0x0579, 0x0549 }, { 0x057A, 0x054A }, { 0x057B, 0x054B }, { 0x057C, 0x054C }, { 0x057D, 0x054D }, { 0x057E, 0x054E }, { 0x057F, 0x054F }, { 0x0580, 0x0550 }, { 0x0581, 0x0551 }, { 0x0582, 0x0552 }, { 0x0583, 0x0553 }, { 0x0584, 0x0554 }, { 0x0585, 0x0555 }, { 0x0586, 0x0556 }, { 0x10D0, 0x10A0 }, { 0x10D1, 0x10A1 }, { 0x10D2, 0x10A2 }, { 0x10D3, 0x10A3 }, { 0x10D4, 0x10A4 }, { 0x10D5, 0x10A5 }, { 0x10D6, 0x10A6 }, { 0x10D7, 0x10A7 }, { 0x10D8, 0x10A8 }, { 0x10D9, 0x10A9 }, { 0x10DA, 0x10AA }, { 0x10DB, 0x10AB }, { 0x10DC, 0x10AC }, { 0x10DD, 0x10AD }, { 0x10DE, 0x10AE }, { 0x10DF, 0x10AF }, { 0x10E0, 0x10B0 }, { 0x10E1, 0x10B1 }, { 0x10E2, 
0x10B2 }, { 0x10E3, 0x10B3 }, { 0x10E4, 0x10B4 }, { 0x10E5, 0x10B5 }, { 0x10E6, 0x10B6 }, { 0x10E7, 0x10B7 }, { 0x10E8, 0x10B8 }, { 0x10E9, 0x10B9 }, { 0x10EA, 0x10BA }, { 0x10EB, 0x10BB }, { 0x10EC, 0x10BC }, { 0x10ED, 0x10BD }, { 0x10EE, 0x10BE }, { 0x10EF, 0x10BF }, { 0x10F0, 0x10C0 }, { 0x10F1, 0x10C1 }, { 0x10F2, 0x10C2 }, { 0x10F3, 0x10C3 }, { 0x10F4, 0x10C4 }, { 0x10F5, 0x10C5 }, { 0x1E01, 0x1E00 }, { 0x1E03, 0x1E02 }, { 0x1E05, 0x1E04 }, { 0x1E07, 0x1E06 }, { 0x1E09, 0x1E08 }, { 0x1E0B, 0x1E0A }, { 0x1E0D, 0x1E0C }, { 0x1E0F, 0x1E0E }, { 0x1E11, 0x1E10 }, { 0x1E13, 0x1E12 }, { 0x1E15, 0x1E14 }, { 0x1E17, 0x1E16 }, { 0x1E19, 0x1E18 }, { 0x1E1B, 0x1E1A }, { 0x1E1D, 0x1E1C }, { 0x1E1F, 0x1E1E }, { 0x1E21, 0x1E20 }, { 0x1E23, 0x1E22 }, { 0x1E25, 0x1E24 }, { 0x1E27, 0x1E26 }, { 0x1E29, 0x1E28 }, { 0x1E2B, 0x1E2A }, { 0x1E2D, 0x1E2C }, { 0x1E2F, 0x1E2E }, { 0x1E31, 0x1E30 }, { 0x1E33, 0x1E32 }, { 0x1E35, 0x1E34 }, { 0x1E37, 0x1E36 }, { 0x1E39, 0x1E38 }, { 0x1E3B, 0x1E3A }, { 0x1E3D, 0x1E3C }, { 0x1E3F, 0x1E3E }, { 0x1E41, 0x1E40 }, { 0x1E43, 0x1E42 }, { 0x1E45, 0x1E44 }, { 0x1E47, 0x1E46 }, { 0x1E49, 0x1E48 }, { 0x1E4B, 0x1E4A }, { 0x1E4D, 0x1E4C }, { 0x1E4F, 0x1E4E }, { 0x1E51, 0x1E50 }, { 0x1E53, 0x1E52 }, { 0x1E55, 0x1E54 }, { 0x1E57, 0x1E56 }, { 0x1E59, 0x1E58 }, { 0x1E5B, 0x1E5A }, { 0x1E5D, 0x1E5C }, { 0x1E5F, 0x1E5E }, { 0x1E61, 0x1E60 }, { 0x1E63, 0x1E62 }, { 0x1E65, 0x1E64 }, { 0x1E67, 0x1E66 }, { 0x1E69, 0x1E68 }, { 0x1E6B, 0x1E6A }, { 0x1E6D, 0x1E6C }, { 0x1E6F, 0x1E6E }, { 0x1E71, 0x1E70 }, { 0x1E73, 0x1E72 }, { 0x1E75, 0x1E74 }, { 0x1E77, 0x1E76 }, { 0x1E79, 0x1E78 }, { 0x1E7B, 0x1E7A }, { 0x1E7D, 0x1E7C }, { 0x1E7F, 0x1E7E }, { 0x1E81, 0x1E80 }, { 0x1E83, 0x1E82 }, { 0x1E85, 0x1E84 }, { 0x1E87, 0x1E86 }, { 0x1E89, 0x1E88 }, { 0x1E8B, 0x1E8A }, { 0x1E8D, 0x1E8C }, { 0x1E8F, 0x1E8E }, { 0x1E91, 0x1E90 }, { 0x1E93, 0x1E92 }, { 0x1E95, 0x1E94 }, { 0x1EA1, 0x1EA0 }, { 0x1EA3, 0x1EA2 }, { 0x1EA5, 0x1EA4 }, { 0x1EA7, 0x1EA6 }, { 0x1EA9, 0x1EA8 }, { 0x1EAB, 
0x1EAA }, { 0x1EAD, 0x1EAC }, { 0x1EAF, 0x1EAE }, { 0x1EB1, 0x1EB0 }, { 0x1EB3, 0x1EB2 }, { 0x1EB5, 0x1EB4 }, { 0x1EB7, 0x1EB6 }, { 0x1EB9, 0x1EB8 }, { 0x1EBB, 0x1EBA }, { 0x1EBD, 0x1EBC }, { 0x1EBF, 0x1EBE }, { 0x1EC1, 0x1EC0 }, { 0x1EC3, 0x1EC2 }, { 0x1EC5, 0x1EC4 }, { 0x1EC7, 0x1EC6 }, { 0x1EC9, 0x1EC8 }, { 0x1ECB, 0x1ECA }, { 0x1ECD, 0x1ECC }, { 0x1ECF, 0x1ECE }, { 0x1ED1, 0x1ED0 }, { 0x1ED3, 0x1ED2 }, { 0x1ED5, 0x1ED4 }, { 0x1ED7, 0x1ED6 }, { 0x1ED9, 0x1ED8 }, { 0x1EDB, 0x1EDA }, { 0x1EDD, 0x1EDC }, { 0x1EDF, 0x1EDE }, { 0x1EE1, 0x1EE0 }, { 0x1EE3, 0x1EE2 }, { 0x1EE5, 0x1EE4 }, { 0x1EE7, 0x1EE6 }, { 0x1EE9, 0x1EE8 }, { 0x1EEB, 0x1EEA }, { 0x1EED, 0x1EEC }, { 0x1EEF, 0x1EEE }, { 0x1EF1, 0x1EF0 }, { 0x1EF3, 0x1EF2 }, { 0x1EF5, 0x1EF4 }, { 0x1EF7, 0x1EF6 }, { 0x1EF9, 0x1EF8 }, { 0x1F00, 0x1F08 }, { 0x1F01, 0x1F09 }, { 0x1F02, 0x1F0A }, { 0x1F03, 0x1F0B }, { 0x1F04, 0x1F0C }, { 0x1F05, 0x1F0D }, { 0x1F06, 0x1F0E }, { 0x1F07, 0x1F0F }, { 0x1F10, 0x1F18 }, { 0x1F11, 0x1F19 }, { 0x1F12, 0x1F1A }, { 0x1F13, 0x1F1B }, { 0x1F14, 0x1F1C }, { 0x1F15, 0x1F1D }, { 0x1F20, 0x1F28 }, { 0x1F21, 0x1F29 }, { 0x1F22, 0x1F2A }, { 0x1F23, 0x1F2B }, { 0x1F24, 0x1F2C }, { 0x1F25, 0x1F2D }, { 0x1F26, 0x1F2E }, { 0x1F27, 0x1F2F }, { 0x1F30, 0x1F38 }, { 0x1F31, 0x1F39 }, { 0x1F32, 0x1F3A }, { 0x1F33, 0x1F3B }, { 0x1F34, 0x1F3C }, { 0x1F35, 0x1F3D }, { 0x1F36, 0x1F3E }, { 0x1F37, 0x1F3F }, { 0x1F40, 0x1F48 }, { 0x1F41, 0x1F49 }, { 0x1F42, 0x1F4A }, { 0x1F43, 0x1F4B }, { 0x1F44, 0x1F4C }, { 0x1F45, 0x1F4D }, { 0x1F51, 0x1F59 }, { 0x1F53, 0x1F5B }, { 0x1F55, 0x1F5D }, { 0x1F57, 0x1F5F }, { 0x1F60, 0x1F68 }, { 0x1F61, 0x1F69 }, { 0x1F62, 0x1F6A }, { 0x1F63, 0x1F6B }, { 0x1F64, 0x1F6C }, { 0x1F65, 0x1F6D }, { 0x1F66, 0x1F6E }, { 0x1F67, 0x1F6F }, { 0x1F80, 0x1F88 }, { 0x1F81, 0x1F89 }, { 0x1F82, 0x1F8A }, { 0x1F83, 0x1F8B }, { 0x1F84, 0x1F8C }, { 0x1F85, 0x1F8D }, { 0x1F86, 0x1F8E }, { 0x1F87, 0x1F8F }, { 0x1F90, 0x1F98 }, { 0x1F91, 0x1F99 }, { 0x1F92, 0x1F9A }, { 0x1F93, 0x1F9B }, { 0x1F94, 
0x1F9C }, { 0x1F95, 0x1F9D }, { 0x1F96, 0x1F9E }, { 0x1F97, 0x1F9F }, { 0x1FA0, 0x1FA8 }, { 0x1FA1, 0x1FA9 }, { 0x1FA2, 0x1FAA }, { 0x1FA3, 0x1FAB }, { 0x1FA4, 0x1FAC }, { 0x1FA5, 0x1FAD }, { 0x1FA6, 0x1FAE }, { 0x1FA7, 0x1FAF }, { 0x1FB0, 0x1FB8 }, { 0x1FB1, 0x1FB9 }, { 0x1FD0, 0x1FD8 }, { 0x1FD1, 0x1FD9 }, { 0x1FE0, 0x1FE8 }, { 0x1FE1, 0x1FE9 }, { 0x24D0, 0x24B6 }, { 0x24D1, 0x24B7 }, { 0x24D2, 0x24B8 }, { 0x24D3, 0x24B9 }, { 0x24D4, 0x24BA }, { 0x24D5, 0x24BB }, { 0x24D6, 0x24BC }, { 0x24D7, 0x24BD }, { 0x24D8, 0x24BE }, { 0x24D9, 0x24BF }, { 0x24DA, 0x24C0 }, { 0x24DB, 0x24C1 }, { 0x24DC, 0x24C2 }, { 0x24DD, 0x24C3 }, { 0x24DE, 0x24C4 }, { 0x24DF, 0x24C5 }, { 0x24E0, 0x24C6 }, { 0x24E1, 0x24C7 }, { 0x24E2, 0x24C8 }, { 0x24E3, 0x24C9 }, { 0x24E4, 0x24CA }, { 0x24E5, 0x24CB }, { 0x24E6, 0x24CC }, { 0x24E7, 0x24CD }, { 0x24E8, 0x24CE }, { 0x24E9, 0x24CF }, { 0xFF41, 0xFF21 }, { 0xFF42, 0xFF22 }, { 0xFF43, 0xFF23 }, { 0xFF44, 0xFF24 }, { 0xFF45, 0xFF25 }, { 0xFF46, 0xFF26 }, { 0xFF47, 0xFF27 }, { 0xFF48, 0xFF28 }, { 0xFF49, 0xFF29 }, { 0xFF4A, 0xFF2A }, { 0xFF4B, 0xFF2B }, { 0xFF4C, 0xFF2C }, { 0xFF4D, 0xFF2D }, { 0xFF4E, 0xFF2E }, { 0xFF4F, 0xFF2F }, { 0xFF50, 0xFF30 }, { 0xFF51, 0xFF31 }, { 0xFF52, 0xFF32 }, { 0xFF53, 0xFF33 }, { 0xFF54, 0xFF34 }, { 0xFF55, 0xFF35 }, { 0xFF56, 0xFF36 }, { 0xFF57, 0xFF37 }, { 0xFF58, 0xFF38 }, { 0xFF59, 0xFF39 }, { 0xFF5A, 0xFF3A }, }; static const int reverse_caps_table[CAPS_LEN - 1][2] = { { 0x0041, 0x0061 }, { 0x0042, 0x0062 }, { 0x0043, 0x0063 }, { 0x0044, 0x0064 }, { 0x0045, 0x0065 }, { 0x0046, 0x0066 }, { 0x0047, 0x0067 }, { 0x0048, 0x0068 }, { 0x0049, 0x0069 }, // { 0x0049, 0x0131 }, // dotless I { 0x004A, 0x006A }, { 0x004B, 0x006B }, { 0x004C, 0x006C }, { 0x004D, 0x006D }, { 0x004E, 0x006E }, { 0x004F, 0x006F }, { 0x0050, 0x0070 }, { 0x0051, 0x0071 }, { 0x0052, 0x0072 }, { 0x0053, 0x0073 }, { 0x0054, 0x0074 }, { 0x0055, 0x0075 }, { 0x0056, 0x0076 }, { 0x0057, 0x0077 }, { 0x0058, 0x0078 }, { 0x0059, 0x0079 }, { 0x005A, 
0x007A }, { 0x00C0, 0x00E0 }, { 0x00C1, 0x00E1 }, { 0x00C2, 0x00E2 }, { 0x00C3, 0x00E3 }, { 0x00C4, 0x00E4 }, { 0x00C5, 0x00E5 }, { 0x00C6, 0x00E6 }, { 0x00C7, 0x00E7 }, { 0x00C8, 0x00E8 }, { 0x00C9, 0x00E9 }, { 0x00CA, 0x00EA }, { 0x00CB, 0x00EB }, { 0x00CC, 0x00EC }, { 0x00CD, 0x00ED }, { 0x00CE, 0x00EE }, { 0x00CF, 0x00EF }, { 0x00D0, 0x00F0 }, { 0x00D1, 0x00F1 }, { 0x00D2, 0x00F2 }, { 0x00D3, 0x00F3 }, { 0x00D4, 0x00F4 }, { 0x00D5, 0x00F5 }, { 0x00D6, 0x00F6 }, { 0x00D8, 0x00F8 }, { 0x00D9, 0x00F9 }, { 0x00DA, 0x00FA }, { 0x00DB, 0x00FB }, { 0x00DC, 0x00FC }, { 0x00DD, 0x00FD }, { 0x00DE, 0x00FE }, { 0x0100, 0x0101 }, { 0x0102, 0x0103 }, { 0x0104, 0x0105 }, { 0x0106, 0x0107 }, { 0x0108, 0x0109 }, { 0x010A, 0x010B }, { 0x010C, 0x010D }, { 0x010E, 0x010F }, { 0x0110, 0x0111 }, { 0x0112, 0x0113 }, { 0x0114, 0x0115 }, { 0x0116, 0x0117 }, { 0x0118, 0x0119 }, { 0x011A, 0x011B }, { 0x011C, 0x011D }, { 0x011E, 0x011F }, { 0x0120, 0x0121 }, { 0x0122, 0x0123 }, { 0x0124, 0x0125 }, { 0x0126, 0x0127 }, { 0x0128, 0x0129 }, { 0x012A, 0x012B }, { 0x012C, 0x012D }, { 0x012E, 0x012F }, { 0x0132, 0x0133 }, { 0x0134, 0x0135 }, { 0x0136, 0x0137 }, { 0x0139, 0x013A }, { 0x013B, 0x013C }, { 0x013D, 0x013E }, { 0x013F, 0x0140 }, { 0x0141, 0x0142 }, { 0x0143, 0x0144 }, { 0x0145, 0x0146 }, { 0x0147, 0x0148 }, { 0x014A, 0x014B }, { 0x014C, 0x014D }, { 0x014E, 0x014F }, { 0x0150, 0x0151 }, { 0x0152, 0x0153 }, { 0x0154, 0x0155 }, { 0x0156, 0x0157 }, { 0x0158, 0x0159 }, { 0x015A, 0x015B }, { 0x015C, 0x015D }, { 0x015E, 0x015F }, { 0x0160, 0x0161 }, { 0x0162, 0x0163 }, { 0x0164, 0x0165 }, { 0x0166, 0x0167 }, { 0x0168, 0x0169 }, { 0x016A, 0x016B }, { 0x016C, 0x016D }, { 0x016E, 0x016F }, { 0x0170, 0x0171 }, { 0x0172, 0x0173 }, { 0x0174, 0x0175 }, { 0x0176, 0x0177 }, { 0x0178, 0x00FF }, { 0x0179, 0x017A }, { 0x017B, 0x017C }, { 0x017D, 0x017E }, { 0x0181, 0x0253 }, { 0x0182, 0x0183 }, { 0x0184, 0x0185 }, { 0x0186, 0x0254 }, { 0x0187, 0x0188 }, { 0x018A, 0x0257 }, { 0x018B, 0x018C }, { 0x018E, 
0x0258 }, { 0x018F, 0x0259 }, { 0x0190, 0x025B }, { 0x0191, 0x0192 }, { 0x0193, 0x0260 }, { 0x0194, 0x0263 }, { 0x0196, 0x0269 }, { 0x0197, 0x0268 }, { 0x0198, 0x0199 }, { 0x019C, 0x026F }, { 0x019D, 0x0272 }, { 0x019F, 0x0275 }, { 0x01A0, 0x01A1 }, { 0x01A2, 0x01A3 }, { 0x01A4, 0x01A5 }, { 0x01A7, 0x01A8 }, { 0x01A9, 0x0283 }, { 0x01AC, 0x01AD }, { 0x01AE, 0x0288 }, { 0x01AF, 0x01B0 }, { 0x01B1, 0x028A }, { 0x01B2, 0x028B }, { 0x01B3, 0x01B4 }, { 0x01B5, 0x01B6 }, { 0x01B7, 0x0292 }, { 0x01B8, 0x01B9 }, { 0x01BC, 0x01BD }, { 0x01C4, 0x01C6 }, { 0x01C7, 0x01C9 }, { 0x01CA, 0x01CC }, { 0x01CD, 0x01CE }, { 0x01CF, 0x01D0 }, { 0x01D1, 0x01D2 }, { 0x01D3, 0x01D4 }, { 0x01D5, 0x01D6 }, { 0x01D7, 0x01D8 }, { 0x01D9, 0x01DA }, { 0x01DB, 0x01DC }, { 0x01DE, 0x01DF }, { 0x01E0, 0x01E1 }, { 0x01E2, 0x01E3 }, { 0x01E4, 0x01E5 }, { 0x01E6, 0x01E7 }, { 0x01E8, 0x01E9 }, { 0x01EA, 0x01EB }, { 0x01EC, 0x01ED }, { 0x01EE, 0x01EF }, { 0x01F1, 0x01F3 }, { 0x01F4, 0x01F5 }, { 0x01FA, 0x01FB }, { 0x01FC, 0x01FD }, { 0x01FE, 0x01FF }, { 0x0200, 0x0201 }, { 0x0202, 0x0203 }, { 0x0204, 0x0205 }, { 0x0206, 0x0207 }, { 0x0208, 0x0209 }, { 0x020A, 0x020B }, { 0x020C, 0x020D }, { 0x020E, 0x020F }, { 0x0210, 0x0211 }, { 0x0212, 0x0213 }, { 0x0214, 0x0215 }, { 0x0216, 0x0217 }, { 0x0386, 0x03AC }, { 0x0388, 0x03AD }, { 0x0389, 0x03AE }, { 0x038A, 0x03AF }, { 0x038C, 0x03CC }, { 0x038E, 0x03CD }, { 0x038F, 0x03CE }, { 0x0391, 0x03B1 }, { 0x0392, 0x03B2 }, { 0x0393, 0x03B3 }, { 0x0394, 0x03B4 }, { 0x0395, 0x03B5 }, { 0x0396, 0x03B6 }, { 0x0397, 0x03B7 }, { 0x0398, 0x03B8 }, { 0x0399, 0x03B9 }, { 0x039A, 0x03BA }, { 0x039B, 0x03BB }, { 0x039C, 0x03BC }, { 0x039D, 0x03BD }, { 0x039E, 0x03BE }, { 0x039F, 0x03BF }, { 0x03A0, 0x03C0 }, { 0x03A1, 0x03C1 }, { 0x03A3, 0x03C3 }, { 0x03A4, 0x03C4 }, { 0x03A5, 0x03C5 }, { 0x03A6, 0x03C6 }, { 0x03A7, 0x03C7 }, { 0x03A8, 0x03C8 }, { 0x03A9, 0x03C9 }, { 0x03AA, 0x03CA }, { 0x03AB, 0x03CB }, { 0x03E2, 0x03E3 }, { 0x03E4, 0x03E5 }, { 0x03E6, 0x03E7 }, { 0x03E8, 
0x03E9 }, { 0x03EA, 0x03EB }, { 0x03EC, 0x03ED }, { 0x03EE, 0x03EF }, { 0x0401, 0x0451 }, { 0x0402, 0x0452 }, { 0x0403, 0x0453 }, { 0x0404, 0x0454 }, { 0x0405, 0x0455 }, { 0x0406, 0x0456 }, { 0x0407, 0x0457 }, { 0x0408, 0x0458 }, { 0x0409, 0x0459 }, { 0x040A, 0x045A }, { 0x040B, 0x045B }, { 0x040C, 0x045C }, { 0x040E, 0x045E }, { 0x040F, 0x045F }, { 0x0410, 0x0430 }, { 0x0411, 0x0431 }, { 0x0412, 0x0432 }, { 0x0413, 0x0433 }, { 0x0414, 0x0434 }, { 0x0415, 0x0435 }, { 0x0416, 0x0436 }, { 0x0417, 0x0437 }, { 0x0418, 0x0438 }, { 0x0419, 0x0439 }, { 0x041A, 0x043A }, { 0x041B, 0x043B }, { 0x041C, 0x043C }, { 0x041D, 0x043D }, { 0x041E, 0x043E }, { 0x041F, 0x043F }, { 0x0420, 0x0440 }, { 0x0421, 0x0441 }, { 0x0422, 0x0442 }, { 0x0423, 0x0443 }, { 0x0424, 0x0444 }, { 0x0425, 0x0445 }, { 0x0426, 0x0446 }, { 0x0427, 0x0447 }, { 0x0428, 0x0448 }, { 0x0429, 0x0449 }, { 0x042A, 0x044A }, { 0x042B, 0x044B }, { 0x042C, 0x044C }, { 0x042D, 0x044D }, { 0x042E, 0x044E }, { 0x042F, 0x044F }, { 0x0460, 0x0461 }, { 0x0462, 0x0463 }, { 0x0464, 0x0465 }, { 0x0466, 0x0467 }, { 0x0468, 0x0469 }, { 0x046A, 0x046B }, { 0x046C, 0x046D }, { 0x046E, 0x046F }, { 0x0470, 0x0471 }, { 0x0472, 0x0473 }, { 0x0474, 0x0475 }, { 0x0476, 0x0477 }, { 0x0478, 0x0479 }, { 0x047A, 0x047B }, { 0x047C, 0x047D }, { 0x047E, 0x047F }, { 0x0480, 0x0481 }, { 0x0490, 0x0491 }, { 0x0492, 0x0493 }, { 0x0494, 0x0495 }, { 0x0496, 0x0497 }, { 0x0498, 0x0499 }, { 0x049A, 0x049B }, { 0x049C, 0x049D }, { 0x049E, 0x049F }, { 0x04A0, 0x04A1 }, { 0x04A2, 0x04A3 }, { 0x04A4, 0x04A5 }, { 0x04A6, 0x04A7 }, { 0x04A8, 0x04A9 }, { 0x04AA, 0x04AB }, { 0x04AC, 0x04AD }, { 0x04AE, 0x04AF }, { 0x04B0, 0x04B1 }, { 0x04B2, 0x04B3 }, { 0x04B4, 0x04B5 }, { 0x04B6, 0x04B7 }, { 0x04B8, 0x04B9 }, { 0x04BA, 0x04BB }, { 0x04BC, 0x04BD }, { 0x04BE, 0x04BF }, { 0x04C1, 0x04C2 }, { 0x04C3, 0x04C4 }, { 0x04C7, 0x04C8 }, { 0x04CB, 0x04CC }, { 0x04D0, 0x04D1 }, { 0x04D2, 0x04D3 }, { 0x04D4, 0x04D5 }, { 0x04D6, 0x04D7 }, { 0x04D8, 0x04D9 }, { 0x04DA, 
0x04DB }, { 0x04DC, 0x04DD }, { 0x04DE, 0x04DF }, { 0x04E0, 0x04E1 }, { 0x04E2, 0x04E3 }, { 0x04E4, 0x04E5 }, { 0x04E6, 0x04E7 }, { 0x04E8, 0x04E9 }, { 0x04EA, 0x04EB }, { 0x04EE, 0x04EF }, { 0x04F0, 0x04F1 }, { 0x04F2, 0x04F3 }, { 0x04F4, 0x04F5 }, { 0x04F8, 0x04F9 }, { 0x0531, 0x0561 }, { 0x0532, 0x0562 }, { 0x0533, 0x0563 }, { 0x0534, 0x0564 }, { 0x0535, 0x0565 }, { 0x0536, 0x0566 }, { 0x0537, 0x0567 }, { 0x0538, 0x0568 }, { 0x0539, 0x0569 }, { 0x053A, 0x056A }, { 0x053B, 0x056B }, { 0x053C, 0x056C }, { 0x053D, 0x056D }, { 0x053E, 0x056E }, { 0x053F, 0x056F }, { 0x0540, 0x0570 }, { 0x0541, 0x0571 }, { 0x0542, 0x0572 }, { 0x0543, 0x0573 }, { 0x0544, 0x0574 }, { 0x0545, 0x0575 }, { 0x0546, 0x0576 }, { 0x0547, 0x0577 }, { 0x0548, 0x0578 }, { 0x0549, 0x0579 }, { 0x054A, 0x057A }, { 0x054B, 0x057B }, { 0x054C, 0x057C }, { 0x054D, 0x057D }, { 0x054E, 0x057E }, { 0x054F, 0x057F }, { 0x0550, 0x0580 }, { 0x0551, 0x0581 }, { 0x0552, 0x0582 }, { 0x0553, 0x0583 }, { 0x0554, 0x0584 }, { 0x0555, 0x0585 }, { 0x0556, 0x0586 }, { 0x10A0, 0x10D0 }, { 0x10A1, 0x10D1 }, { 0x10A2, 0x10D2 }, { 0x10A3, 0x10D3 }, { 0x10A4, 0x10D4 }, { 0x10A5, 0x10D5 }, { 0x10A6, 0x10D6 }, { 0x10A7, 0x10D7 }, { 0x10A8, 0x10D8 }, { 0x10A9, 0x10D9 }, { 0x10AA, 0x10DA }, { 0x10AB, 0x10DB }, { 0x10AC, 0x10DC }, { 0x10AD, 0x10DD }, { 0x10AE, 0x10DE }, { 0x10AF, 0x10DF }, { 0x10B0, 0x10E0 }, { 0x10B1, 0x10E1 }, { 0x10B2, 0x10E2 }, { 0x10B3, 0x10E3 }, { 0x10B4, 0x10E4 }, { 0x10B5, 0x10E5 }, { 0x10B6, 0x10E6 }, { 0x10B7, 0x10E7 }, { 0x10B8, 0x10E8 }, { 0x10B9, 0x10E9 }, { 0x10BA, 0x10EA }, { 0x10BB, 0x10EB }, { 0x10BC, 0x10EC }, { 0x10BD, 0x10ED }, { 0x10BE, 0x10EE }, { 0x10BF, 0x10EF }, { 0x10C0, 0x10F0 }, { 0x10C1, 0x10F1 }, { 0x10C2, 0x10F2 }, { 0x10C3, 0x10F3 }, { 0x10C4, 0x10F4 }, { 0x10C5, 0x10F5 }, { 0x1E00, 0x1E01 }, { 0x1E02, 0x1E03 }, { 0x1E04, 0x1E05 }, { 0x1E06, 0x1E07 }, { 0x1E08, 0x1E09 }, { 0x1E0A, 0x1E0B }, { 0x1E0C, 0x1E0D }, { 0x1E0E, 0x1E0F }, { 0x1E10, 0x1E11 }, { 0x1E12, 0x1E13 }, { 0x1E14, 
0x1E15 }, { 0x1E16, 0x1E17 }, { 0x1E18, 0x1E19 }, { 0x1E1A, 0x1E1B }, { 0x1E1C, 0x1E1D }, { 0x1E1E, 0x1E1F }, { 0x1E20, 0x1E21 }, { 0x1E22, 0x1E23 }, { 0x1E24, 0x1E25 }, { 0x1E26, 0x1E27 }, { 0x1E28, 0x1E29 }, { 0x1E2A, 0x1E2B }, { 0x1E2C, 0x1E2D }, { 0x1E2E, 0x1E2F }, { 0x1E30, 0x1E31 }, { 0x1E32, 0x1E33 }, { 0x1E34, 0x1E35 }, { 0x1E36, 0x1E37 }, { 0x1E38, 0x1E39 }, { 0x1E3A, 0x1E3B }, { 0x1E3C, 0x1E3D }, { 0x1E3E, 0x1E3F }, { 0x1E40, 0x1E41 }, { 0x1E42, 0x1E43 }, { 0x1E44, 0x1E45 }, { 0x1E46, 0x1E47 }, { 0x1E48, 0x1E49 }, { 0x1E4A, 0x1E4B }, { 0x1E4C, 0x1E4D }, { 0x1E4E, 0x1E4F }, { 0x1E50, 0x1E51 }, { 0x1E52, 0x1E53 }, { 0x1E54, 0x1E55 }, { 0x1E56, 0x1E57 }, { 0x1E58, 0x1E59 }, { 0x1E5A, 0x1E5B }, { 0x1E5C, 0x1E5D }, { 0x1E5E, 0x1E5F }, { 0x1E60, 0x1E61 }, { 0x1E62, 0x1E63 }, { 0x1E64, 0x1E65 }, { 0x1E66, 0x1E67 }, { 0x1E68, 0x1E69 }, { 0x1E6A, 0x1E6B }, { 0x1E6C, 0x1E6D }, { 0x1E6E, 0x1E6F }, { 0x1E70, 0x1E71 }, { 0x1E72, 0x1E73 }, { 0x1E74, 0x1E75 }, { 0x1E76, 0x1E77 }, { 0x1E78, 0x1E79 }, { 0x1E7A, 0x1E7B }, { 0x1E7C, 0x1E7D }, { 0x1E7E, 0x1E7F }, { 0x1E80, 0x1E81 }, { 0x1E82, 0x1E83 }, { 0x1E84, 0x1E85 }, { 0x1E86, 0x1E87 }, { 0x1E88, 0x1E89 }, { 0x1E8A, 0x1E8B }, { 0x1E8C, 0x1E8D }, { 0x1E8E, 0x1E8F }, { 0x1E90, 0x1E91 }, { 0x1E92, 0x1E93 }, { 0x1E94, 0x1E95 }, { 0x1EA0, 0x1EA1 }, { 0x1EA2, 0x1EA3 }, { 0x1EA4, 0x1EA5 }, { 0x1EA6, 0x1EA7 }, { 0x1EA8, 0x1EA9 }, { 0x1EAA, 0x1EAB }, { 0x1EAC, 0x1EAD }, { 0x1EAE, 0x1EAF }, { 0x1EB0, 0x1EB1 }, { 0x1EB2, 0x1EB3 }, { 0x1EB4, 0x1EB5 }, { 0x1EB6, 0x1EB7 }, { 0x1EB8, 0x1EB9 }, { 0x1EBA, 0x1EBB }, { 0x1EBC, 0x1EBD }, { 0x1EBE, 0x1EBF }, { 0x1EC0, 0x1EC1 }, { 0x1EC2, 0x1EC3 }, { 0x1EC4, 0x1EC5 }, { 0x1EC6, 0x1EC7 }, { 0x1EC8, 0x1EC9 }, { 0x1ECA, 0x1ECB }, { 0x1ECC, 0x1ECD }, { 0x1ECE, 0x1ECF }, { 0x1ED0, 0x1ED1 }, { 0x1ED2, 0x1ED3 }, { 0x1ED4, 0x1ED5 }, { 0x1ED6, 0x1ED7 }, { 0x1ED8, 0x1ED9 }, { 0x1EDA, 0x1EDB }, { 0x1EDC, 0x1EDD }, { 0x1EDE, 0x1EDF }, { 0x1EE0, 0x1EE1 }, { 0x1EE2, 0x1EE3 }, { 0x1EE4, 0x1EE5 }, { 0x1EE6, 
0x1EE7 }, { 0x1EE8, 0x1EE9 }, { 0x1EEA, 0x1EEB }, { 0x1EEC, 0x1EED }, { 0x1EEE, 0x1EEF }, { 0x1EF0, 0x1EF1 }, { 0x1EF2, 0x1EF3 }, { 0x1EF4, 0x1EF5 }, { 0x1EF6, 0x1EF7 }, { 0x1EF8, 0x1EF9 }, { 0x1F08, 0x1F00 }, { 0x1F09, 0x1F01 }, { 0x1F0A, 0x1F02 }, { 0x1F0B, 0x1F03 }, { 0x1F0C, 0x1F04 }, { 0x1F0D, 0x1F05 }, { 0x1F0E, 0x1F06 }, { 0x1F0F, 0x1F07 }, { 0x1F18, 0x1F10 }, { 0x1F19, 0x1F11 }, { 0x1F1A, 0x1F12 }, { 0x1F1B, 0x1F13 }, { 0x1F1C, 0x1F14 }, { 0x1F1D, 0x1F15 }, { 0x1F28, 0x1F20 }, { 0x1F29, 0x1F21 }, { 0x1F2A, 0x1F22 }, { 0x1F2B, 0x1F23 }, { 0x1F2C, 0x1F24 }, { 0x1F2D, 0x1F25 }, { 0x1F2E, 0x1F26 }, { 0x1F2F, 0x1F27 }, { 0x1F38, 0x1F30 }, { 0x1F39, 0x1F31 }, { 0x1F3A, 0x1F32 }, { 0x1F3B, 0x1F33 }, { 0x1F3C, 0x1F34 }, { 0x1F3D, 0x1F35 }, { 0x1F3E, 0x1F36 }, { 0x1F3F, 0x1F37 }, { 0x1F48, 0x1F40 }, { 0x1F49, 0x1F41 }, { 0x1F4A, 0x1F42 }, { 0x1F4B, 0x1F43 }, { 0x1F4C, 0x1F44 }, { 0x1F4D, 0x1F45 }, { 0x1F59, 0x1F51 }, { 0x1F5B, 0x1F53 }, { 0x1F5D, 0x1F55 }, { 0x1F5F, 0x1F57 }, { 0x1F68, 0x1F60 }, { 0x1F69, 0x1F61 }, { 0x1F6A, 0x1F62 }, { 0x1F6B, 0x1F63 }, { 0x1F6C, 0x1F64 }, { 0x1F6D, 0x1F65 }, { 0x1F6E, 0x1F66 }, { 0x1F6F, 0x1F67 }, { 0x1F88, 0x1F80 }, { 0x1F89, 0x1F81 }, { 0x1F8A, 0x1F82 }, { 0x1F8B, 0x1F83 }, { 0x1F8C, 0x1F84 }, { 0x1F8D, 0x1F85 }, { 0x1F8E, 0x1F86 }, { 0x1F8F, 0x1F87 }, { 0x1F98, 0x1F90 }, { 0x1F99, 0x1F91 }, { 0x1F9A, 0x1F92 }, { 0x1F9B, 0x1F93 }, { 0x1F9C, 0x1F94 }, { 0x1F9D, 0x1F95 }, { 0x1F9E, 0x1F96 }, { 0x1F9F, 0x1F97 }, { 0x1FA8, 0x1FA0 }, { 0x1FA9, 0x1FA1 }, { 0x1FAA, 0x1FA2 }, { 0x1FAB, 0x1FA3 }, { 0x1FAC, 0x1FA4 }, { 0x1FAD, 0x1FA5 }, { 0x1FAE, 0x1FA6 }, { 0x1FAF, 0x1FA7 }, { 0x1FB8, 0x1FB0 }, { 0x1FB9, 0x1FB1 }, { 0x1FD8, 0x1FD0 }, { 0x1FD9, 0x1FD1 }, { 0x1FE8, 0x1FE0 }, { 0x1FE9, 0x1FE1 }, { 0x24B6, 0x24D0 }, { 0x24B7, 0x24D1 }, { 0x24B8, 0x24D2 }, { 0x24B9, 0x24D3 }, { 0x24BA, 0x24D4 }, { 0x24BB, 0x24D5 }, { 0x24BC, 0x24D6 }, { 0x24BD, 0x24D7 }, { 0x24BE, 0x24D8 }, { 0x24BF, 0x24D9 }, { 0x24C0, 0x24DA }, { 0x24C1, 0x24DB }, { 0x24C2, 
0x24DC }, { 0x24C3, 0x24DD }, { 0x24C4, 0x24DE }, { 0x24C5, 0x24DF }, { 0x24C6, 0x24E0 }, { 0x24C7, 0x24E1 }, { 0x24C8, 0x24E2 }, { 0x24C9, 0x24E3 }, { 0x24CA, 0x24E4 }, { 0x24CB, 0x24E5 }, { 0x24CC, 0x24E6 }, { 0x24CD, 0x24E7 }, { 0x24CE, 0x24E8 }, { 0x24CF, 0x24E9 }, { 0xFF21, 0xFF41 }, { 0xFF22, 0xFF42 }, { 0xFF23, 0xFF43 }, { 0xFF24, 0xFF44 }, { 0xFF25, 0xFF45 }, { 0xFF26, 0xFF46 }, { 0xFF27, 0xFF47 }, { 0xFF28, 0xFF48 }, { 0xFF29, 0xFF49 }, { 0xFF2A, 0xFF4A }, { 0xFF2B, 0xFF4B }, { 0xFF2C, 0xFF4C }, { 0xFF2D, 0xFF4D }, { 0xFF2E, 0xFF4E }, { 0xFF2F, 0xFF4F }, { 0xFF30, 0xFF50 }, { 0xFF31, 0xFF51 }, { 0xFF32, 0xFF52 }, { 0xFF33, 0xFF53 }, { 0xFF34, 0xFF54 }, { 0xFF35, 0xFF55 }, { 0xFF36, 0xFF56 }, { 0xFF37, 0xFF57 }, { 0xFF38, 0xFF58 }, { 0xFF39, 0xFF59 }, { 0xFF3A, 0xFF5A }, }; static int _find_upper(int ch) { int low = 0; int high = CAPS_LEN - 1; int middle; while (low <= high) { middle = (low + high) / 2; if (ch < caps_table[middle][0]) { high = middle - 1; //search low end of array } else if (caps_table[middle][0] < ch) { low = middle + 1; //search high end of array } else { return caps_table[middle][1]; } } return ch; } static int _find_lower(int ch) { int low = 0; int high = CAPS_LEN - 2; int middle; while (low <= high) { middle = (low + high) / 2; if (ch < reverse_caps_table[middle][0]) { high = middle - 1; //search low end of array } else if (reverse_caps_table[middle][0] < ch) { low = middle + 1; //search high end of array } else { return reverse_caps_table[middle][1]; } } return ch; } #endif // UCAPS_H zytrax-master/globals/value_stream.h000066400000000000000000000070521347722000700202010ustar00rootroot00000000000000 // // C++ Interface: value_stream // // Description: // // // Author: Juan Linietsky , (C) 2005 // // Copyright: See COPYING file that comes with this distribution // // #ifndef VALUE_STREAM_H #define VALUE_STREAM_H #include "error_macros.h" #include "typedefs.h" #include "vector.h" template class ValueStream { private: struct Value { T 
pos;
		V val;
	};

	Vector<Value> stream;

	_FORCE_INLINE_ int find_internal(T p_pos, bool &p_exact) const;

public:
	int insert(T p_pos, V p_value); /* Insert, and return the position at which it was inserted */
	int find_exact(T p_pos) const; /* return INVALID_STREAM_INDEX if pos is not exact */
	int find(T p_pos) const; /* get index to pos previous or equal to value, if nothing less than it, return -1 */

	_FORCE_INLINE_ const V &operator[](int p_idx) const; /* return a const reference to const V& get_index_value(int p_idx); return a const reference to null if index is invalid! */
	_FORCE_INLINE_ V &operator[](int p_idx); /* return a reference to const V& get_index_value(int p_idx); return a const reference to null if index is invalid! */
	const T &get_pos(int p_idx) const; /* return a const reference to const T& get_pos(int p_idx); return a const reference to null if index is invalid! */

	_FORCE_INLINE_ int size() const;
	void erase(int p_index);
	void clear();
};

template <class T, class V>
int ValueStream<T, V>::find_internal(T p_pos, bool &p_exact) const {
	/* The core of this class, the binary search */
	p_exact = false;
	if (stream.empty())
		return -1;

	int low = 0;
	int high = stream.size() - 1;
	int middle;
	const Value *a = &stream[0];

	while (low <= high) {
		middle = (low + high) / 2;
		if (p_pos == a[middle].pos) { //match
			p_exact = true;
			return middle;
		} else if (p_pos < a[middle].pos)
			high = middle - 1; //search low end of array
		else
			low = middle + 1; //search high end of array
	}

	if (a[middle].pos > p_pos)
		middle--;

	return middle;
}

template <class T, class V>
int ValueStream<T, V>::find(T p_pos) const {
	bool _e;
	return find_internal(p_pos, _e);
}

template <class T, class V>
int ValueStream<T, V>::insert(T p_pos, V p_value) {
	Value new_v;
	new_v.pos = p_pos;
	new_v.val = p_value;

	bool exact;
	int pos = find_internal(p_pos, exact);

	if (!exact) { /* no exact position found, make room */
		pos++;
		if (pos == stream.size())
			stream.push_back(new_v); //it's at the end, just push it back
		else
			stream.insert(pos, new_v);
	} else {
		stream[pos] = new_v; /* Overwrite, since exact
position */
	}

	return pos;
}

template <class T, class V>
const V &ValueStream<T, V>::operator[](int p_index) const {
	ERR_FAIL_INDEX_V(p_index, stream.size(), *((V *)(NULL)));
	return stream[p_index].val;
}

template <class T, class V>
V &ValueStream<T, V>::operator[](int p_index) {
	ERR_FAIL_INDEX_V(p_index, stream.size(), *((V *)(NULL)));
	return stream[p_index].val;
}

template <class T, class V>
const T &ValueStream<T, V>::get_pos(int p_index) const {
	ERR_FAIL_INDEX_V(p_index, stream.size(), *((T *)(NULL)));
	return stream[p_index].pos;
}

template <class T, class V>
int ValueStream<T, V>::find_exact(T p_pos) const {
	bool exact;
	int pos = find_internal(p_pos, exact);
	if (!exact)
		return -1;
	return pos;
}

template <class T, class V>
int ValueStream<T, V>::size() const {
	return stream.size();
}

template <class T, class V>
void ValueStream<T, V>::erase(int p_index) {
	ERR_FAIL_INDEX(p_index, stream.size());
	stream.remove(p_index);
}

template <class T, class V>
void ValueStream<T, V>::clear() {
	stream.clear();
}

#endif
zytrax-master/globals/vector.h
//
// C++ Interface: vector
//
// Description:
//
//
// Author: Juan Linietsky , (C) 2007
//
// Copyright: See COPYING file that comes with this distribution
//
//

#ifndef VECTOR_H
#define VECTOR_H

#include "error_list.h"
#include <stdlib.h>

/**
 * @class Vector
 * @author Juan Linietsky
 * Vector container. Regular Vector Container. Use with care and for smaller arrays when possible. Use DVector for large arrays.
*/

#include "error_macros.h"

template <class T>
class Vector {

	T* ptr;
	int element_count;

	void copy_from(const Vector& p_from);

public:
	void clear() { resize(0); }

	inline int size() const;
	inline bool empty() const;

	Error resize(int p_size);
	bool push_back(T p_elem);
	void remove(int p_index);

	template <class T_val>
	int find(T_val& p_val) const;

	void set(int p_index,T p_elem);
	T get(int p_index) const;

	inline T& operator[](int p_index) {
		if (p_index<0 || p_index>=element_count) {
			T& aux=*((T*)0); //null return
			ERR_FAIL_COND_V(p_index<0 || p_index>=element_count,aux);
		}
		return ptr[p_index];
	}

	inline const T& operator[](int p_index) const {
		if (p_index<0 || p_index>=element_count) {
			const T& aux=*((T*)0); //null return
			ERR_FAIL_COND_V(p_index<0 || p_index>=element_count,aux);
		}
		return ptr[p_index];
	}

	Error insert(int p_pos,const T& p_val);

	void operator=(const Vector& p_from);
	Vector(const Vector& p_from);
	Vector();
	~Vector();
};

template <class T>
inline bool Vector<T>::empty() const {
	return element_count==0;
}

template <class T>
inline int Vector<T>::size() const {
	return element_count;
}

template <class T>
template <class T_val>
int Vector<T>::find(T_val& p_val) const {
	int ret = -1;
	if (element_count == 0)
		return ret;

	for (int i=0; i<element_count; i++) {
		if (ptr[i]==p_val) {
			ret=i;
			break;
		}
	}

	return ret;
}

template <class T>
Error Vector<T>::resize(int p_size) {

	ERR_FAIL_COND_V(p_size<0,ERR_INVALID_PARAMETER);

	if (p_size>element_count) {
		//create elements
		//int new_elems=p_size-element_count;

		if (element_count==0) {
			ptr = (T*)malloc(p_size*sizeof(T));
			ERR_FAIL_COND_V( !ptr ,ERR_OUT_OF_MEMORY);
		} else {
			T *ptrnew = (T*)realloc(ptr,p_size*sizeof(T));
			ERR_FAIL_COND_V( !ptrnew ,ERR_OUT_OF_MEMORY);
			ptr=ptrnew;
		}

		for (int i=element_count;i<p_size;i++) {
			new (&ptr[i]) T(); //construct the new elements in place
		}

		element_count=p_size;

	} else if (p_size<element_count) {
		//destroy the no longer needed elements
		for (int i=p_size;i<element_count;i++) {
			ptr[i].~T();
		}

		if (p_size>0) {
			T *ptrnew = (T*)realloc(ptr,p_size*sizeof(T));
			ERR_FAIL_COND_V( !ptrnew ,ERR_OUT_OF_MEMORY);
			ptr=ptrnew;
		} else {
			free( ptr );
		}

		element_count=p_size;
	}

	return OK;
}

template <class T>
void Vector<T>::set(int p_index,T p_elem) {
	operator[](p_index)=p_elem;
}

template <class T>
T Vector<T>::get(int p_index) const {
	return operator[](p_index);
}

template <class T>
bool Vector<T>::push_back(T p_elem) {
	ERR_FAIL_COND_V(
resize(element_count+1), true )
	set(element_count-1,p_elem);

	return false;
}

template <class T>
void Vector<T>::remove(int p_index) {
	ERR_FAIL_INDEX(p_index, element_count);
	for (int i=p_index; i<element_count-1; i++) {
		set(i, get(i+1));
	}
	resize(element_count-1);
}

template <class T>
void Vector<T>::copy_from(const Vector& p_from) {
	resize(p_from.size());
	for (int i=0;i<p_from.size();i++) {
		set(i, p_from.get(i));
	}
}

template <class T>
void Vector<T>::operator=(const Vector& p_from) {
	copy_from(p_from);
}

template <class T>
Error Vector<T>::insert(int p_pos,const T& p_val) {
	ERR_FAIL_INDEX_V(p_pos,size()+1,ERR_INVALID_PARAMETER);
	resize(size()+1);
	for (int i=(size()-1);i>p_pos;i--)
		set( i, get(i-1) );
	set( p_pos, p_val );

	return OK;
}

template <class T>
Vector<T>::Vector(const Vector& p_from) {
	ptr=NULL;
	element_count=0;
	copy_from( p_from );
}

template <class T>
Vector<T>::Vector() {
	ptr=NULL;
	element_count=0;
}

template <class T>
Vector<T>::~Vector() {
	resize(0);
}

#endif
zytrax-master/gui/
zytrax-master/gui/SCsub

import os

try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO

Import('env');
Export('env');

targets=[]

def make_gui_icons_action(target, source, env):

    dst = target[0]
    png_icons = source

    icons_string = StringIO()
    icons_sizes = ""

    for f in png_icons:
        fname = str(f)

        icons_string.write('\t"')
        size=0
        with open(fname, 'rb') as pngf:
            b = pngf.read(1)
            while(len(b) == 1):
                icons_string.write("\\" + str(hex(ord(b)))[1:])
                size+=1
                b = pngf.read(1)

        if (icons_sizes!=""):
            icons_sizes+=","
        icons_sizes+=str(size)

        icons_string.write('"')
        if fname != png_icons[-1]:
            icons_string.write(",")
        icons_string.write('\n')

    s = StringIO()
    s.write("/* THIS FILE IS GENERATED DO NOT EDIT */\n")
    s.write("#ifndef _EDITOR_ICONS_H\n")
    s.write("#define _EDITOR_ICONS_H\n")
    s.write("static const int gui_icons_count = {};\n".format(len(png_icons)))
    s.write("static const char *gui_icons_sources[] = {\n")
    s.write(icons_string.getvalue())
    s.write('};\n\n')
    s.write("static const int gui_icons_sizes[] = {\n")
    s.write("\t"+icons_sizes+"\n")
    s.write('};\n\n')
    s.write("static const char *gui_icons_names[] = {\n")

    # this is used to store the indices of thumbnail icons
    thumb_medium_indices = [];
    index = 0
    for f in png_icons:
        fname = str(f)
        icon_name = os.path.basename(fname)[5:-4].title().replace("_", "")
        # some special cases
        s.write('\t"{0}"'.format(icon_name))
        if fname != png_icons[-1]:
            s.write(",")
        s.write('\n')
        index += 1

    s.write('};\n')
    s.write("#endif\n")

    with open(str(dst), "w") as f:
        f.write(s.getvalue())

    s.close()
    icons_string.close()

env.add_sources(targets,"*.cpp")

make_gui_icons_builder = Builder(action=make_gui_icons_action, suffix='.h', src_suffix='.png')
env['BUILDERS']['MakeEditorIconsBuilder'] = make_gui_icons_builder
env.Alias('gui_icons', [env.MakeEditorIconsBuilder('#gui/gui_icons.gen.h', Glob("icons/*.png"))])

env.libs+=env.Library('gui', targets);
zytrax-master/gui/add_effect_dialog.cpp
#include "add_effect_dialog.h"

void AddEffectDialog::_selection_changed() {
	Gtk::TreeModel::iterator iter = tree_selection->get_selected();
	if (!iter)
		return;
	Gtk::TreeModel::Row row = *iter;
	int selected = row[model_columns.index];

	const AudioEffectInfo *info = fx_factory->get_audio_effect(selected);

	String text;
	text = "Name: " + info->caption + "\n";
	text += String() + "Type: " + (info->synth ?
"Synth" : "Effect") + "\n"; text += "Provider: " + info->provider_caption + "\n"; if (info->category != String()) text += "Category: " + info->category + "\n"; if (info->author != String()) text += "Author: " + info->author + "\n"; if (info->description != String()) text += "Description:\n" + info->description; description_text->set_text(text.utf8().get_data()); } void AddEffectDialog::_activated(const Gtk::TreeModel::Path &, Gtk::TreeViewColumn *p_column) { response(Gtk::RESPONSE_OK); } int AddEffectDialog::get_selected_effect_index() { Gtk::TreeModel::iterator iter = tree_selection->get_selected(); if (!iter) return -1; Gtk::TreeModel::Row row = *iter; return row[model_columns.index]; } void AddEffectDialog::update_effect_list() { list_store->clear(); for (int i = 0; i < fx_factory->get_audio_effect_count(); i++) { Gtk::TreeModel::iterator iter = list_store->append(); Gtk::TreeModel::Row row = *iter; row[model_columns.name] = fx_factory->get_audio_effect(i)->caption.utf8().get_data(); row[model_columns.provider] = fx_factory->get_audio_effect(i)->provider_caption.utf8().get_data(); row[model_columns.index] = i; } } AddEffectDialog::AddEffectDialog(AudioEffectFactory *p_fx_factory) : Gtk::MessageDialog("", false /* use_markup */, Gtk::MESSAGE_QUESTION, Gtk::BUTTONS_OK_CANCEL) { set_title("Choose Effect"); fx_factory = p_fx_factory; Gtk::Box &vbox = *get_vbox(); //add(vbox); description_text = Gtk::TextBuffer::create(); vbox.pack_start(scroll, Gtk::PACK_EXPAND_WIDGET); scroll.add(tree); description_label.set_label("Effect Description"); vbox.pack_start(description_label, Gtk::PACK_SHRINK); vbox.pack_start(description, Gtk::PACK_SHRINK); description.set_buffer(description_text); show_all_children(); vbox.get_children()[0]->hide(); vbox.set_spacing(0); list_store = Gtk::ListStore::create(model_columns); tree_selection = tree.get_selection(); tree_selection->signal_changed().connect(sigc::mem_fun(this, &AddEffectDialog::_selection_changed)); 
	tree.set_model(list_store);
	tree.append_column("Effect", model_columns.name);
	tree.get_column(0)->set_expand(true);
	tree.append_column("Provider", model_columns.provider);
	tree.get_column(1)->set_expand(false);
	tree.signal_row_activated().connect(sigc::mem_fun(this, &AddEffectDialog::_activated));

	Glib::RefPtr<Gdk::Screen> screen = Gdk::Screen::get_default();
	int width = screen->get_width();
	int height = screen->get_height();
	set_default_size(width / 4, height / 2);
}
zytrax-master/gui/add_effect_dialog.h
#ifndef TRACK_SETTINGS_H
#define TRACK_SETTINGS_H

#include <gtkmm.h>

#include "engine/song.h"

class AddEffectDialog : public Gtk::MessageDialog {

	class ModelColumns : public Gtk::TreeModelColumnRecord {
	public:
		ModelColumns() {
			add(name);
			add(provider);
			add(index);
		}

		Gtk::TreeModelColumn<Glib::ustring> name;
		Gtk::TreeModelColumn<Glib::ustring> provider;
		Gtk::TreeModelColumn<int> index;
	};

	ModelColumns model_columns;

	AudioEffectFactory *fx_factory;

	Glib::RefPtr<Gtk::ListStore> list_store;
	Glib::RefPtr<Gtk::TreeSelection> tree_selection;

	Gtk::ScrolledWindow scroll;
	Gtk::TreeView tree;
	Gtk::Label description_label;
	Gtk::TextView description;
	Glib::RefPtr<Gtk::TextBuffer> description_text;

	void _selection_changed();
	void _activated(const Gtk::TreeModel::Path &, Gtk::TreeViewColumn *p_column);

public:
	void update_effect_list();
	int get_selected_effect_index();

	AddEffectDialog(AudioEffectFactory *p_fx_factory);
};

#endif // TRACK_SETTINGS_H
zytrax-master/gui/color_theme.cpp
#include "color_theme.h"

Gdk::RGBA Theme::make_rgba(uint8_t p_red, uint8_t p_green, uint8_t p_blue, uint8_t p_alpha) {
	Gdk::RGBA rgba;
	rgba.set_red(float(p_red) / 255.0);
	rgba.set_green(float(p_green) / 255.0);
	rgba.set_blue(float(p_blue) / 255.0);
	rgba.set_alpha(float(p_alpha) / 255.0);
	return rgba;
}

const char *Theme::color_names[Theme::COLOR_MAX]{
	"Background",
	"Focus",
	"TrackSeparator",
	"Cursor",
	"RowBar",
	"RowBeat",
	"RowSubBeat",
	"PatternBg",
"PatternBgRackSelected", "Note", "HlBar", "HlBeat", "HlBarSelected", "HlBeatSelected", "MainBgSelected", "NoteSelected", "NoteNofit", "AutomationValue", "AutomationValueSelected", "AutomationHlBar", "AutomationHlBeat", "AutomationHlBarSelected", "AutomationHlBeatSelected", "AutomationValueNofit", "AutomationPoint", "TrackName", "AutomationName" }; void Theme::select_font_face(const Cairo::RefPtr &cr) { Pango::FontDescription font_desc(font.utf8().get_data()); cr->select_font_face(font_desc.get_family(), Cairo::FONT_SLANT_NORMAL, font_desc.get_weight() > Pango::WEIGHT_MEDIUM ? Cairo::FONT_WEIGHT_BOLD : Cairo::FONT_WEIGHT_NORMAL); cr->set_font_size(font_desc.get_size() / Pango::SCALE); } Theme::Theme() { colors[COLOR_BACKGROUND] = make_rgba(0, 0, 0); colors[COLOR_FOCUS] = make_rgba(255, 255, 255); colors[COLOR_PATTERN_EDITOR_TRACK_SEPARATOR] = make_rgba(128, 128, 128); colors[COLOR_PATTERN_EDITOR_CURSOR] = make_rgba(255, 255, 255); colors[COLOR_PATTERN_EDITOR_ROW_BAR] = make_rgba(220, 220, 255); colors[COLOR_PATTERN_EDITOR_ROW_BEAT] = make_rgba(110, 110, 140); colors[COLOR_PATTERN_EDITOR_ROW_SUB_BEAT] = make_rgba(60, 60, 80); colors[COLOR_PATTERN_EDITOR_BG] = make_rgba(26, 26, 42); colors[COLOR_PATTERN_EDITOR_BG_RACK_SELECTED] = make_rgba(36, 36, 52); colors[COLOR_PATTERN_EDITOR_NOTE] = make_rgba(181, 181, 234); colors[COLOR_PATTERN_EDITOR_BG_SELECTED] = make_rgba(80, 80, 100); colors[COLOR_PATTERN_EDITOR_NOTE_SELECTED] = make_rgba(255, 255, 255); colors[COLOR_PATTERN_EDITOR_HL_BAR] = make_rgba(61, 61, 95); colors[COLOR_PATTERN_EDITOR_HL_BEAT] = make_rgba(41, 41, 61); colors[COLOR_PATTERN_EDITOR_HL_BAR_SELECTED] = make_rgba(110, 110, 150); colors[COLOR_PATTERN_EDITOR_HL_BEAT_SELECTED] = make_rgba(95, 95, 120); colors[COLOR_PATTERN_EDITOR_NOTE_NOFIT] = make_rgba(144, 144, 169); colors[COLOR_PATTERN_EDITOR_AUTOMATION_VALUE] = make_rgba(217, 217, 180); colors[COLOR_PATTERN_EDITOR_AUTOMATION_VALUE_SELECTED] = make_rgba(255, 255, 210); 
	colors[COLOR_PATTERN_EDITOR_AUTOMATION_HL_BAR] = make_rgba(87, 87, 65);
	colors[COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT] = make_rgba(63, 63, 49);
	colors[COLOR_PATTERN_EDITOR_AUTOMATION_HL_BAR_SELECTED] = make_rgba(140, 140, 100);
	colors[COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT_SELECTED] = make_rgba(119, 90, 90);
	colors[COLOR_PATTERN_EDITOR_AUTOMATION_VALUE_NOFIT] = make_rgba(205, 205, 190);
	colors[COLOR_PATTERN_EDITOR_AUTOMATION_POINT] = make_rgba(251, 246, 220);
	colors[COLOR_PATTERN_EDITOR_TRACK_NAME] = make_rgba(181, 181, 234);
	colors[COLOR_PATTERN_EDITOR_AUTOMATION_NAME] = make_rgba(217, 217, 180);

	// fonts[FONT_PATTERN].face = "FreeMono";
	font = "Consolas Bold 15";

	constants[CONSTANT_PATTERN_EDITOR_TRACK_SEPARATION] = 5;
	constants[CONSTANT_PATTERN_EDITOR_COLUMN_SEPARATION] = 10;

	color_scheme = COLOR_SCHEME_DEFAULT;
}
zytrax-master/gui/color_theme.h
#ifndef COLOR_THEME_H
#define COLOR_THEME_H

#include "globals/rstring.h"
#include <gtkmm.h>

struct Theme {

	enum {
		COLOR_BACKGROUND,
		COLOR_FOCUS,
		COLOR_PATTERN_EDITOR_TRACK_SEPARATOR,
		COLOR_PATTERN_EDITOR_CURSOR,
		COLOR_PATTERN_EDITOR_ROW_BAR,
		COLOR_PATTERN_EDITOR_ROW_BEAT,
		COLOR_PATTERN_EDITOR_ROW_SUB_BEAT,
		COLOR_PATTERN_EDITOR_BG,
		COLOR_PATTERN_EDITOR_BG_RACK_SELECTED,
		COLOR_PATTERN_EDITOR_NOTE,
		COLOR_PATTERN_EDITOR_HL_BAR,
		COLOR_PATTERN_EDITOR_HL_BEAT,
		COLOR_PATTERN_EDITOR_HL_BAR_SELECTED,
		COLOR_PATTERN_EDITOR_HL_BEAT_SELECTED,
		COLOR_PATTERN_EDITOR_BG_SELECTED,
		COLOR_PATTERN_EDITOR_NOTE_SELECTED,
		COLOR_PATTERN_EDITOR_NOTE_NOFIT,
		COLOR_PATTERN_EDITOR_AUTOMATION_VALUE,
		COLOR_PATTERN_EDITOR_AUTOMATION_VALUE_SELECTED,
		COLOR_PATTERN_EDITOR_AUTOMATION_HL_BAR,
		COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT,
		COLOR_PATTERN_EDITOR_AUTOMATION_HL_BAR_SELECTED,
		COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT_SELECTED,
		COLOR_PATTERN_EDITOR_AUTOMATION_VALUE_NOFIT,
		COLOR_PATTERN_EDITOR_AUTOMATION_POINT,
		COLOR_PATTERN_EDITOR_TRACK_NAME,
		COLOR_PATTERN_EDITOR_AUTOMATION_NAME,
COLOR_MAX }; static const char *color_names[COLOR_MAX]; Gdk::RGBA colors[COLOR_MAX]; String font; void select_font_face(const Cairo::RefPtr &cr); enum { CONSTANT_PATTERN_EDITOR_TRACK_SEPARATION, CONSTANT_PATTERN_EDITOR_COLUMN_SEPARATION, CONSTANT_MAX }; int constants[CONSTANT_MAX]; static Gdk::RGBA make_rgba(uint8_t p_red, uint8_t p_green, uint8_t p_blue, uint8_t p_alpha = 255); enum ColorScheme { COLOR_SCHEME_DEFAULT, COLOR_SCHEME_DARK, }; ColorScheme color_scheme; Theme(); }; #endif // COLOR_THEME_H zytrax-master/gui/effect_editor.cpp000066400000000000000000000146071347722000700200140ustar00rootroot00000000000000#include "effect_editor.h" #include "settings_dialog.h" void EffectEditor::edit(AudioEffect *p_effect, Track *p_track, Gtk::Widget *p_editor) { effect = p_effect; track = p_track; editor = p_editor; effect_vbox.pack_start(*p_editor, Gtk::PACK_EXPAND_WIDGET); p_editor->show(); update_automations(); } void EffectEditor::_automation_toggled(const Glib::ustring &path) { if (updating_automation) { return; } updating_automation = true; Gtk::TreeIter iter = list_store->get_iter(path); ERR_FAIL_COND(!iter); bool visible = (*iter)[model_columns.visible]; (*iter)[model_columns.visible] = !visible; toggle_automation_visibility.emit(track, effect, (*iter)[model_columns.index], !visible); updating_automation = false; } void EffectEditor::update_automations() { if (updating_automation) { return; } updating_automation = true; list_store->clear(); for (int i = 0; i < effect->get_control_port_count(); i++) { ControlPort *port = effect->get_control_port(i); if (!port->is_visible()) { continue; } Gtk::TreeModel::iterator iter = list_store->append(); Gtk::TreeModel::Row row = *iter; bool visible = false; for (int j = 0; j < track->get_automation_count(); j++) { if (track->get_automation(j)->get_control_port() == port) { visible = true; break; } } row[model_columns.name] = port->get_name().utf8().get_data(); row[model_columns.visible] = visible; //row[model_columns.commands] 
= command_list_store; if (port->get_command() == 0) { row[model_columns.command] = ""; } else { char s[2] = { char('A' + (port->get_command() - 'a')), 0 }; row[model_columns.command] = s; } row[model_columns.index] = i; } updating_automation = false; } void EffectEditor::_command_edited(const Glib::ustring &path, const Glib::ustring &value) { Gtk::TreeIter iter = list_store->get_iter(path); ERR_FAIL_COND(!iter); Glib::ustring us = value; if (value.length() == 0) { return; } updating_automation = true; char valc = value[0]; if (valc == '<') { valc = 0; //unselected (*iter)[model_columns.command] = ""; } else { valc = 'a' + (valc - 'A'); //unselected (*iter)[model_columns.command] = value; } select_automation_command.emit(track, effect, (*iter)[model_columns.index], int(valc)); updating_automation = false; } bool EffectEditor::_automation_menu_timeout() { Gtk::TreeModel::iterator iter = tree_selection->get_selected(); if (!iter) return false; Gtk::TreeModel::Row row = *iter; int selected = row[model_columns.index]; if (effect->get_control_port(selected)->get_command() == 0) { automation_popup_item.set_sensitive(false); } else { automation_popup_item.set_sensitive(true); } return false; } void EffectEditor::_automation_rmb(GdkEventButton *button_event) { if ((button_event->type == GDK_BUTTON_PRESS) && (button_event->button == 3)) { automation_popup.popup_at_pointer((GdkEvent *)button_event); //we can only override this only BEFORE the event, so selection is wrong, adjust sensitivity in a timer :( menu_timer = Glib::signal_timeout().connect(sigc::mem_fun(*this, &EffectEditor::_automation_menu_timeout), 1, Glib::PRIORITY_DEFAULT); } } void EffectEditor::_automation_menu_action() { Gtk::TreeModel::iterator iter = tree_selection->get_selected(); if (!iter) return; Gtk::TreeModel::Row row = *iter; int selected = row[model_columns.index]; ERR_FAIL_COND(effect->get_control_port(selected)->get_command() == 0); String identifier = 
effect->get_control_port(selected)->get_identifier(); SettingsDialog::add_default_command(identifier, effect->get_control_port(selected)->get_command()); } EffectEditor::EffectEditor() { track = NULL; effect = NULL; add(main_vbox); main_vbox.pack_start(split, Gtk::PACK_EXPAND_WIDGET); split.pack1(automation_scroll, false, false); automation_scroll.add(tree); list_store = Gtk::ListStore::create(model_columns); tree_selection = tree.get_selection(); tree.set_model(list_store); column.set_title("Automations"); column.pack_start(cell_render_check, false); column.pack_start(cell_render_text, true); cell_render_check.signal_toggled().connect(sigc::mem_fun(*this, &EffectEditor::_automation_toggled)); column.add_attribute(cell_render_check.property_active(), model_columns.visible); column.add_attribute(cell_render_text.property_text(), model_columns.name); tree.set_model(list_store); tree.append_column(column); tree.get_column(0)->set_expand(true); command_list_store = Gtk::ListStore::create(model_columns.command_model_columns); { { Gtk::TreeModel::iterator iter = command_list_store->append(); Gtk::TreeModel::Row row = *iter; row[model_columns.command_model_columns.name] = ""; row[model_columns.command_model_columns.index] = 0; } for (int i = 'a'; i <= 'z'; i++) { Gtk::TreeModel::iterator iter = command_list_store->append(); Gtk::TreeModel::Row row = *iter; const char s[2] = { char('A' + (i - 'a')), 0 }; row[model_columns.command_model_columns.name] = s; row[model_columns.command_model_columns.index] = i; } } column2.set_title("Command"); column2.pack_start(cell_render_command, false); column2.add_attribute(cell_render_command.property_text(), model_columns.command); cell_render_command.signal_edited().connect(sigc::mem_fun(*this, &EffectEditor::_command_edited)); cell_render_command.property_model() = command_list_store; cell_render_command.property_text_column() = 0; cell_render_command.property_editable() = true; cell_render_command.property_has_entry() = false; 
cell_render_command.set_visible(true); tree.append_column(column2); tree.get_column(1)->set_expand(false); tree.set_can_focus(false); //tree_selection->set_mode(Gtk::SELECTION_NONE); tree.signal_button_press_event().connect_notify(sigc::mem_fun(*this, &EffectEditor::_automation_rmb)); automation_popup_item.set_label("Make Command Default"); automation_popup_item.signal_activate().connect(sigc::mem_fun(*this, &EffectEditor::_automation_menu_action)); automation_popup_item.show(); automation_popup.append(automation_popup_item); //////////////// split.pack2(effect_vbox, true, false); show_all_children(); Glib::RefPtr screen = Gdk::Screen::get_default(); //automation_scroll.set_size_request(screen->get_height() / 5, screen->get_height() / 4); automation_scroll.set_propagate_natural_width(true); automation_scroll.set_policy(Gtk::POLICY_NEVER, Gtk::POLICY_ALWAYS); updating_automation = false; } zytrax-master/gui/effect_editor.h000066400000000000000000000045521347722000700174570ustar00rootroot00000000000000#ifndef EFFECT_EDITOR_H #define EFFECT_EDITOR_H #include "engine/song.h" #include class EffectEditor : public Gtk::Window { class ModelColumns : public Gtk::TreeModelColumnRecord { public: //GTK is beyond bizarre at this point class CommandModelColumns : public Gtk::TreeModelColumnRecord { public: CommandModelColumns() { add(name); add(index); } Gtk::TreeModelColumn name; Gtk::TreeModelColumn index; }; CommandModelColumns command_model_columns; ModelColumns() { add(name); add(visible); add(command); add(index); } Gtk::TreeModelColumn name; Gtk::TreeModelColumn visible; Gtk::TreeModelColumn command; Gtk::TreeModelColumn index; }; Gtk::VBox main_vbox; Gtk::HPaned split; ModelColumns model_columns; AudioEffectFactory *fx_factory; Glib::RefPtr list_store; Glib::RefPtr tree_selection; Gtk::ScrolledWindow automation_scroll; Gtk::CellRendererToggle cell_render_check; Gtk::CellRendererCombo cell_render_command; Gtk::CellRendererText cell_render_text; Gtk::TreeViewColumn column; 
Gtk::TreeViewColumn column2; Gtk::TreeView tree; Gtk::VBox effect_vbox; Gtk::Widget *editor; Song *song; int track_index; Track *track; AudioEffect *effect; Glib::RefPtr command_list_store; //Glib::RefPtr tree_selection; void _automation_toggled(const Glib::ustring &path); void _command_edited(const Glib::ustring &path, const Glib::ustring &value); bool updating_automation; void _automation_rmb(GdkEventButton *button); bool _automation_menu_timeout(); void _automation_menu_action(); Gtk::Menu automation_popup; Gtk::MenuItem automation_popup_item; //because GTK is horrible sigc::connection menu_timer; //hide on escape virtual bool on_key_press_event(GdkEventKey *key_event) { if (key_event->keyval == GDK_KEY_Escape) { hide(); } return false; } public: sigc::signal4 toggle_automation_visibility; sigc::signal4 select_automation_command; void update_automations(); void edit(AudioEffect *p_effect, Track *p_track, Gtk::Widget *p_editor); EffectEditor(); }; typedef Gtk::Widget *(*EffectEditorPluginFunc)(AudioEffect *, EffectEditor *); #endif // EFFECT_EDITOR_H zytrax-master/gui/effect_editor_default.cpp000066400000000000000000000116151347722000700215140ustar00rootroot00000000000000#include "effect_editor_default.h" void EffectEditorDefault::_combo_changed(int p_idx) { ERR_FAIL_COND(!combos.has(p_idx)); Gtk::TreeModel::iterator iter = combos[p_idx].combo->get_active(); if (iter) { Gtk::TreeModel::Row row = *iter; if (row) { //Get the data for the selected row, using our knowledge of the tree //model: int id = row[model_columns.index]; effect->get_control_port(p_idx)->set(float(id)); } } } void EffectEditorDefault::_scale_changed(int p_idx) { ERR_FAIL_COND(!scales.has(p_idx)); effect->get_control_port(p_idx)->set(scales[p_idx]->get_adjustment()->get_value()); } void EffectEditorDefault::_toggle_clicked(int p_idx) { ERR_FAIL_COND(!buttons.has(p_idx)); effect->get_control_port(p_idx)->set(buttons[p_idx]->get_active() ? 
1.0 : 0.0); } EffectEditorDefault::EffectEditorDefault(AudioEffect *p_effect) { effect = p_effect; bool natural_height = p_effect->get_control_port_count() <= 10; set_propagate_natural_width(true); set_propagate_natural_height(natural_height); set_policy(Gtk::POLICY_NEVER, natural_height ? Gtk::POLICY_NEVER : Gtk::POLICY_ALWAYS); effect_grid.set_margin_left(8); effect_grid.set_margin_right(8); effect_grid.set_margin_top(8); effect_grid.set_margin_bottom(8); effect_grid.set_row_spacing(2); effect_grid.set_column_spacing(8); effect_grid.set_hexpand(true); effect_grid.set_vexpand(true); Glib::RefPtr screen = Gdk::Screen::get_default(); int width = screen->get_width(); int height = screen->get_height(); set_size_request(width / 4, natural_height ? 1 : height / 3); add(effect_grid); for (int i = 0; i < p_effect->get_control_port_count(); i++) { ControlPort *cp = p_effect->get_control_port(i); if (!cp->is_visible()) { continue; } switch (cp->get_hint()) { case ControlPort::HINT_RANGE: case ControlPort::HINT_RANGE_NORMALIZED: { Gtk::Label *name_label = new Gtk::Label; name_label->set_text(cp->get_name().utf8().get_data()); effect_grid.attach(*name_label, 0, i, 1, 1); Gtk::HScale *scale = new Gtk::HScale; scale->set_adjustment(Gtk::Adjustment::create(cp->get(), cp->get_min(), cp->get_max(), cp->get_step())); //geez GTK, you could guess this from the step.. 
if (cp->get_step() >= 0.99) { scale->set_digits(0); } else if (cp->get_step() >= 0.099) { scale->set_digits(1); } else if (cp->get_step() >= 0.0099) { scale->set_digits(2); } else { scale->set_digits(3); } scale->set_hexpand(true); effect_grid.attach(*scale, 1, i, 1, 1); if (cp->get_hint() == ControlPort::HINT_RANGE_NORMALIZED) { Gtk::Label *value_label = new Gtk::Label; value_label->set_text(cp->get_value_as_text().utf8().get_data()); effect_grid.attach(*value_label, 2, i, 1, 1); widgets.push_back(value_label); labels[i] = value_label; scale->set_draw_value(false); } scales[i] = scale; widgets.push_back(name_label); widgets.push_back(scale); scale->get_adjustment()->signal_value_changed().connect(sigc::bind(sigc::mem_fun(*this, &EffectEditorDefault::_scale_changed), i)); } break; case ControlPort::HINT_ENUM: { Gtk::Label *name_label = new Gtk::Label; name_label->set_text(cp->get_name().utf8().get_data()); effect_grid.attach(*name_label, 0, i, 1, 1); combos[i] = Combo(); Combo &combo = combos[i]; combo.list_store = Gtk::ListStore::create(model_columns); combo.list_store->clear(); float eff_current = cp->get(); for (int i = 0; i <= cp->get_max(); i++) { Gtk::TreeModel::iterator iter = combo.list_store->append(); Gtk::TreeModel::Row row = *iter; cp->set(i); row[model_columns.name] = cp->get_value_as_text().utf8().get_data(); row[model_columns.index] = i; } combo.combo = new Gtk::ComboBox; combo.combo->set_model(combo.list_store); combo.combo->pack_start(model_columns.name); combo.combo->set_active((int)eff_current); combo.combo->signal_changed().connect(sigc::bind(sigc::mem_fun(*this, &EffectEditorDefault::_combo_changed), i)); cp->set(eff_current); effect_grid.attach(*combo.combo, 1, i, 1, 1); combo.combo->set_hexpand(true); widgets.push_back(name_label); widgets.push_back(combo.combo); } break; case ControlPort::HINT_TOGGLE: { Gtk::CheckButton *check = new Gtk::CheckButton; check->set_label(cp->get_name().utf8().get_data()); check->set_active(cp->get() > 0.5); 
effect_grid.attach(*check, 0, i, 2, 1); buttons[i] = check; widgets.push_back(check); check->signal_clicked().connect(sigc::bind(sigc::mem_fun(*this, &EffectEditorDefault::_toggle_clicked), i)); } break; } } show_all_children(); } EffectEditorDefault::~EffectEditorDefault() { for (int i = 0; i < widgets.size(); i++) { delete widgets[i]; } } Gtk::Widget *create_default_editor_func(AudioEffect *p_effect, EffectEditor *p_editor) { return new EffectEditorDefault(p_effect); } zytrax-master/gui/effect_editor_default.h000066400000000000000000000022511347722000700211550ustar00rootroot00000000000000#ifndef EFECT_EDITOR_DEFAULT_H #define EFECT_EDITOR_DEFAULT_H #include "engine/audio_effect.h" #include "globals/map.h" #include "gui/effect_editor.h" #include class EffectEditorDefault : public Gtk::ScrolledWindow { class ModelColumns : public Gtk::TreeModelColumnRecord { public: ModelColumns() { add(name); add(index); } Gtk::TreeModelColumn name; Gtk::TreeModelColumn index; }; ModelColumns model_columns; Glib::RefPtr list_store; struct Combo { Glib::RefPtr list_store; //Glib::RefPtr tree_selection; Gtk::ComboBox *combo; }; Gtk::Grid effect_grid; Vector effects; Map scales; Map labels; Map combos; Map buttons; Vector widgets; AudioEffect *effect; void _combo_changed(int p_idx); void _scale_changed(int p_idx); void _toggle_clicked(int p_idx); public: EffectEditorDefault(AudioEffect *p_effect); ~EffectEditorDefault(); }; Gtk::Widget *create_default_editor_func(AudioEffect *p_effect, EffectEditor *p_editor); #endif // EFECT_EDITOR_DEFAULT_H zytrax-master/gui/effect_editor_midi.cpp000066400000000000000000000131331347722000700210070ustar00rootroot00000000000000#include "effect_editor_midi.h" void EffectEditorMIDI::_cc_toggled(const Glib::ustring &path) { Gtk::TreeIter iter = cc_list_store->get_iter(path); ERR_FAIL_COND(!iter); bool visible = (*iter)[cc_model_columns.visible]; (*iter)[cc_model_columns.visible] = !visible; 
midi_effect->set_cc_visible(MIDIEvent::CC(int((*iter)[cc_model_columns.index])), !visible); effect_editor->update_automations(); } void EffectEditorMIDI::_midi_channel_changed() { int channel = midi_channel_spinbox.get_adjustment()->get_value(); midi_effect->set_midi_channel(channel); } void EffectEditorMIDI::_midi_pitch_bend_range_changed() { int pitch_bend_range = midi_pitch_bend_range_spinbox.get_adjustment()->get_value(); midi_effect->set_pitch_bend_range(pitch_bend_range); } String EffectEditorMIDI::_get_text_from_hex(const Vector &p_hex) { String macro_text; for (int j = 0; j < p_hex.size(); j++) { const char *hex = "0123456789ABCDEF"; uint8_t b = p_hex[j]; macro_text += String(String::CharType(hex[b >> 4])); macro_text += String(String::CharType(hex[b & 0xF])); macro_text += " "; } return macro_text; } void EffectEditorMIDI::_macro_edited(const Glib::ustring &path, const Glib::ustring &text) { Gtk::TreeIter iter = macro_list_store->get_iter(path); ERR_FAIL_COND(!iter); Vector hex; uint8_t byte = 0; bool msb = true; for (int i = 0; i < text.length(); i++) { char c = text[i]; uint8_t nibble; if (c >= 'a' && c <= 'f') { nibble = 10 + (c - 'a'); } else if (c >= 'A' && c <= 'F') { nibble = 10 + (c - 'A'); } else if (c >= '0' && c <= '9') { nibble = c - '0'; } else { continue; } if (msb) { byte = nibble << 4; msb = false; } else { byte |= nibble; msb = true; hex.push_back(byte); } } (*iter)[macro_model_columns.text] = _get_text_from_hex(hex).ascii().get_data(); int index = (*iter)[macro_model_columns.index]; midi_effect->set_midi_macro(index, hex); } EffectEditorMIDI::EffectEditorMIDI(AudioEffectMIDI *p_effect, EffectEditor *p_editor) { midi_effect = p_effect; effect_editor = p_editor; append_page(cc_vbox, "MIDI Controls"); cc_vbox.pack_start(midi_grid, Gtk::PACK_SHRINK); midi_channel_label.set_text("MIDI Channel:"); midi_channel_label.set_hexpand(true); midi_grid.attach(midi_channel_label, 0, 0, 1, 1); midi_grid.attach(midi_channel_spinbox, 1, 0, 1, 1); 
midi_channel_spinbox.set_hexpand(true); midi_channel_spinbox.set_adjustment(Gtk::Adjustment::create(0, 0, 15)); midi_channel_spinbox.get_adjustment()->set_value(midi_effect->get_midi_channel()); midi_channel_spinbox.get_adjustment()->signal_value_changed().connect(sigc::mem_fun(*this, &EffectEditorMIDI::_midi_channel_changed)); midi_pitch_bend_range_label.set_text("Pitch Bend Range:"); midi_grid.attach(midi_pitch_bend_range_label, 0, 1, 1, 1); midi_pitch_bend_range_label.set_hexpand(true); midi_grid.attach(midi_pitch_bend_range_spinbox, 1, 1, 1, 1); midi_pitch_bend_range_spinbox.set_hexpand(true); midi_pitch_bend_range_spinbox.set_adjustment(Gtk::Adjustment::create(2, 2, 24)); midi_pitch_bend_range_spinbox.get_adjustment()->set_value(midi_effect->get_pitch_bend_range()); midi_pitch_bend_range_spinbox.get_adjustment()->signal_value_changed().connect(sigc::mem_fun(*this, &EffectEditorMIDI::_midi_pitch_bend_range_changed)); cc_vbox.pack_start(cc_separator, Gtk::PACK_SHRINK); cc_vbox.pack_start(cc_scroll, Gtk::PACK_EXPAND_WIDGET); cc_scroll.add(cc_tree); cc_list_store = Gtk::ListStore::create(cc_model_columns); cc_tree_selection = cc_tree.get_selection(); cc_column.set_title("Visible Controllers to Automate"); cc_column.pack_start(cc_enabled_check, false); cc_column.pack_start(cc_enabled_text, true); cc_enabled_check.signal_toggled().connect(sigc::mem_fun(*this, &EffectEditorMIDI::_cc_toggled)); cc_column.add_attribute(cc_enabled_check.property_active(), cc_model_columns.visible); cc_column.add_attribute(cc_enabled_text.property_text(), cc_model_columns.name); cc_tree.set_model(cc_list_store); cc_tree.append_column(cc_column); cc_tree.get_column(0)->set_expand(true); for (int i = 0; i < MIDIEvent::CC_MAX; i++) { Gtk::TreeModel::iterator iter = cc_list_store->append(); Gtk::TreeModel::Row row = *iter; row[cc_model_columns.name] = MIDIEvent::cc_names[i]; row[cc_model_columns.visible] = midi_effect->is_cc_visible(MIDIEvent::CC(i)); row[cc_model_columns.index] = i; } 
append_page(macro_vbox, "MIDI Macros"); macro_vbox.pack_start(macro_scroll, Gtk::PACK_EXPAND_WIDGET); macro_scroll.add(macro_tree); macro_list_store = Gtk::ListStore::create(macro_model_columns); macro_tree_selection = macro_tree.get_selection(); macro_tree.append_column("Index", macro_model_columns.label); macro_column.set_title("Macro"); macro_column.pack_start(macro_column_text, true); macro_column_text.property_editable() = true; macro_column_text.signal_edited().connect(sigc::mem_fun(*this, &EffectEditorMIDI::_macro_edited)); macro_column.add_attribute(macro_column_text.property_text(), macro_model_columns.text); macro_tree.append_column(macro_column); macro_tree.set_model(macro_list_store); macro_tree.get_column(0)->set_expand(false); macro_tree.get_column(1)->set_expand(true); for (int i = 0; i < AudioEffectMIDI::CUSTOM_MIDI_MACRO_MAX; i++) { if (i == 0) { continue; } Gtk::TreeModel::iterator iter = macro_list_store->append(); Gtk::TreeModel::Row row = *iter; Vector macro = midi_effect->get_midi_macro(i); String macro_text = _get_text_from_hex(macro); row[macro_model_columns.label] = String::num(i).utf8().get_data(); row[macro_model_columns.text] = macro_text.utf8().get_data(); row[macro_model_columns.index] = i - 1; } } zytrax-master/gui/effect_editor_midi.h000066400000000000000000000037271347722000700204640ustar00rootroot00000000000000#ifndef EFFECT_EDITOR_MIDI_H #define EFFECT_EDITOR_MIDI_H #include "effect_editor.h" #include "engine/audio_effect_midi.h" #include class EffectEditorMIDI : public Gtk::Notebook { EffectEditor *effect_editor; AudioEffectMIDI *midi_effect; class CCModelColumns : public Gtk::TreeModelColumnRecord { public: CCModelColumns() { add(name); add(visible); add(index); } Gtk::TreeModelColumn name; Gtk::TreeModelColumn visible; Gtk::TreeModelColumn index; }; CCModelColumns cc_model_columns; Gtk::VBox cc_vbox; Gtk::Grid midi_grid; Gtk::Label midi_channel_label; Gtk::SpinButton midi_channel_spinbox; Gtk::Label midi_pitch_bend_range_label; 
Gtk::SpinButton midi_pitch_bend_range_spinbox; Gtk::VSeparator cc_separator; Glib::RefPtr cc_list_store; Glib::RefPtr cc_tree_selection; Gtk::ScrolledWindow cc_scroll; Gtk::CellRendererToggle cc_enabled_check; Gtk::CellRendererText cc_enabled_text; Gtk::TreeViewColumn cc_column; Gtk::TreeView cc_tree; Gtk::VBox macro_vbox; class MacroModelColumns : public Gtk::TreeModelColumnRecord { public: MacroModelColumns() { add(label); add(text); add(index); } Gtk::TreeModelColumn label; Gtk::TreeModelColumn text; Gtk::TreeModelColumn index; }; MacroModelColumns macro_model_columns; Glib::RefPtr macro_list_store; Glib::RefPtr macro_tree_selection; Gtk::TreeViewColumn macro_column; Gtk::CellRendererText macro_column_text; Gtk::ScrolledWindow macro_scroll; Gtk::TreeView macro_tree; void _cc_toggled(const Glib::ustring &path); void _midi_channel_changed(); void _midi_pitch_bend_range_changed(); void _macro_edited(const Glib::ustring &path, const Glib::ustring &text); String _get_text_from_hex(const Vector &p_hex); public: EffectEditorMIDI(AudioEffectMIDI *p_effect, EffectEditor *p_effect_editor); }; #endif // EFFECT_EDITOR_MIDI_H zytrax-master/gui/icons.cpp000066400000000000000000000011551347722000700163170ustar00rootroot00000000000000#include "icons.h" #include "gui_icons.gen.h" #define STB_IMAGE_IMPLEMENTATION #include "stb_image.h" Gtk::Image create_image_from_icon(const String &p_name) { Gtk::Image image; for (int i = 0; i < gui_icons_count; i++) { if (gui_icons_names[i] == p_name) { int w, h, c; unsigned char *data = stbi_load_from_memory((const stbi_uc *)gui_icons_sources[i], gui_icons_sizes[i], &w, &h, &c, 4); if (data) { Glib::RefPtr ref = Gdk::Pixbuf::create_from_data((const guint8 *)data, Gdk::COLORSPACE_RGB, true, 8, w, h, w * 4); image = Gtk::Image(ref); } break; } } return image; } zytrax-master/gui/icons.h000066400000000000000000000002261347722000700157620ustar00rootroot00000000000000#ifndef ICONS_H #define ICONS_H #include "rstring.h" #include Gtk::Image 
create_image_from_icon(const String &p_name); #endif // ICONS_H zytrax-master/gui/icons/000077500000000000000000000000001347722000700156115ustar00rootroot00000000000000
[binary PNG icon data omitted: icon_add_track.png, icon_next_pattern.png, icon_play.png, icon_prev_pattern.png, icon_settings.png, icon_stop.png]
zytrax-master/gui/interface.cpp000066400000000000000000001763771347722000700171570ustar00rootroot00000000000000#include "interface.h" #include "effect_editor_default.h" #include "icons.h" void Interface::_add_track() { key_bindings->action_activated.emit(KeyBindings::TRACK_ADD_TRACK); } void Interface::_pattern_changed() { pattern_editor.set_current_pattern(pattern.get_adjustment()->get_value()); } void Interface::_octave_changed() { pattern_editor.set_current_octave(octave.get_adjustment()->get_value()); } void Interface::_step_changed() { pattern_editor.set_current_cursor_advance(step.get_adjustment()->get_value()); } void Interface::_volume_changed() { pattern_editor.set_current_volume_mask(volume.get_adjustment()->get_value(), volume_mask.get_active()); } void Interface::_tempo_changed() { if (updating_editors) { return; } undo_redo.begin_action("Set Tempo", true); undo_redo.do_method(&song, &Song::set_bpm, (float)tempo.get_adjustment()->get_value()); undo_redo.undo_method(&song, &Song::set_bpm, song.get_bpm()); undo_redo.do_method(this, &Interface::_update_editors); undo_redo.undo_method(this, &Interface::_update_editors); updating_editors = true; undo_redo.commit_action(); updating_editors = false; } void Interface::_swing_changed() { if (updating_editors) { return; } undo_redo.begin_action("Set Swing", true); undo_redo.do_method(&song, &Song::set_swing, float(swing.get_adjustment()->get_value() / 100.0)); undo_redo.undo_method(&song, &Song::set_swing, song.get_swing()); undo_redo.do_method(this, &Interface::_update_editors); undo_redo.undo_method(this, &Interface::_update_editors); updating_editors = true; undo_redo.commit_action(); updating_editors = false; } void Interface::_zoom_changed() { Gtk::TreeModel::iterator iter = zoom.get_active(); if (iter) { Gtk::TreeModel::Row row = *iter; if (row) { //Get the data for the selected row, using our knowledge of the tree //model: int id = row[zoom_model_columns.index]; 
pattern_editor.set_beat_zoom(PatternEditor::BeatZoom(id)); } } } void Interface::_update_editors() { if (updating_editors) { return; } updating_editors = true; pattern.get_adjustment()->set_value(pattern_editor.get_current_pattern()); octave.get_adjustment()->set_value(pattern_editor.get_current_octave()); step.get_adjustment()->set_value(pattern_editor.get_current_cursor_advance()); volume.get_adjustment()->set_value(pattern_editor.get_current_volume_mask()); volume_mask.set_active(pattern_editor.is_current_volume_mask_active()); tempo.get_adjustment()->set_value(song.get_bpm()); swing.get_adjustment()->set_value(song.get_swing() * 100); zoom.set_active(pattern_editor.get_beat_zoom()); updating_editors = false; } void Interface::_update_title() { String song_name; if (song.get_name() != String()) { song_name = song.get_name(); } if (song_path != String()) { int last = MAX(song_path.find_last("/"), song_path.find_last("\\")); if (last != -1) { song_name = song_path.substr(last + 1, song_path.length()); } } String title; if (song_name != String()) { title = String() + VERSION_SOFTWARE_NAME + " " + VERSION_MKSTRING + " - " + song_name; } else { title = String() + VERSION_SOFTWARE_NAME + " " + VERSION_MKSTRING + " " + VERSION_COPYRIGHT; } if (undo_redo.get_current_version() != save_version) { title += " (*)"; } set_title(title.utf8().get_data()); } void Interface::_export_dialog_callback(int p_order, void *p_userdata) { Gtk::ProgressBar *pb = (Gtk::ProgressBar *)p_userdata; int last_order = -1; for (int i = 0; i < Song::ORDER_MAX; i++) { last_order = i; if (singleton->song.order_get(i) == Song::ORDER_EMPTY) { break; } } if (last_order <= 0) { last_order = 1; } float progress = float(p_order) / float(last_order); pb->set_fraction(progress); singleton->export_wav_label.set_text(String(String("Exporting Order ") + String::num(p_order) + "/" + String::num(last_order) + " (" + String::num(int(progress * 100)) + "%)").utf8().get_data()); while (gtk_events_pending()) { 
		gtk_main_iteration_do(false);
	}
}

bool Interface::_export_dialog_key(GdkEvent *p_key) {
	//avoid closing scan with escape key
	if (p_key->type == GDK_KEY_PRESS && ((GdkEventKey *)(p_key))->keyval == GDK_KEY_Escape) {
		return true;
	} else {
		return false;
	}
}

void Interface::_on_action_activated(KeyBindings::KeyBind p_bind) {
	switch (p_bind) {
		case KeyBindings::FILE_NEW: {
			Gtk::MessageDialog error_box("Clear song? (no undo)", false, Gtk::MESSAGE_INFO, Gtk::BUTTONS_OK_CANCEL);
			error_box.set_transient_for(*this);
			error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
			if (error_box.run() == Gtk::RESPONSE_OK) {
				for (Map<AudioEffect *, EffectEditor *>::Element *E = active_effect_editors.front(); E; E = E->next()) {
					delete E->get();
				}
				active_effect_editors.clear();
				song.clear();
				undo_redo.clean();
				save_version = 0;
				_update_editors();
				_update_tracks();
				_update_title();
				_update_colors();
				song_path = String();
			}
			error_box.hide();
		} break;
		case KeyBindings::FILE_OPEN: {
			Gtk::FileChooserDialog dialog("Select a file to open", Gtk::FILE_CHOOSER_ACTION_OPEN);
			dialog.set_transient_for(*this);
			dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
			//Add response buttons to the dialog:
			gboolean swap_buttons;
			g_object_get(gtk_settings_get_default(), "gtk-alternative-button-order", &swap_buttons, NULL);
			if (swap_buttons) {
				dialog.add_button("Select", Gtk::RESPONSE_OK);
				dialog.add_button("Cancel", Gtk::RESPONSE_CANCEL);
			} else {
				dialog.add_button("Cancel", Gtk::RESPONSE_CANCEL);
				dialog.add_button("Select", Gtk::RESPONSE_OK);
			}
			auto filter_zt = Gtk::FileFilter::create();
			filter_zt->set_name((String(VERSION_SOFTWARE_NAME) + " files").utf8().get_data());
			filter_zt->add_pattern("*.zyt");
			dialog.add_filter(filter_zt);
			if (song_path != String()) {
				dialog.set_filename(song_path.utf8().get_data());
			}
			int result = dialog.run();
			dialog.hide();
			if (result == Gtk::RESPONSE_OK) {
				String path;
				path.parse_utf8(dialog.get_filename().c_str());
				List missing;
				Error err = song_file.load(path, &missing);
				if (err != OK) {
					String err_str;
					if
					(err == ERR_FILE_CANT_OPEN) {
						err_str = "Error: Can't open file (no access?).";
					} else if (err == ERR_FILE_TOO_NEW) {
						err_str = "Error: File was saved by a newer version.";
					} else if (err == ERR_FILE_CORRUPT) {
						err_str = "Error: File is corrupted.";
					} else {
						err_str = "Error: Could not load.";
					}
					Gtk::MessageDialog error_box(err_str.utf8().get_data(), false, Gtk::MESSAGE_ERROR, Gtk::BUTTONS_CLOSE);
					error_box.set_transient_for(*this);
					error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
					error_box.run();
					error_box.hide();
				} else {
					song_path = path;
					undo_redo.clean();
					save_version = 0;
					_update_editors();
					_update_tracks();
					_update_title();
					_update_colors();
					if (missing.size()) {
						String error_text = "The following plugins were not found:\n\n";
						for (List::Element *E = missing.front(); E; E = E->next()) {
							error_text += E->get().provider + ": " + E->get().id + "\n";
						}
						error_text += "\nDummy plugins were used instead.";
						Gtk::MessageDialog error_box(error_text.utf8().get_data(), false, Gtk::MESSAGE_ERROR, Gtk::BUTTONS_CLOSE);
						error_box.set_transient_for(*this);
						error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
						error_box.run();
						error_box.hide();
					}
				}
			}
		} break;
		case KeyBindings::FILE_SAVE: {
			if (song_path != String()) {
				//just save
				Error err = song_file.save(song_path);
				if (err != OK) {
					Gtk::MessageDialog error_box("Error saving file", false, Gtk::MESSAGE_INFO, Gtk::BUTTONS_CLOSE);
					error_box.set_transient_for(*this);
					error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
					error_box.run();
					error_box.hide();
				} else {
					save_version = undo_redo.get_current_version();
					_update_title();
				}
				break;
			}
		}; //fall through, no file
		case KeyBindings::FILE_SAVE_AS: {
			Gtk::FileChooserDialog dialog("Select a file to save the song", Gtk::FILE_CHOOSER_ACTION_SAVE);
			dialog.set_transient_for(*this);
			dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
			//Add response buttons to the dialog:
			gboolean swap_buttons;
			g_object_get(gtk_settings_get_default(), "gtk-alternative-button-order",
&swap_buttons, NULL); if (swap_buttons) { dialog.add_button("Select", Gtk::RESPONSE_OK); dialog.add_button("Cancel", Gtk::RESPONSE_CANCEL); } else { dialog.add_button("Cancel", Gtk::RESPONSE_CANCEL); dialog.add_button("Select", Gtk::RESPONSE_OK); } auto filter_zt = Gtk::FileFilter::create(); filter_zt->set_name((String(VERSION_SOFTWARE_NAME) + " files").utf8().get_data()); filter_zt->add_pattern("*.zyt"); dialog.add_filter(filter_zt); if (song_path != String()) { dialog.set_filename(song_path.utf8().get_data()); } int result = dialog.run(); if (result == Gtk::RESPONSE_OK) { String path; path.parse_utf8(dialog.get_filename().c_str()); if (path.to_lower().get_extension() != "zyt") { path += ".zyt"; } Error err = song_file.save(path); if (err != OK) { Gtk::MessageDialog error_box("Error saving file", false, Gtk::MESSAGE_INFO, Gtk::BUTTONS_CLOSE); error_box.set_transient_for(*this); error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); error_box.run(); error_box.hide(); } else { song_path = path; save_version = undo_redo.get_current_version(); _update_title(); } } dialog.hide(); } break; case KeyBindings::FILE_QUIT: { if (save_version != undo_redo.get_current_version()) { Gtk::MessageDialog error_box("There are unsaved changes.\nQuit anyway?", false, Gtk::MESSAGE_QUESTION, Gtk::BUTTONS_OK_CANCEL); error_box.set_transient_for(*this); error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); int response = error_box.run(); error_box.hide(); if (response != Gtk::RESPONSE_OK) { break; } } application->quit(); } break; case KeyBindings::FILE_EXPORT_WAV: { if (!song.can_play()) { Gtk::MessageDialog error_box("Songs without an order list can't be exported.", false, Gtk::MESSAGE_INFO, Gtk::BUTTONS_CLOSE); error_box.set_transient_for(*this); error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); error_box.run(); error_box.hide(); break; } Gtk::FileChooserDialog dialog("Select a Microsoft Waveform(tm) file to export", Gtk::FILE_CHOOSER_ACTION_SAVE); 
			dialog.set_transient_for(*this);
			dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
			//Add response buttons to the dialog:
			gboolean swap_buttons;
			g_object_get(gtk_settings_get_default(), "gtk-alternative-button-order", &swap_buttons, NULL);
			if (swap_buttons) {
				dialog.add_button("Select", Gtk::RESPONSE_OK);
				dialog.add_button("Cancel", Gtk::RESPONSE_CANCEL);
			} else {
				dialog.add_button("Cancel", Gtk::RESPONSE_CANCEL);
				dialog.add_button("Select", Gtk::RESPONSE_OK);
			}
			auto filter_zt = Gtk::FileFilter::create();
			filter_zt->set_name("Microsoft Waveform");
			filter_zt->add_pattern("*.wav");
			dialog.add_filter(filter_zt);
			if (last_wav_export_path != String()) {
				dialog.set_filename(last_wav_export_path.utf8().get_data());
			}
			int result = dialog.run();
			dialog.hide();
			if (result == Gtk::RESPONSE_OK) {
				String path;
				path.parse_utf8(dialog.get_filename().c_str());
				if (path.to_lower().get_extension() != "wav") {
					path += ".wav";
				}
				Error err;
				{
					Gtk::MessageDialog export_dialog("", false /* use_markup */, Gtk::MESSAGE_OTHER, Gtk::BUTTONS_NONE);
					export_dialog.get_vbox()->get_children()[0]->hide();
					export_dialog.get_vbox()->set_spacing(0);
					Gtk::Button *response_button = export_dialog.add_button("Close", Gtk::RESPONSE_OK);
					response_button->set_sensitive(false);
					Gtk::ProgressBar progress;
					export_dialog.get_vbox()->pack_start(progress, Gtk::PACK_EXPAND_WIDGET);
					export_dialog.get_vbox()->pack_start(export_wav_label, Gtk::PACK_EXPAND_WIDGET);
					Glib::RefPtr<Gdk::Screen> screen = Gdk::Screen::get_default();
					int width = screen->get_width();
					int height = screen->get_height();
					export_dialog.set_default_size(width / 5, 1);
					export_dialog.show_all_children();
					export_dialog.get_vbox()->get_children()[0]->hide();
					export_dialog.set_deletable(false);
					export_dialog.set_transient_for(*this);
					export_dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
					export_dialog.set_title("Exporting Song...");
					export_dialog.signal_event().connect(sigc::mem_fun(*this, &Interface::_export_dialog_key));
					export_dialog.show();
					err = song_file.export_wav(path, 96000, _export_dialog_callback, &progress);
					response_button->set_sensitive(true);
					export_dialog.set_deletable(true);
					export_dialog.set_title("Exporting Done");
					export_wav_label.set_text("All Done!");
					export_dialog.run();
					export_dialog.hide();
				}
				if (err != OK) {
					Gtk::MessageDialog error_box("Error exporting file", false, Gtk::MESSAGE_INFO, Gtk::BUTTONS_CLOSE);
					error_box.set_transient_for(*this);
					error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
					error_box.run();
					error_box.hide();
				} else {
					last_wav_export_path = path;
				}
			}
		} break;
		case KeyBindings::PLAYBACK_PLAY: {
			song.play();
		} break;
		case KeyBindings::PLAYBACK_STOP: {
			song.stop();
		} break;
		case KeyBindings::PLAYBACK_NEXT_PATTERN: {
			song.play_next_pattern();
		} break;
		case KeyBindings::PLAYBACK_PREV_PATTERN: {
			song.play_prev_pattern();
		} break;
		case KeyBindings::PLAYBACK_PLAY_PATTERN: {
			song.play_pattern(pattern_editor.get_current_pattern());
		} break;
		case KeyBindings::PLAYBACK_PLAY_FROM_CURSOR: {
			int current_pattern = pattern_editor.get_current_pattern();
			int current_order = -1;
			for (int i = 0; i < Song::ORDER_MAX; i++) {
				if (song.order_get(i) == current_pattern) {
					current_order = i;
					break;
				}
			}
			if (current_order >= 0) {
				song.play(current_order, pattern_editor.get_cursor_tick());
			} else {
				song.play_pattern(current_pattern, pattern_editor.get_cursor_tick());
			}
		} break;
		case KeyBindings::PLAYBACK_PLAY_FROM_ORDER: {
			song.play(orderlist_editor.get_cursor_order());
		} break;
		case KeyBindings::PLAYBACK_CURSOR_FOLLOW: {
			playback_cursor_follow = !playback_cursor_follow;
			key_bindings->set_action_checked(KeyBindings::PLAYBACK_CURSOR_FOLLOW, playback_cursor_follow);
		} break;
		case KeyBindings::EDIT_UNDO: {
			undo_redo.undo();
		} break;
		case KeyBindings::EDIT_REDO: {
			undo_redo.redo();
		} break;
		case KeyBindings::EDIT_SONG_INFO: {
			Gtk::MessageDialog info_box("", false, Gtk::MESSAGE_INFO, Gtk::BUTTONS_CLOSE);
			info_box.set_transient_for(*this);
			info_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
info_box.set_title("Song Information"); info_box.get_vbox()->set_spacing(4); Gtk::Label label_name; label_name.set_text("Song Name:"); info_box.get_vbox()->pack_start(label_name, Gtk::PACK_SHRINK); Gtk::Entry name; info_box.get_vbox()->pack_start(name, Gtk::PACK_SHRINK); name.set_text(song.get_name().utf8().get_data()); Gtk::Label label_author; label_author.set_text("Author:"); info_box.get_vbox()->pack_start(label_author, Gtk::PACK_SHRINK); Gtk::Entry author; info_box.get_vbox()->pack_start(author, Gtk::PACK_SHRINK); author.set_text(song.get_author().utf8().get_data()); Gtk::Label label_description; label_description.set_text("Description:"); info_box.get_vbox()->pack_start(label_description, Gtk::PACK_SHRINK); Gtk::ScrolledWindow window; Gtk::TextView description; info_box.get_vbox()->pack_start(window, Gtk::PACK_EXPAND_WIDGET); window.add(description); Glib::RefPtr buffer = Gtk::TextBuffer::create(); description.set_buffer(buffer); buffer->set_text(song.get_description().utf8().get_data()); Glib::RefPtr screen = Gdk::Screen::get_default(); int width = screen->get_width(); int height = screen->get_height(); info_box.set_default_size(width / 5, height / 3); info_box.show_all_children(); info_box.get_vbox()->get_children()[0]->hide(); info_box.run(); info_box.hide(); String new_name; String new_author; String new_description; new_name.parse_utf8(name.get_text().c_str()); new_author.parse_utf8(author.get_text().c_str()); new_description.parse_utf8(buffer->get_text().c_str()); if (new_name != song.get_name() || new_author != song.get_author() || new_description != song.get_description()) { undo_redo.begin_action("Change Song Description"); undo_redo.do_method(&song, &Song::set_name, new_name); undo_redo.undo_method(&song, &Song::set_name, song.get_name()); undo_redo.do_method(&song, &Song::set_author, new_author); undo_redo.undo_method(&song, &Song::set_author, song.get_author()); undo_redo.do_method(&song, &Song::set_description, new_description); 
undo_redo.undo_method(&song, &Song::set_description, song.get_description()); undo_redo.commit_action(); } } break; case KeyBindings::EDIT_FOCUS_PATTERN: { pattern_editor.grab_focus(); } break; case KeyBindings::EDIT_FOCUS_ORDERLIST: { orderlist_editor.grab_focus(); } break; case KeyBindings::EDIT_FOCUS_LAST_EDITED_EFFECT: { } break; case KeyBindings::SETTINGS_OPEN: { settings_dialog.run(); settings_dialog.hide(); } break; case KeyBindings::SETTINGS_PATTERN_INPUT_KEYS: { String text; //text = "::Pattern Editing Cheatsheet::\n\n"; text += "Navigation:\n\n"; text += "Row Up / Down: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_MOVE_UP) + ", " + key_bindings->get_keybind_text(KeyBindings::CURSOR_MOVE_DOWN) + "\n"; text += "Field Left / Right: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_MOVE_LEFT) + ", " + key_bindings->get_keybind_text(KeyBindings::CURSOR_MOVE_RIGHT) + "\n"; text += "Next/Prev Column: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_TAB) + ", " + key_bindings->get_keybind_text(KeyBindings::CURSOR_BACKTAB) + "\n"; text += "Page Up / Down: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_PAGE_UP) + ", " + key_bindings->get_keybind_text(KeyBindings::CURSOR_PAGE_DOWN) + "\n"; text += "First/Last Track: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_HOME) + ", " + key_bindings->get_keybind_text(KeyBindings::CURSOR_END) + "\n"; text += "Scroll Up / Down: " + key_bindings->get_keybind_text(KeyBindings::PATTERN_PAN_WINDOW_UP) + ", " + key_bindings->get_keybind_text(KeyBindings::PATTERN_PAN_WINDOW_DOWN) + "\n"; text += "\n\nEditing:\n\n"; text += "Insert/Delete Row: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_INSERT) + ", " + key_bindings->get_keybind_text(KeyBindings::CURSOR_DELETE) + "\n"; text += "Clear Field: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_INSERT) + ", " + key_bindings->get_keybind_text(KeyBindings::CURSOR_DELETE) + "\n"; text += "Raise/Lower Octave: " + 
key_bindings->get_keybind_text(KeyBindings::PATTERN_OCTAVE_RAISE) + ", " + key_bindings->get_keybind_text(KeyBindings::PATTERN_OCTAVE_LOWER) + "\n"; text += "Copy Volume Mask: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_COPY_VOLUME_MASK) + "\n"; text += "Toggle Volume Mask: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_TOGGLE_VOLUME_MASK) + "\n"; text += "Play Note: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_PLAY_NOTE) + "\n"; text += "Play Row: " + key_bindings->get_keybind_text(KeyBindings::CURSOR_PLAY_ROW) + "\n"; text += "Insert Note Off: " + key_bindings->get_keybind_text(KeyBindings::PATTERN_CURSOR_NOTE_OFF) + "\n"; text += "\n\nVirtual Piano Lower Row (C0 -> B0):\n\n"; for (int i = 0; i < 12; i++) { if (i > 0) { text += ", "; } text += key_bindings->get_keybind_text(KeyBindings::KeyBind(KeyBindings::PIANO_C0 + i)); } text += "\n\nVirtual Piano Upper Row (C1 -> E2):\n\n"; for (int i = 0; i < 17; i++) { if (i > 0) { text += ", "; } text += key_bindings->get_keybind_text(KeyBindings::KeyBind(KeyBindings::PIANO_C1 + i)); } Gtk::MessageDialog about_box(text.ascii().get_data(), false, Gtk::MESSAGE_INFO, Gtk::BUTTONS_CLOSE); about_box.set_transient_for(*this); about_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); about_box.set_title("Pattern Editing Cheatsheet"); about_box.run(); about_box.hide(); } break; case KeyBindings::SETTINGS_ABOUT: { Gtk::MessageDialog about_box(Glib::ustring(VERSION_WITH_COPYRIGHT) + "\nhttp://zytrax.org", false, Gtk::MESSAGE_INFO, Gtk::BUTTONS_CLOSE); about_box.set_transient_for(*this); about_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); about_box.set_title("About"); about_box.run(); about_box.hide(); } break; default: { } } } void Interface::_update_selected_track() { int current_track = pattern_editor.get_current_track(); for (int i = 0; i < racks.size(); i++) { racks[i].rack->set_selected(i == current_track); racks[i].volume->set_selected(i == current_track); } _ensure_selected_track_visible(); } 
void Interface::_ensure_selected_track_visible() { int current_track = pattern_editor.get_current_track(); if (racks.size() == 0 || current_track < 0 || current_track >= racks.size()) { return; } int total_size = track_hbox.get_allocated_width(); int page_offset = track_scroll.get_hadjustment()->get_value(); int page_size = track_scroll.get_allocated_width(); int ofs = 0; int size = 0; for (int i = 0; i <= current_track; i++) { ofs += size; size = 0; size += racks[i].rack->get_allocated_width(); size += racks[i].volume->get_allocated_width(); } if (ofs < page_offset) { track_scroll.get_hadjustment()->set_value(ofs); } else if (ofs + size > page_offset + page_size) { track_scroll.get_hadjustment()->set_value(ofs - page_size + size); } } void Interface::_update_tracks() { double previous_scroll = track_scroll.get_hadjustment()->get_value(); int current_track = pattern_editor.get_current_track(); Map offsets; for (int i = 0; i < racks.size(); i++) { offsets[racks[i].rack->get_track()] = racks[i].rack->get_v_offset(); delete racks[i].rack; delete racks[i].volume; } if (rack_filler) { delete rack_filler; rack_filler = NULL; } racks.clear(); for (int i = 0; i < song.get_track_count(); i++) { TrackRacks rack; rack.volume = new TrackRackVolume(i, &song, &undo_redo, theme, key_bindings); rack.v_scroll = new Gtk::VScrollbar; rack.rack = new TrackRackEditor(i, &song, &undo_redo, theme, key_bindings, rack.v_scroll); rack.rack->add_effect.connect(sigc::mem_fun(*this, &Interface::_on_add_effect)); rack.rack->toggle_effect_skip.connect(sigc::mem_fun(*this, &Interface::_on_toggle_effect_skip)); rack.rack->toggle_send_mute.connect(sigc::mem_fun(*this, &Interface::_on_toggle_send_mute)); rack.rack->remove_effect.connect(sigc::mem_fun(*this, &Interface::_on_remove_effect)); rack.rack->remove_send.connect(sigc::mem_fun(*this, &Interface::_on_remove_send)); rack.rack->insert_send_to_track.connect(sigc::mem_fun(*this, &Interface::_on_track_insert_send)); 
rack.rack->send_amount_changed.connect(sigc::mem_fun(*this, &Interface::_on_track_send_amount_changed)); rack.rack->track_swap_effects.connect(sigc::mem_fun(*this, &Interface::_on_track_swap_effects)); rack.rack->track_swap_sends.connect(sigc::mem_fun(*this, &Interface::_on_track_swap_sends)); rack.rack->effect_request_editor.connect(sigc::mem_fun(*this, &Interface::_on_effect_request_editor)); rack.volume->volume_db_changed.connect(sigc::mem_fun(*this, &Interface::_on_track_volume_changed)); if (offsets.has(song.get_track(i))) { rack.rack->set_v_offset(offsets[song.get_track(i)]); } track_hbox.pack_start(*rack.volume, Gtk::PACK_SHRINK); track_hbox.pack_start(*rack.rack, Gtk::PACK_SHRINK); track_hbox.pack_start(*rack.v_scroll, Gtk::PACK_SHRINK); rack.rack->set_selected(i == current_track); rack.volume->set_selected(i == current_track); rack.volume->show(); rack.rack->show(); racks.push_back(rack); } rack_filler = new TrackRackFiller(theme); rack_filler->show(); track_hbox.pack_start(*rack_filler, Gtk::PACK_EXPAND_WIDGET); track_scroll.get_hadjustment()->set_value(previous_scroll); _ensure_selected_track_visible(); } void Interface::_update_volume_mask() { updating_editors = true; volume.get_adjustment()->set_value(pattern_editor.get_current_volume_mask()); volume_mask.set_active(pattern_editor.is_current_volume_mask_active()); updating_editors = false; } void Interface::_update_octave() { updating_editors = true; octave.get_adjustment()->set_value(pattern_editor.get_current_octave()); updating_editors = false; } void Interface::_update_pattern() { updating_editors = true; pattern.get_adjustment()->set_value(pattern_editor.get_current_pattern()); updating_editors = false; } void Interface::_update_step() { updating_editors = true; step.get_adjustment()->set_value(pattern_editor.get_current_cursor_advance()); updating_editors = false; } void Interface::_update_zoom() { updating_editors = true; zoom.set_active(pattern_editor.get_beat_zoom()); updating_editors = false; 
}

void Interface::_on_add_effect(int p_track) {
	add_effect_dialog.set_transient_for(*this);
	add_effect_dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
	add_effect_dialog.update_effect_list();
	if (add_effect_dialog.run() == Gtk::RESPONSE_OK) {
		int idx = add_effect_dialog.get_selected_effect_index();
		ERR_FAIL_COND(idx == -1);
		AudioEffect *effect = fx_factory->instantiate_effect(idx);
		ERR_FAIL_COND(!effect); //can't create
		{
			//configure default commands if they exist
			for (int j = 0; j < effect->get_control_port_count(); j++) {
				ControlPort *cp = effect->get_control_port(j);
				for (int i = 0; i < SettingsDialog::MAX_DEFAULT_COMMANDS; i++) {
					if (SettingsDialog::get_default_command_name(i) == cp->get_identifier()) {
						cp->set_command(SettingsDialog::get_default_command_command(i));
						break;
					}
				}
			}
		}
		undo_redo.begin_action("Create Effect: " + effect->get_name());
		Track *track = song.get_track(p_track);
		undo_redo.do_method(track, &Track::add_audio_effect, effect, -1);
		undo_redo.undo_method(track, &Track::remove_audio_effect, track->get_audio_effect_count());
		undo_redo.do_data(effect);
		undo_redo.do_method(this, &Interface::_update_tracks);
		undo_redo.undo_method(this, &Interface::_update_tracks);
		undo_redo.commit_action();
	}
	add_effect_dialog.hide();
}

void Interface::_redraw_track_edits() {
	for (int i = 0; i < racks.size(); i++) {
		racks[i].rack->queue_draw();
		racks[i].volume->queue_draw();
	}
	main_vu.queue_draw();
}

void Interface::_on_toggle_effect_skip(int p_track, int p_effect) {
	ERR_FAIL_INDEX(p_track, song.get_track_count());
	ERR_FAIL_INDEX(p_effect, song.get_track(p_track)->get_audio_effect_count());
	undo_redo.begin_action("Toggle Effect Skip");
	bool skip = song.get_track(p_track)->get_audio_effect(p_effect)->is_skipped();
	undo_redo.do_method(song.get_track(p_track)->get_audio_effect(p_effect), &AudioEffect::set_skip, !skip);
	undo_redo.undo_method(song.get_track(p_track)->get_audio_effect(p_effect), &AudioEffect::set_skip, skip);
	undo_redo.do_method(this,
&Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); } void Interface::_on_toggle_send_mute(int p_track, int p_send) { ERR_FAIL_INDEX(p_track, song.get_track_count()); ERR_FAIL_INDEX(p_send, song.get_track(p_track)->get_send_count()); undo_redo.begin_action("Toggle Send Mute"); bool mute = song.get_track(p_track)->is_send_muted(p_send); undo_redo.do_method(song.get_track(p_track), &Track::set_send_mute, p_send, !mute); undo_redo.undo_method(song.get_track(p_track), &Track::set_send_mute, p_send, mute); undo_redo.do_method(this, &Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); } void Interface::_erase_effect_editors_for_effect(AudioEffect *p_effect) { if (active_effect_editors.has(p_effect)) { delete active_effect_editors[p_effect]; active_effect_editors.erase(p_effect); } } void Interface::_on_remove_effect(int p_track, int p_effect) { ERR_FAIL_INDEX(p_track, song.get_track_count()); ERR_FAIL_INDEX(p_effect, song.get_track(p_track)->get_audio_effect_count()); undo_redo.begin_action("Remove Effect"); undo_redo.do_method(song.get_track(p_track), &Track::remove_audio_effect, p_effect); AudioEffect *effect = song.get_track(p_track)->get_audio_effect(p_effect); undo_redo.undo_method(song.get_track(p_track), &Track::add_audio_effect, effect, p_effect); undo_redo.undo_data(effect); undo_redo.do_method(this, &Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); _erase_effect_editors_for_effect(effect); } void Interface::_on_remove_send(int p_track, int p_send) { ERR_FAIL_INDEX(p_track, song.get_track_count()); ERR_FAIL_INDEX(p_send, song.get_track(p_track)->get_send_count()); undo_redo.begin_action("Remove Send"); Track *track = song.get_track(p_track); undo_redo.do_method(track, &Track::remove_send, p_send); undo_redo.undo_method(track, &Track::add_send, 
track->get_send_track(p_send), p_send); undo_redo.undo_method(track, &Track::set_send_amount, p_send, track->get_send_amount(p_send)); undo_redo.undo_method(track, &Track::set_send_mute, p_send, track->is_send_muted(p_send)); undo_redo.do_method(this, &Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); } void Interface::_update_song_process_order() { song.update_process_order(); } void Interface::_on_track_insert_send(int p_track, int p_to_track) { ERR_FAIL_INDEX(p_track, song.get_track_count()); ERR_FAIL_COND(p_to_track != Track::SEND_SPEAKERS && (p_to_track < 0 || p_to_track >= song.get_track_count())); ERR_FAIL_COND(p_to_track == p_track); //first try this Track *track = song.get_track(p_track); SoundDriverManager::lock_driver(); track->add_send(p_to_track); bool valid = song.update_process_order(); track->remove_send(track->get_send_count() - 1); SoundDriverManager::unlock_driver(); if (!valid) { Gtk::MessageDialog error_box("Unable to add send, cyclic reference?", false, Gtk::MESSAGE_ERROR, Gtk::BUTTONS_CLOSE); error_box.set_transient_for(*this); error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); error_box.run(); error_box.hide(); return; } undo_redo.begin_action("Add Send"); undo_redo.do_method(track, &Track::add_send, p_to_track, -1); undo_redo.undo_method(track, &Track::remove_send, track->get_send_count()); undo_redo.do_method(this, &Interface::_update_song_process_order); undo_redo.undo_method(this, &Interface::_update_song_process_order); undo_redo.do_method(this, &Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); } void Interface::_on_track_send_amount_changed(int p_track, int p_send, float p_amount) { ERR_FAIL_INDEX(p_track, song.get_track_count()); Track *track = song.get_track(p_track); ERR_FAIL_INDEX(p_send, track->get_send_count()); undo_redo.begin_action("Set Send Amount", true); 
undo_redo.do_method(track, &Track::set_send_amount, p_send, p_amount); undo_redo.undo_method(track, &Track::set_send_amount, p_send, track->get_send_amount(p_send)); undo_redo.do_method(this, &Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); } void Interface::_on_track_swap_effects(int p_track, int p_effect, int p_with_effect) { ERR_FAIL_INDEX(p_track, song.get_track_count()); Track *track = song.get_track(p_track); undo_redo.begin_action("Swap Effects"); undo_redo.do_method(track, &Track::swap_audio_effects, p_effect, p_with_effect); undo_redo.undo_method(track, &Track::swap_audio_effects, p_effect, p_with_effect); undo_redo.do_method(this, &Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); } void Interface::_on_track_swap_sends(int p_track, int p_send, int p_with_send) { ERR_FAIL_INDEX(p_track, song.get_track_count()); Track *track = song.get_track(p_track); undo_redo.begin_action("Swap Sends"); undo_redo.do_method(track, &Track::swap_sends, p_send, p_with_send); undo_redo.undo_method(track, &Track::swap_sends, p_send, p_with_send); undo_redo.do_method(this, &Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); } void Interface::_on_track_volume_changed(int p_track, float p_volume_db) { ERR_FAIL_INDEX(p_track, song.get_track_count()); Track *track = song.get_track(p_track); undo_redo.begin_action("Change Track #" + String::num(p_track) + " volume.", true); undo_redo.do_method(track, &Track::set_mix_volume_db, p_volume_db); undo_redo.undo_method(track, &Track::set_mix_volume_db, track->get_mix_volume_db()); undo_redo.do_method(this, &Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); } void Interface::_on_effect_request_editor(int p_track, int p_effect) { ERR_FAIL_INDEX(p_track, 
	song.get_track_count());
	Track *track = song.get_track(p_track);
	ERR_FAIL_INDEX(p_effect, track->get_audio_effect_count());
	AudioEffect *effect = track->get_audio_effect(p_effect);
	if (effect->get_name() == "DummyPlugin") {
		//do not attempt editing dummy plugins
		Gtk::MessageDialog error_box("This is a placeholder for a missing plugin.\nIt can't be edited.", false, Gtk::MESSAGE_ERROR, Gtk::BUTTONS_CLOSE);
		error_box.set_transient_for(*this);
		error_box.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
		error_box.run();
		error_box.hide();
		return;
	}
	if (!active_effect_editors.has(effect)) {
		//create a new editor
		EffectEditor *effect_editor = new EffectEditor;
		Gtk::Widget *editor = NULL;
		for (int i = plugin_editor_function_count - 1; i >= 0; i--) {
			editor = plugin_editor_create_functions[i](effect, effect_editor);
			if (editor) {
				break;
			}
		}
		if (!editor) {
			delete effect_editor; //no editor widget could be created, free the container
			ERR_FAIL_COND(!editor);
		}
		effect_editor->edit(effect, track, editor);
		effect_editor->toggle_automation_visibility.connect(sigc::mem_fun(*this, &Interface::_on_toggle_automation_visibility));
		effect_editor->select_automation_command.connect(sigc::mem_fun(*this, &Interface::_on_select_automation_command));
		active_effect_editors[effect] = effect_editor;
		effect_editor->set_title(effect->get_name().utf8().get_data());
		//nope, don't do it
		//effect_editor->set_transient_for(*this);
		effect_editor->set_position(Gtk::WIN_POS_CENTER_ALWAYS);
		effect_editor->signal_focus_in_event().connect(sigc::bind(sigc::mem_fun(*this, &Interface::_on_editor_window_gained_focus), track));
	} else {
		active_effect_editors[effect]->hide();
	}
	active_effect_editors[effect]->show();
}

void Interface::_update_editor_automations_for_effect(AudioEffect *p_effect) {
	if (active_effect_editors.has(p_effect)) {
		active_effect_editors[p_effect]->update_automations();
	}
	pattern_editor.redraw_and_validate_cursor();
}

void Interface::_on_select_automation_command(Track *p_track, AudioEffect *p_effect, int p_automation, int p_command) {
	ControlPort *port =
p_effect->get_control_port(p_automation); undo_redo.begin_action("Change Automation Command"); undo_redo.do_method(port, &ControlPort::set_command, (char)p_command); undo_redo.undo_method(port, &ControlPort::set_command, port->get_command()); undo_redo.do_method(this, &Interface::_update_editor_automations_for_effect, p_effect); undo_redo.undo_method(this, &Interface::_update_editor_automations_for_effect, p_effect); undo_redo.commit_action(); } void Interface::_on_toggle_automation_visibility(Track *p_track, AudioEffect *p_effect, int p_automation, bool p_visible) { if (p_visible) { undo_redo.begin_action("Create Automation"); int disabled_index = -1; for (int i = 0; i < p_track->get_disabled_automation_count(); i++) { Automation *a = p_track->get_disabled_automation(i); if (a->get_control_port() == p_effect->get_control_port(p_automation)) { disabled_index = i; break; } } if (disabled_index >= 0) { //move from disabled to enabled Automation *a = p_track->get_disabled_automation(disabled_index); undo_redo.do_method(p_track, &Track::remove_disabled_automation, disabled_index); undo_redo.do_method(p_track, &Track::add_automation, a, -1); undo_redo.undo_method(p_track, &Track::remove_automation, p_track->get_automation_count()); undo_redo.undo_method(p_track, &Track::add_disabled_automation, a, disabled_index); } else { Automation *a = new Automation(p_effect->get_control_port(p_automation), p_effect); undo_redo.do_method(p_track, &Track::add_automation, a, -1); undo_redo.undo_method(p_track, &Track::remove_automation, p_track->get_automation_count()); undo_redo.do_data(a); } undo_redo.do_method(this, &Interface::_update_editor_automations_for_effect, p_effect); undo_redo.undo_method(this, &Interface::_update_editor_automations_for_effect, p_effect); undo_redo.commit_action(); } else { int index = -1; for (int i = 0; i < p_track->get_automation_count(); i++) { Automation *a = p_track->get_automation(i); if (a->get_control_port() == 
p_effect->get_control_port(p_automation)) { index = i; break; } } ERR_FAIL_COND(index == -1); Automation *a = p_track->get_automation(index); undo_redo.begin_action("Remove Automation"); undo_redo.do_method(p_track, &Track::remove_automation, index); if (!a->is_empty()) { undo_redo.do_method(p_track, &Track::add_disabled_automation, a, -1); undo_redo.undo_method(p_track, &Track::remove_disabled_automation, p_track->get_disabled_automation_count()); } undo_redo.undo_method(p_track, &Track::add_automation, a, index); undo_redo.do_method(this, &Interface::_update_editor_automations_for_effect, p_effect); undo_redo.undo_method(this, &Interface::_update_editor_automations_for_effect, p_effect); undo_redo.commit_action(); } } void Interface::_on_application_startup() { key_bindings->initialize(application, this); menu = Gio::Menu::create(); file_menu = Gio::Menu::create(); file_menu_file = Gio::Menu::create(); file_menu->append_section(file_menu_file); file_menu_file->append("New", key_bindings->get_keybind_detailed_name(KeyBindings::FILE_NEW).ascii().get_data()); file_menu_file->append("Open", key_bindings->get_keybind_detailed_name(KeyBindings::FILE_OPEN).ascii().get_data()); file_menu_file->append("Save", key_bindings->get_keybind_detailed_name(KeyBindings::FILE_SAVE).ascii().get_data()); file_menu_file->append("Save As", key_bindings->get_keybind_detailed_name(KeyBindings::FILE_SAVE_AS).ascii().get_data()); file_menu_export = Gio::Menu::create(); file_menu->append_section(file_menu_export); file_menu_export->append("Export to WAV", key_bindings->get_keybind_detailed_name(KeyBindings::FILE_EXPORT_WAV).ascii().get_data()); file_menu_exit = Gio::Menu::create(); file_menu->append_section(file_menu_exit); file_menu_exit->append("Quit", key_bindings->get_keybind_detailed_name(KeyBindings::FILE_QUIT).ascii().get_data()); menu->append_submenu("File", file_menu); play_menu = Gio::Menu::create(); play_menu_play = Gio::Menu::create(); play_menu->append_section(play_menu_play); 
play_menu_play->append("Play Song", key_bindings->get_keybind_detailed_name(KeyBindings::PLAYBACK_PLAY).ascii().get_data()); play_menu_play->append("Stop", key_bindings->get_keybind_detailed_name(KeyBindings::PLAYBACK_STOP).ascii().get_data()); play_menu_seek = Gio::Menu::create(); play_menu->append_section(play_menu_seek); play_menu_seek->append("Skip to Next Pattern", key_bindings->get_keybind_detailed_name(KeyBindings::PLAYBACK_NEXT_PATTERN).ascii().get_data()); play_menu_seek->append("Skip to Prev Pattern", key_bindings->get_keybind_detailed_name(KeyBindings::PLAYBACK_PREV_PATTERN).ascii().get_data()); play_menu_pattern = Gio::Menu::create(); play_menu->append_section(play_menu_pattern); play_menu_pattern->append("Play Current Pattern", key_bindings->get_keybind_detailed_name(KeyBindings::PLAYBACK_PLAY_PATTERN).ascii().get_data()); play_menu_pattern->append("Play From Cursor", key_bindings->get_keybind_detailed_name(KeyBindings::PLAYBACK_PLAY_FROM_CURSOR).ascii().get_data()); play_menu_pattern->append("Play From Current Order", key_bindings->get_keybind_detailed_name(KeyBindings::PLAYBACK_PLAY_FROM_ORDER).ascii().get_data()); play_menu_extra = Gio::Menu::create(); play_menu->append_section(play_menu_extra); play_menu_extra->append("Toggle Follow Song", key_bindings->get_keybind_detailed_name(KeyBindings::PLAYBACK_CURSOR_FOLLOW).ascii().get_data()); menu->append_submenu("Play", play_menu); edit_menu = Gio::Menu::create(); edit_menu_undo = Gio::Menu::create(); edit_menu->append_section(edit_menu_undo); edit_menu_undo->append("Undo", key_bindings->get_keybind_detailed_name(KeyBindings::EDIT_UNDO).ascii().get_data()); edit_menu_undo->append("Redo", key_bindings->get_keybind_detailed_name(KeyBindings::EDIT_REDO).ascii().get_data()); edit_menu_info = Gio::Menu::create(); edit_menu->append_section(edit_menu_info); edit_menu_info->append("Song Information", key_bindings->get_keybind_detailed_name(KeyBindings::EDIT_SONG_INFO).ascii().get_data()); edit_menu_focus = 
Gio::Menu::create(); edit_menu->append_section(edit_menu_focus); edit_menu_focus->append("Focus on Pattern", key_bindings->get_keybind_detailed_name(KeyBindings::EDIT_FOCUS_PATTERN).ascii().get_data()); edit_menu_focus->append("Focus on Orderlist", key_bindings->get_keybind_detailed_name(KeyBindings::EDIT_FOCUS_ORDERLIST).ascii().get_data()); edit_menu_focus->append("Open Last Edited Effect", key_bindings->get_keybind_detailed_name(KeyBindings::EDIT_FOCUS_LAST_EDITED_EFFECT).ascii().get_data()); menu->append_submenu("Edit", edit_menu); select_menu = Gio::Menu::create(); select_menu_select = Gio::Menu::create(); select_menu->append_section(select_menu_select); select_menu_select->append("Begin", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECT_BEGIN).ascii().get_data()); select_menu_select->append("End", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECT_END).ascii().get_data()); select_menu_select->append("Column/Track/Pattern", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECT_COLUMN_TRACK_ALL).ascii().get_data()); select_menu_select->append("Clear", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_DISABLE).ascii().get_data()); select_menu_clipboard = Gio::Menu::create(); select_menu->append_section(select_menu_clipboard); select_menu_clipboard->append("Cut", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_CUT).ascii().get_data()); select_menu_clipboard->append("Copy", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_COPY).ascii().get_data()); select_menu_clipboard->append("Paste Insert", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_PASTE_INSERT).ascii().get_data()); select_menu_clipboard->append("Paste Overwrite", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_PASTE_OVERWRITE).ascii().get_data()); select_menu_clipboard->append("Paste Mix", 
key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_PASTE_MIX).ascii().get_data()); select_menu_transpose = Gio::Menu::create(); select_menu->append_section(select_menu_transpose); select_menu_transpose->append("Raise Note(s) a Semitone", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_RAISE_NOTES_SEMITONE).ascii().get_data()); select_menu_transpose->append("Raise Note(s) an Octave", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_RAISE_NOTES_OCTAVE).ascii().get_data()); select_menu_transpose->append("Lower Note(s) a Semitone", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_LOWER_NOTES_SEMITONE).ascii().get_data()); select_menu_transpose->append("Lower Note(s) an Octave", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_LOWER_NOTES_OCTAVE).ascii().get_data()); select_menu_operations = Gio::Menu::create(); select_menu->append_section(select_menu_operations); select_menu_operations->append("Set Volume Mask", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_SET_VOLUME).ascii().get_data()); select_menu_operations->append("Interpolate Volume or Automation", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_INTERPOLATE_VOLUME_AUTOMATION).ascii().get_data()); select_menu_operations->append("Amplify Volume or Automation", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_AMPLIFY_VOLUME_AUTOMATION).ascii().get_data()); select_menu_length = Gio::Menu::create(); select_menu->append_section(select_menu_length); select_menu_length->append("Double Length", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_DOUBLE_LENGTH).ascii().get_data()); select_menu_length->append("Halve Length", key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_HALVE_LENGTH).ascii().get_data()); select_menu_length->append("Scale Length", 
key_bindings->get_keybind_detailed_name(KeyBindings::PATTERN_SELECTION_SCALE_LENGTH).ascii().get_data()); menu->append_submenu("Selection", select_menu); settings_menu = Gio::Menu::create(); settings_menu_preferences = Gio::Menu::create(); settings_menu->append_section(settings_menu_preferences); settings_menu_preferences->append("Preferences", key_bindings->get_keybind_detailed_name(KeyBindings::SETTINGS_OPEN).ascii().get_data()); settings_menu_cheat = Gio::Menu::create(); settings_menu->append_section(settings_menu_cheat); settings_menu_cheat->append("Pattern Editing Cheat-Sheet", key_bindings->get_keybind_detailed_name(KeyBindings::SETTINGS_PATTERN_INPUT_KEYS).ascii().get_data()); settings_menu_about = Gio::Menu::create(); settings_menu->append_section(settings_menu_about); settings_menu_about->append("About", key_bindings->get_keybind_detailed_name(KeyBindings::SETTINGS_ABOUT).ascii().get_data()); menu->append_submenu("Settings", settings_menu); application->set_menubar(menu); key_bindings->action_activated.connect(sigc::mem_fun(*this, &Interface::_on_action_activated)); settings_dialog.initialize_bindings(); pattern_editor.initialize_menus(); } void Interface::add_editor_plugin_function(EffectEditorPluginFunc p_plugin) { ERR_FAIL_COND(plugin_editor_function_count == MAX_EFFECT_EDITOR_PLUGINS); plugin_editor_create_functions[plugin_editor_function_count++] = p_plugin; } void Interface::_on_pattern_settings_open() { pattern_settings_popover.popup(); pattern_settings_length.get_adjustment()->set_value(song.pattern_get_beats(pattern_editor.get_current_pattern())); bar_length.get_adjustment()->set_value(song.pattern_get_beats_per_bar(pattern_editor.get_current_pattern())); change_swing.set_active(song.pattern_get_swing_beat_divisor(pattern_editor.get_current_pattern())); change_next.get_adjustment()->set_value(1.0); } void Interface::_on_pattern_settings_change() { undo_redo.begin_action("Change Pattern(s)"); int patterns = 
change_next.get_adjustment()->get_value(); int beats = pattern_settings_length.get_adjustment()->get_value(); int beats_per_bar = bar_length.get_adjustment()->get_value(); Song::SwingBeatDivisor swing_divisor = Song::SWING_BEAT_DIVISOR_1; Gtk::TreeModel::iterator iter = change_swing.get_active(); if (iter) { Gtk::TreeModel::Row row = *iter; if (row) {
//Get the data for the selected row, using our knowledge of the tree
//model:
int id = row[zoom_model_columns.index]; swing_divisor = Song::SwingBeatDivisor(id); } } for (int i = 0; i < patterns; i++) { int pattern = pattern_editor.get_current_pattern() + i; undo_redo.do_method(&song, &Song::pattern_set_beats, pattern, beats); undo_redo.undo_method(&song, &Song::pattern_set_beats, pattern, song.pattern_get_beats(pattern)); undo_redo.do_method(&song, &Song::pattern_set_beats_per_bar, pattern, beats_per_bar); undo_redo.undo_method(&song, &Song::pattern_set_beats_per_bar, pattern, song.pattern_get_beats_per_bar(pattern)); undo_redo.do_method(&song, &Song::pattern_set_swing_beat_divisor, pattern, swing_divisor); undo_redo.undo_method(&song, &Song::pattern_set_swing_beat_divisor, pattern, song.pattern_get_swing_beat_divisor(pattern)); undo_redo.do_method(&pattern_editor, &PatternEditor::redraw_and_validate_cursor); undo_redo.undo_method(&pattern_editor, &PatternEditor::redraw_and_validate_cursor); } undo_redo.commit_action(); pattern_settings_popover.popdown(); } void Interface::_update_colors() { pattern_editor.queue_draw(); orderlist_editor.queue_draw(); if (rack_filler) { rack_filler->queue_draw(); } for (int i = 0; i < racks.size(); i++) { racks[i].rack->queue_draw(); racks[i].volume->queue_draw(); } } void Interface::_undo_redo_action(const String &p_name, void *p_userdata) { Interface *interface = (Interface *)p_userdata; interface->_update_title(); } bool Interface::_close_request(GdkEventAny *event) { _on_action_activated(KeyBindings::FILE_QUIT); return true; } void Interface::_process_audio(AudioFrame *p_frames,
int p_amount) { singleton->song.process_audio(p_frames, p_amount); } void Interface::_process_midi(double p_delta, const MIDIEvent &p_event) { AudioEffect::Event ev; switch (p_event.type) { case MIDIEvent::MIDI_NOTE_ON: { ev.type = AudioEffect::Event::TYPE_NOTE; ev.param8 = p_event.note.note; ev.paramf = p_event.note.velocity / 127.0; } break; case MIDIEvent::MIDI_NOTE_OFF: { ev.type = AudioEffect::Event::TYPE_NOTE_OFF; ev.param8 = p_event.note.note; ev.paramf = p_event.note.velocity / 127.0; } break; case MIDIEvent::MIDI_NOTE_PRESSURE: { ev.type = AudioEffect::Event::TYPE_AFTERTOUCH; ev.param8 = p_event.note.note; ev.paramf = p_event.note.velocity / 127.0; } break; default: { return; } } ev.offset = 0; singleton->song.play_single_event(singleton->pattern_editor.get_current_track(), ev); } Interface *Interface::singleton = NULL; bool Interface::_playback_timer_callback() { if (!song.is_playing()) { pattern_editor.set_playback_pos(-1, -1); orderlist_editor.set_playback_order(-1); } else { pattern_editor.set_playback_pos(song.get_playing_pattern(), song.get_playing_tick()); orderlist_editor.set_playback_order(song.get_playing_order()); if (playback_cursor_follow) { pattern_editor.set_playback_cursor(song.get_playing_pattern(), song.get_playing_tick()); } } for (int i = 0; i < racks.size(); i++) { racks[i].volume->update_peak(); } main_vu.update_peak(); return true; } void Interface::_on_main_volume_db_changed(float p_db) { undo_redo.begin_action("Change Master Volume", true); undo_redo.do_method(&song, &Song::set_main_volume_db, p_db); undo_redo.undo_method(&song, &Song::set_main_volume_db, song.get_main_volume_db()); undo_redo.do_method(this, &Interface::_redraw_track_edits); undo_redo.undo_method(this, &Interface::_redraw_track_edits); undo_redo.commit_action(); } void Interface::_on_song_step_buffer_changed() { int step_size = SoundDriverManager::get_buffer_size_frames(SoundDriverManager::get_step_buffer_size()); song.set_process_buffer_size(step_size); } void 
Interface::_on_song_mix_rate_changed() { int mix_rate = SoundDriverManager::get_mix_frequency_hz(SoundDriverManager::get_mix_frequency()); song.set_sampling_rate(mix_rate); } void Interface::_update_song_mixing_parameters() { song.set_sampling_rate(SoundDriverManager::get_mix_frequency_hz(SoundDriverManager::get_mix_frequency())); song.set_process_buffer_size(SoundDriverManager::get_buffer_size_frames(SoundDriverManager::get_step_buffer_size())); } bool Interface::_on_editor_window_gained_focus(GdkEventFocus *, Track *p_track) { for (int i = 0; i < song.get_track_count(); i++) { if (song.get_track(i) == p_track) { pattern_editor.set_focus_on_track(i); return false; } } return false; } Interface::Interface(Gtk::Application *p_application, AudioEffectFactory *p_fx_factory, Theme *p_theme, KeyBindings *p_key_bindings) : add_effect_dialog(p_fx_factory), song_file(&song, p_fx_factory), pattern_editor(&song, &undo_redo, p_theme, p_key_bindings), orderlist_editor(&song, &undo_redo, p_theme, p_key_bindings), settings_dialog(p_theme, p_key_bindings, p_fx_factory), main_vu(&song, &undo_redo, p_theme) { theme = p_theme; key_bindings = p_key_bindings; plugin_editor_function_count = 0; application = p_application; application->signal_startup().connect(sigc::mem_fun(*this, &Interface::_on_application_startup)); updating_editors = true; fx_factory = p_fx_factory; add(main_vbox); main_vbox.pack_start(grid, Gtk::PACK_SHRINK); grid.set_column_spacing(4); grid.set_row_spacing(4); grid.attach(play_hbox, 0, 0, 1, 1); prev_pattern_icon = create_image_from_icon("PrevPattern"); prev_pattern.set_image(prev_pattern_icon); play_hbox.pack_start(prev_pattern, Gtk::PACK_SHRINK); prev_pattern.signal_clicked().connect(sigc::bind(sigc::mem_fun(*this, &Interface::_on_action_activated), KeyBindings::PLAYBACK_PREV_PATTERN)); play_icon = create_image_from_icon("Play"); play.set_image(play_icon); play_hbox.pack_start(play, Gtk::PACK_SHRINK); play.signal_clicked().connect(sigc::bind(sigc::mem_fun(*this, 
&Interface::_on_action_activated), KeyBindings::PLAYBACK_PLAY)); stop_icon = create_image_from_icon("Stop"); stop.set_image(stop_icon); play_hbox.pack_start(stop, Gtk::PACK_SHRINK); stop.signal_clicked().connect(sigc::bind(sigc::mem_fun(*this, &Interface::_on_action_activated), KeyBindings::PLAYBACK_STOP)); next_pattern_icon = create_image_from_icon("NextPattern"); next_pattern.set_image(next_pattern_icon); play_hbox.pack_start(next_pattern, Gtk::PACK_SHRINK); next_pattern.signal_clicked().connect(sigc::bind(sigc::mem_fun(*this, &Interface::_on_action_activated), KeyBindings::PLAYBACK_NEXT_PATTERN)); sep1.set_text(" "); play_hbox.pack_start(sep1, Gtk::PACK_SHRINK); play_pattern_icon = create_image_from_icon("PlayPattern"); play_pattern.set_image(play_pattern_icon); play_hbox.pack_start(play_pattern, Gtk::PACK_SHRINK); play_pattern.signal_clicked().connect(sigc::bind(sigc::mem_fun(*this, &Interface::_on_action_activated), KeyBindings::PLAYBACK_PLAY_PATTERN)); play_cursor_icon = create_image_from_icon("PlayFromCursor"); play_cursor.set_image(play_cursor_icon); play_hbox.pack_start(play_cursor, Gtk::PACK_SHRINK); play_cursor.signal_clicked().connect(sigc::bind(sigc::mem_fun(*this, &Interface::_on_action_activated), KeyBindings::PLAYBACK_PLAY_FROM_CURSOR)); sep2.set_text(" "); play_hbox.pack_start(sep2, Gtk::PACK_SHRINK); add_track_icon = create_image_from_icon("AddTrack"); add_track.set_image(add_track_icon); add_track.set_label(" New Track"); add_track.set_always_show_image(true); play_hbox.pack_start(add_track, Gtk::PACK_SHRINK); add_track.signal_clicked().connect(sigc::mem_fun(*this, &Interface::_add_track));
//play_hbox.pack_start(spacer1, Gtk::PACK_EXPAND_WIDGET);
grid.attach(main_vu, 1, 0, 3, 1); main_vu.main_volume_db_changed.connect(sigc::mem_fun(*this, &Interface::_on_main_volume_db_changed)); tempo_label.set_text(" Tempo: "); grid.attach(tempo_label, 4, 0, 1, 1); grid.attach(tempo, 5, 0, 1, 1); tempo.set_adjustment(Gtk::Adjustment::create(1, 30, 299));
tempo.get_adjustment()->signal_value_changed().connect(sigc::mem_fun(*this, &Interface::_tempo_changed)); swing_label.set_text(" Swing(%): "); grid.attach(swing_label, 6, 0, 1, 1); grid.attach(swing, 7, 0, 1, 1); swing.set_adjustment(Gtk::Adjustment::create(0, 0, 100)); swing.get_adjustment()->signal_value_changed().connect(sigc::mem_fun(*this, &Interface::_swing_changed)); grid.attach(pattern_hbox, 0, 1, 1, 1); pattern_label.set_text(" Pattern: "); pattern_hbox.pack_start(pattern_label, Gtk::PACK_SHRINK); pattern_hbox.pack_start(pattern, Gtk::PACK_SHRINK); pattern.set_adjustment(Gtk::Adjustment::create(0, 0, 999)); pattern.get_adjustment()->signal_value_changed().connect(sigc::mem_fun(*this, &Interface::_pattern_changed)); pattern_settings_icon = create_image_from_icon("Settings"); pattern_settings.set_image(pattern_settings_icon); pattern_hbox.pack_start(pattern_settings, Gtk::PACK_SHRINK); pattern_settings.signal_clicked().connect(sigc::mem_fun(*this, &Interface::_on_pattern_settings_open)); /*pattern_length_label.set_text(" Length: "); top_hbox.pack_start(pattern_length_label, Gtk::PACK_SHRINK); top_hbox.pack_start(pattern_length, Gtk::PACK_SHRINK); pattern_length_set_next.set_label("Next.."); top_hbox.pack_start(pattern_length_set_next, Gtk::PACK_SHRINK); */ zoom_label.set_text(" Zoom: "); pattern_hbox.pack_start(zoom_label, Gtk::PACK_SHRINK); pattern_hbox.pack_start(zoom, Gtk::PACK_SHRINK); { zoomlist_store = Gtk::ListStore::create(zoom_model_columns); zoom.set_model(zoomlist_store); const char *beat_zoom_text[PatternEditor::BEAT_ZOOM_MAX] = { "1 Beat", "1/2 Beat", "1/3 Beat", "1/4 Beat", "1/6 Beat", "1/8 Beat", "1/12 Beat", "1/16 Beat", "1/24 Beat", "1/32 Beat", "1/48 Beat", "1/64 Beat" }; for (int i = 0; i < PatternEditor::BEAT_ZOOM_MAX; i++) { Gtk::TreeModel::Row row = *(zoomlist_store->append()); row[zoom_model_columns.name] = beat_zoom_text[i]; row[zoom_model_columns.index] = i; zoom_rows.push_back(row); } zoom.pack_start(zoom_model_columns.name); } 
zoom.set_active(zoom_rows[3]); zoom.signal_changed().connect(sigc::mem_fun(*this, &Interface::_zoom_changed)); grid.attach(spacer1, 1, 1, 1, 1); spacer1.set_hexpand(true); volume_mask.set_label(" Volume: "); grid.attach(volume_mask, 2, 1, 1, 1); grid.attach(volume, 3, 1, 1, 1); volume.set_adjustment(Gtk::Adjustment::create(0, 0, 99)); volume.get_adjustment()->signal_value_changed().connect(sigc::mem_fun(*this, &Interface::_volume_changed)); volume_mask.signal_clicked().connect(sigc::mem_fun(*this, &Interface::_volume_changed)); octave_label.set_text(" Octave: "); grid.attach(octave_label, 4, 1, 1, 1); grid.attach(octave, 5, 1, 1, 1); octave.set_adjustment(Gtk::Adjustment::create(0, 0, 10)); octave.get_adjustment()->signal_value_changed().connect(sigc::mem_fun(*this, &Interface::_octave_changed)); step_label.set_text(" Step: "); grid.attach(step_label, 6, 1, 1, 1); grid.attach(step, 7, 1, 1, 1); step.set_adjustment(Gtk::Adjustment::create(1, 0, 16)); step.get_adjustment()->signal_value_changed().connect(sigc::mem_fun(*this, &Interface::_step_changed)); main_vbox.pack_start(main_split, Gtk::PACK_EXPAND_WIDGET); main_split.pack1(main_hbox, true, false); main_hbox.pack_start(pattern_vbox, Gtk::PACK_EXPAND_WIDGET); pattern_vbox.pack_start(pattern_editor, Gtk::PACK_EXPAND_WIDGET); main_hbox.pack_start(pattern_vscroll, Gtk::PACK_SHRINK); main_hbox.pack_start(orderlist_editor, Gtk::PACK_SHRINK); main_hbox.pack_start(orderlist_vscroll, Gtk::PACK_SHRINK); pattern_vbox.pack_start(pattern_hscroll, Gtk::PACK_SHRINK); main_split.pack2(track_scroll, false, false); track_scroll.add(track_hbox); track_scroll.set_propagate_natural_height(true); track_scroll.set_policy(Gtk::POLICY_ALWAYS, Gtk::POLICY_NEVER);
//pattern_editor.track_edited.connect(sigc::mem_fun(this, &Interface::_track_edited));
pattern_editor.track_layout_changed.connect(sigc::mem_fun(this, &Interface::_update_tracks)); pattern_editor.current_track_changed.connect(sigc::mem_fun(this,
&Interface::_update_selected_track)); pattern_editor.volume_mask_changed.connect(sigc::mem_fun(this, &Interface::_update_volume_mask)); pattern_editor.octave_changed.connect(sigc::mem_fun(this, &Interface::_update_octave)); pattern_editor.pattern_changed.connect(sigc::mem_fun(this, &Interface::_update_pattern)); pattern_editor.step_changed.connect(sigc::mem_fun(this, &Interface::_update_step)); pattern_editor.zoom_changed.connect(sigc::mem_fun(this, &Interface::_update_zoom)); pattern_editor.erase_effect_editor_request.connect(sigc::mem_fun(this, &Interface::_erase_effect_editors_for_effect)); pattern_editor.set_hscroll(pattern_hscroll.get_adjustment()); pattern_editor.set_vscroll(pattern_vscroll.get_adjustment()); orderlist_editor.set_vscroll(orderlist_vscroll.get_adjustment()); { change_swing_store = Gtk::ListStore::create(zoom_model_columns); change_swing.set_model(change_swing_store); const char *swing_divisor[Song::SWING_BEAT_DIVISOR_MAX] = { "1 Beat", "1/2 Beat", "1/3 Beat", "1/4 Beat", "1/6 Beat", "1/8 Beat" }; for (int i = 0; i < Song::SWING_BEAT_DIVISOR_MAX; i++) { Gtk::TreeModel::Row row = *(change_swing_store->append()); row[zoom_model_columns.name] = swing_divisor[i]; row[zoom_model_columns.index] = i; swing_rows.push_back(row); } change_swing.pack_start(zoom_model_columns.name); } pattern_settings_popover.add(pattern_settings_grid); pattern_settings_length_label.set_text(" Pattern Length (beats):"); pattern_settings_grid.attach(pattern_settings_length_label, 0, 0, 1, 1); pattern_settings_length.set_adjustment(Gtk::Adjustment::create(16, 4, 128)); pattern_settings_grid.attach(pattern_settings_length, 1, 0, 1, 1); bar_length_label.set_text("Bar Length (beats):"); pattern_settings_grid.attach(bar_length_label, 0, 1, 1, 1); bar_length.set_adjustment(Gtk::Adjustment::create(4, 2, 16)); pattern_settings_grid.attach(bar_length, 1, 1, 1, 1); change_swing_label.set_text("Swing:"); pattern_settings_grid.attach(change_swing_label, 0, 2, 1, 1); 
pattern_settings_grid.attach(change_swing, 1, 2, 1, 1);
//pattern_settings_vsep.set_text("sas ");
pattern_settings_vsep.set_size_request(20, 2); pattern_settings_grid.attach(pattern_settings_vsep, 0, 3, 2, 1); change_next_label.set_text("Patterns to Apply:"); pattern_settings_grid.attach(change_next_label, 0, 4, 1, 1); change_next.set_adjustment(Gtk::Adjustment::create(1, 1, 999)); pattern_settings_grid.attach(change_next, 1, 4, 1, 1); pattern_settings_grid.attach(pattern_settings_change_button, 0, 5, 2, 1); pattern_settings_change_button.set_label("Change"); pattern_settings_change_button.signal_clicked().connect(sigc::mem_fun(*this, &Interface::_on_pattern_settings_change)); pattern_settings_popover.set_relative_to(pattern_settings); pattern_settings_popover.set_position(Gtk::POS_BOTTOM); pattern_settings_popover.show_all_children(); settings_dialog.update_colors.connect(sigc::mem_fun(*this, &Interface::_update_colors)); settings_dialog.set_transient_for(*this); settings_dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); settings_dialog.update_song_step_buffer.connect(sigc::mem_fun(*this, &Interface::_on_song_step_buffer_changed)); settings_dialog.update_mix_rate.connect(sigc::mem_fun(*this, &Interface::_on_song_mix_rate_changed)); show_all_children(); rack_filler = NULL; updating_editors = false; save_version = 0;
//pattern_editor.init();
_update_editors(); undo_redo.set_action_callback(_undo_redo_action, this); _update_title(); signal_delete_event().connect(sigc::mem_fun(*this, &Interface::_close_request)); singleton = this; playback_cursor_follow = false;
//setup song
_update_song_mixing_parameters(); SoundDriverManager::set_mix_callback(_process_audio); MIDIDriverManager::set_event_callback(_process_midi); playback_timer = Glib::signal_timeout().connect(sigc::mem_fun(*this, &Interface::_playback_timer_callback), 10, Glib::PRIORITY_DEFAULT); add_editor_plugin_function(&create_default_editor_func); } Interface::~Interface() { for (int i = 0; i <
menu_items.size(); i++) { delete menu_items[i]; } for (int i = 0; i < racks.size(); i++) { delete racks[i].rack; delete racks[i].volume; } if (rack_filler) { delete rack_filler; } }

// ==== zytrax-master/gui/interface.h ====

#ifndef INTERFACE_H
#define INTERFACE_H
#include "engine/song.h"
#include "engine/song_file.h"
#include "gui/add_effect_dialog.h"
#include "gui/effect_editor.h"
#include "gui/master_vu.h"
#include "gui/orderlist_editor.h"
#include "gui/pattern_editor.h"
#include "gui/settings_dialog.h"
#include "gui/track_editor.h"
#include <gtkmm.h>
class Interface : public Gtk::ApplicationWindow { private: enum { FILE_NEW, FILE_OPEN, FILE_SAVE, FILE_SAVE_AS, FILE_QUIT, SETTINGS_CONFIG, SETTINGS_ABOUT };
//dear GTK, why all this for a simple combo?
class ModelColumns : public Gtk::TreeModelColumnRecord { public: ModelColumns() { add(name); add(index); } Gtk::TreeModelColumn<Glib::ustring> name; Gtk::TreeModelColumn<int> index; }; ModelColumns zoom_model_columns; Glib::RefPtr<Gtk::ListStore> zoomlist_store; Vector<Gtk::TreeModel::Row> zoom_rows; Gtk::Image prev_pattern_icon; Gtk::Button prev_pattern; Gtk::Image play_icon; Gtk::Button play; Gtk::Image stop_icon; Gtk::Button stop; Gtk::Image next_pattern_icon; Gtk::Button next_pattern;
//separator is broken in GTK, so using a Label :(
Gtk::Label sep1, sep2, sep3; Gtk::Image play_pattern_icon; Gtk::Button play_pattern; Gtk::Image play_cursor_icon; Gtk::Button play_cursor; Gtk::Image add_track_icon; Gtk::Button add_track; Gtk::Label spacer1, spacer2; Glib::RefPtr<Gio::Menu> menu; Glib::RefPtr<Gio::Menu> file_menu; Glib::RefPtr<Gio::Menu> file_menu_file; Glib::RefPtr<Gio::Menu> file_menu_export; Glib::RefPtr<Gio::Menu> file_menu_exit; Glib::RefPtr<Gio::Menu> play_menu; Glib::RefPtr<Gio::Menu> play_menu_play; Glib::RefPtr<Gio::Menu> play_menu_seek; Glib::RefPtr<Gio::Menu> play_menu_pattern; Glib::RefPtr<Gio::Menu> play_menu_extra; Glib::RefPtr<Gio::Menu> edit_menu; Glib::RefPtr<Gio::Menu> edit_menu_info; Glib::RefPtr<Gio::Menu> edit_menu_undo; Glib::RefPtr<Gio::Menu> edit_menu_focus; Glib::RefPtr<Gio::Menu> select_menu; Glib::RefPtr<Gio::Menu> select_menu_select; Glib::RefPtr<Gio::Menu>
select_menu_clipboard; Glib::RefPtr<Gio::Menu> select_menu_transpose; Glib::RefPtr<Gio::Menu> select_menu_operations; Glib::RefPtr<Gio::Menu> select_menu_length; Glib::RefPtr<Gio::Menu> settings_menu; Glib::RefPtr<Gio::Menu> settings_menu_preferences; Glib::RefPtr<Gio::Menu> settings_menu_cheat; Glib::RefPtr<Gio::Menu> settings_menu_about; AddEffectDialog add_effect_dialog; Gtk::MenuItem menu_item_file_open; Vector<Gtk::MenuItem *> menu_items; /* Boxes */ Gtk::Grid grid; Gtk::VBox main_vbox; Gtk::HBox play_hbox; Gtk::HBox pattern_hbox; Gtk::VBox pattern_vbox; Gtk::HBox main_hbox; /* Labels */ Gtk::Label pattern_label; Gtk::Label pattern_length_label; Gtk::Label octave_label; Gtk::Label step_label; Gtk::Label zoom_label; Gtk::CheckButton volume_mask; Gtk::Label tempo_label; Gtk::Label swing_label; /* Scrolls */ Gtk::VScrollbar pattern_vscroll; Gtk::HScrollbar pattern_hscroll; Gtk::VScrollbar orderlist_vscroll; Gtk::SpinButton pattern; Gtk::Image pattern_settings_icon; Gtk::Button pattern_settings; Gtk::SpinButton pattern_length; Gtk::Button pattern_length_set_next; Gtk::SpinButton octave; Gtk::SpinButton volume; Gtk::SpinButton tempo; Gtk::SpinButton swing; Gtk::SpinButton step; Gtk::ComboBox zoom; Gtk::VPaned main_split; Gtk::ScrolledWindow track_scroll; Gtk::HBox track_hbox; KeyBindings *key_bindings; /* Editors */ UndoRedo undo_redo; Song song; SongFile song_file; String song_path; int save_version; void _update_title(); Theme *theme; PatternEditor pattern_editor; OrderlistEditor orderlist_editor; AudioEffectFactory *fx_factory; struct TrackRacks { TrackRackVolume *volume; TrackRackEditor *rack; Gtk::VScrollbar *v_scroll; }; Vector<TrackRacks> racks; TrackRackFiller *rack_filler; SettingsDialog settings_dialog; /* Data */ void _add_track(); void _pattern_changed(); void _octave_changed(); void _step_changed(); void _volume_changed(); void _tempo_changed(); void _swing_changed(); void _zoom_changed(); bool updating_editors; void _update_editors(); Gtk::Application *application; void _on_application_startup(); void _on_action_activated(KeyBindings::KeyBind p_bind);
void _update_selected_track(); void _update_tracks(); void _ensure_selected_track_visible(); void _update_volume_mask(); void _update_octave(); void _update_pattern(); void _update_step(); void _update_zoom(); void _redraw_track_edits(); void _on_add_effect(int p_track); void _on_toggle_effect_skip(int p_track, int p_effect); void _on_toggle_send_mute(int p_track, int p_send); void _on_remove_effect(int p_track, int p_effect); void _on_remove_send(int p_track, int p_send); void _on_track_insert_send(int p_track, int p_to_track); void _on_track_send_amount_changed(int p_track, int p_send, float p_amount); void _on_track_swap_effects(int p_track, int p_effect, int p_with_effect); void _on_track_swap_sends(int p_track, int p_send, int p_with_send); void _on_effect_request_editor(int p_track, int p_effect); void _update_editor_automations_for_effect(AudioEffect *p_effect); void _on_toggle_automation_visibility(Track *p_track, AudioEffect *p_effect, int p_automation, bool p_visible); void _on_select_automation_command(Track *p_track, AudioEffect *p_effect, int p_automation, int p_command); void _on_track_volume_changed(int p_track, float p_volume_db); enum { MAX_EFFECT_EDITOR_PLUGINS = 1024 }; EffectEditorPluginFunc plugin_editor_create_functions[MAX_EFFECT_EDITOR_PLUGINS]; int plugin_editor_function_count; Map<AudioEffect *, EffectEditor *> active_effect_editors; void _erase_effect_editors_for_effect(AudioEffect *p_effect); Gtk::Popover pattern_settings_popover; Gtk::Grid pattern_settings_grid; Gtk::HSeparator pattern_settings_vsep; Gtk::SpinButton pattern_settings_length; Gtk::SpinButton bar_length; Gtk::SpinButton change_next; Gtk::ComboBox change_swing; Gtk::Label pattern_settings_length_label; Gtk::Label bar_length_label; Gtk::Label change_swing_label; Vector<Gtk::TreeModel::Row> swing_rows; Glib::RefPtr<Gtk::ListStore> change_swing_store; Vector<Gtk::TreeModel::Row> change_swing_rows; Gtk::Label change_next_label; Gtk::Button pattern_settings_change_button; void _on_pattern_settings_open(); void _on_pattern_settings_change(); void _update_colors();
static void _undo_redo_action(const String &p_name, void *p_userdata); bool _close_request(GdkEventAny *event); static void _process_audio(AudioFrame *p_frames, int p_amount); static void _process_midi(double p_delta, const MIDIEvent &p_event); void _update_song_process_order(); static Interface *singleton; void _update_song_mixing_parameters(); bool playback_cursor_follow; sigc::connection playback_timer; bool _playback_timer_callback(); MasterVU main_vu; void _on_main_volume_db_changed(float p_db); void _on_song_step_buffer_changed(); void _on_song_mix_rate_changed(); bool _export_dialog_key(GdkEvent *p_key); static void _export_dialog_callback(int p_order, void *p_userdata); Gtk::Label export_wav_label; String last_wav_export_path; bool _on_editor_window_gained_focus(GdkEventFocus *, Track *p_track); public: void add_editor_plugin_function(EffectEditorPluginFunc p_plugin); Interface(Gtk::Application *p_application, AudioEffectFactory *p_fx_factory, Theme *p_theme, KeyBindings *p_key_bindings); ~Interface(); };
#endif // INTERFACE_H

// ==== zytrax-master/gui/key_bindings.cpp ====

#include "key_bindings.h"
#include "error_macros.h"
bool KeyBindings::is_keybind(GdkEventKey *ev, KeyBind p_bind) const { guint state = ev->state & (GDK_CONTROL_MASK | GDK_SHIFT_MASK | GDK_MOD1_MASK | GDK_META_MASK); return (binds[p_bind].state == state && binds[p_bind].keyval == ev->keyval); } bool KeyBindings::is_keybind_noshift(GdkEventKey *ev, KeyBind p_bind) const { guint state = ev->state & (GDK_CONTROL_MASK | GDK_MOD1_MASK | GDK_META_MASK); return (binds[p_bind].state == state && binds[p_bind].keyval == ev->keyval); } const char *KeyBindings::bind_names[BIND_MAX] = { "FileNew", "FileOpen", "FileSave", "FileSaveAs", "FileExportWav", "FileQuit", "PlaybackPlay", "PlaybackStop", "PlaybackNextPattern", "PlaybackPrevPattern", "PlaybackPlayPattern", "PlaybackPlayFromCursor", "PlaybackPlayFromOrder", "PlaybackCursorFollow",
"EditUndo", "EditRedo", "EditSongSettings", "EditFocusPattern", "EditFocusOrderlist", "EditFocusLastEffect", "TrackAddTrack", "TrackAddColumn", "TrackRemoveColumn", "TrackAddCommandColumn", "TrackRemoveCommandColumn", "TrackMoveLeft", "TrackMoveRight", "TrackMute", "TrackSolo", "TrackRename", "TrackRemove", "AutomationRadioDiscreteRows", "AutomationRadioEnvelopeSmall", "AutomationRadioEnvelopeLarge", "AutomationMoveLeft", "AutomationMoveRight", "AutomationRemove", "SettingsOpen", "SettingsPatternInputKeys", "SettingsAbout", "CursorMoveUp", "CursorMoveDown", "CursorMoveUp1Row", "CursorMoveDown1Row", "CursorPageUp", "CursorPageDown", "CursorMoveLeft", "CursorMoveRight", "CursorTab", "CursorBacktab", "CursorHome", "CursorEnd", "CursorFieldClear", "CursorInsert", "CursorDelete", "CursorTrackInsert", "CursorTrackDelete", "CursorCopyVolumeMask", "CursorToggleVolumeMask", "CursorPlayNote", "CursorPlayRow", "CursorAdvance1", "CursorAdvance2", "CursorAdvance3", "CursorAdvance4", "CursorAdvance5", "CursorAdvance6", "CursorAdvance7", "CursorAdvance8", "CursorAdvance9", "CursorAdvance10", "CursorZoom1", "CursorZoom2", "CursorZoom3", "CursorZoom4", "CursorZoom6", "CursorZoom8", "CursorZoom12", "CursorZoom16", "CursorZoom24", "CursorZoom32", "PatternPanWindowUp", "PatternPanWindowDown", "PatternCursorNoteOff", "PatternOctaveLower", "PatternOctaveRaise", "PatternPrevPattern", "PatternNextPattern", "PatternSelectBegin", "PatternSelectEnd", "PatternSelectColumnTrackAll", "PatternSelectionRaiseNotesSemitone", "PatternSelectionRaiseNotesOctave", "PatternSelectionLowerNotesSemitone", "PatternSelectionLowerNotesOctave", "PatternSelectionSetVolume", "PatternSelectionInterpolateVolumeAutomation", "PatternSelectionAmplifyVolumeAutomation", "PatternSelectionCut", "PatternSelectionCopy", "PatternSelectionPasteInsert", "PatternSelectionPasteOverwrite", "PatternSelectionPasteMix", "PatternSelectionDisable", "PatternSelectionDoubleLength", "PatternSelectionHalveLength", 
	"PatternSelectionScaleLength",
	"PianoC0", "PianoCs0", "PianoD0", "PianoDs0", "PianoE0", "PianoF0",
	"PianoFs0", "PianoG0", "PianoGs0", "PianoA0", "PianoAs0", "PianoB0",
	"PianoC1", "PianoCs1", "PianoD1", "PianoDs1", "PianoE1", "PianoF1",
	"PianoFs1", "PianoG1", "PianoGs1", "PianoA1", "PianoAs1", "PianoB1",
	"PianoC2", "PianoCs2", "PianoD2", "PianoDs2", "PianoE2",
};

String KeyBindings::get_keybind_detailed_name(KeyBind p_bind) {
	return binds[p_bind].detailed_name;
}

String KeyBindings::get_keybind_text(KeyBind p_bind) {
	Gtk::AccelKey accel(binds[p_bind].keyval, Gdk::ModifierType(binds[p_bind].state));
	return accel.get_abbrev().c_str();
}

String KeyBindings::get_keybind_action_name(KeyBind p_bind) {
	if (binds[p_bind].mode == KeyState::MODE_RADIO) {
		return bind_names[binds[p_bind].radio_base];
	} else {
		return bind_names[p_bind];
	}
}

const char *KeyBindings::get_keybind_name(KeyBind p_bind) {
	return bind_names[p_bind];
}

int KeyBindings::get_keybind_key(KeyBind p_bind) {
	return binds[p_bind].keyval;
}

int KeyBindings::get_keybind_mod(KeyBind p_bind) {
	return binds[p_bind].state;
}

void KeyBindings::_add_keybind(KeyBind p_bind, KeyState p_state) {
	binds[p_bind] = p_state;
}

void KeyBindings::initialize(Gtk::Application *p_application, Gtk::ApplicationWindow *p_window) {
	initialized = true;
	application = p_application;
	window = p_window;

	for (int i = 0; i < BIND_MAX; i++) {
		if (!binds[i].shortcut) {
			continue;
		}

		KeyState &state = binds[i];

		//create action
		if (state.mode == KeyState::MODE_TOGGLE) {
			actions[i] = window->add_action_bool(bind_names[i], sigc::bind(sigc::mem_fun(*this, &KeyBindings::_on_action), KeyBind(i)));
		} else if (state.mode == KeyState::MODE_RADIO) {
			if (i == state.radio_base) { //only add if it's the base one
				actions[i] = window->add_action_radio_string(bind_names[i], sigc::mem_fun(*this, &KeyBindings::_on_action_string), bind_names[i]);
			} else {
				actions[i] = actions[state.radio_base];
			}
		} else {
			actions[i] = window->add_action(bind_names[i],
					sigc::bind(sigc::mem_fun(*this, &KeyBindings::_on_action), KeyBind(i)));
		}

		//create accelerator
		if (state.mode == KeyState::MODE_RADIO) {
			gchar *text = gtk_accelerator_name(state.keyval, GdkModifierType(state.state));
			GVariant *v = g_variant_new_string(bind_names[i]);
			gchar *detailed_name = g_action_print_detailed_name(bind_names[state.radio_base], v);
			g_variant_unref(v);
			String dname = String("win.") + detailed_name;
			free(detailed_name);
			if (state.keyval > 0) {
				application->set_accel_for_action(dname.ascii().get_data(), text);
			}
			binds[i].detailed_name = dname;
			free(text);
		} else {
			String dname = "win." + String(bind_names[i]);
			gchar *text = gtk_accelerator_name(state.keyval, GdkModifierType(state.state));
			if (state.keyval > 0) {
				application->set_accel_for_action(dname.ascii().get_data(), text);
			}
			binds[i].detailed_name = dname;
			free(text);
		}
	}
}

void KeyBindings::set_action_enabled(KeyBind p_bind, bool p_enabled) {
	if (actions[p_bind].operator->()) {
		actions[p_bind]->set_enabled(p_enabled);
	}
}

void KeyBindings::set_action_checked(KeyBind p_bind, bool p_checked) {
	actions[p_bind]->set_state(Glib::Variant<bool>::create(p_checked));
}

void KeyBindings::set_action_state(KeyBind p_bind, const String &p_state) {
	int idx;
	if (binds[p_bind].mode == KeyState::MODE_RADIO) {
		idx = binds[p_bind].radio_base;
	} else {
		idx = p_bind;
	}
	actions[idx]->set_state(Glib::Variant<Glib::ustring>::create(p_state.ascii().get_data()));
}

Glib::RefPtr<Gio::SimpleAction> KeyBindings::get_keybind_action(KeyBind p_bind) {
	return actions[p_bind];
}

void KeyBindings::set_keybind(KeyBind p_bind, guint p_keyval, guint p_state) {
	ERR_FAIL_INDEX(p_bind, BIND_MAX);
	binds[p_bind].keyval = p_keyval;
	binds[p_bind].state = p_state;
	if (!initialized || !binds[p_bind].shortcut) {
		return;
	}
	//unset existing
	application->unset_accels_for_action(binds[p_bind].detailed_name.ascii().get_data());
	if (p_keyval == 0) {
		return;
	}
	//set new
	gchar *text = gtk_accelerator_name(p_keyval, GdkModifierType(p_state));
	application->set_accel_for_action(binds[p_bind].detailed_name.ascii().get_data(), text);
	free(text);
}

void KeyBindings::reset_keybind(KeyBind p_bind) {
	ERR_FAIL_INDEX(p_bind, BIND_MAX);
	set_keybind(p_bind, binds[p_bind].initial_keyval, binds[p_bind].initial_state);
}

void KeyBindings::clear_keybind(KeyBind p_bind) {
	ERR_FAIL_INDEX(p_bind, BIND_MAX);
	set_keybind(p_bind, 0, 0);
}

void KeyBindings::_on_action(KeyBind p_bind) {
	action_activated.emit(p_bind);
}

void KeyBindings::_on_action_string(Glib::ustring p_string) {
	for (int i = 0; i < BIND_MAX; i++) {
		if (p_string == bind_names[i]) {
			_on_action(KeyBind(i));
		}
	}
}

KeyBindings::KeyBindings() {
	initialized = false;

	_add_keybind(FILE_NEW, KeyState(GDK_KEY_n, GDK_CONTROL_MASK, true));
	_add_keybind(FILE_OPEN, KeyState(GDK_KEY_o, GDK_CONTROL_MASK, true));
	_add_keybind(FILE_SAVE, KeyState(GDK_KEY_s, GDK_CONTROL_MASK, true));
	_add_keybind(FILE_SAVE_AS, KeyState(0, 0, true));
	_add_keybind(FILE_EXPORT_WAV, KeyState(0, 0, true));
	_add_keybind(FILE_QUIT, KeyState(GDK_KEY_q, GDK_CONTROL_MASK, true));

	_add_keybind(PLAYBACK_PLAY, KeyState(GDK_KEY_F5, 0, true));
	_add_keybind(PLAYBACK_STOP, KeyState(GDK_KEY_F8, 0, true));
	_add_keybind(PLAYBACK_NEXT_PATTERN, KeyState(GDK_KEY_Down, GDK_CONTROL_MASK | GDK_SHIFT_MASK | GDK_MOD1_MASK, true));
	_add_keybind(PLAYBACK_PREV_PATTERN, KeyState(GDK_KEY_Up, GDK_CONTROL_MASK | GDK_SHIFT_MASK | GDK_MOD1_MASK, true));
	_add_keybind(PLAYBACK_PLAY_PATTERN, KeyState(GDK_KEY_F6, 0, true));
	_add_keybind(PLAYBACK_PLAY_FROM_CURSOR, KeyState(GDK_KEY_F7, 0, true));
	_add_keybind(PLAYBACK_PLAY_FROM_ORDER, KeyState(GDK_KEY_F6, GDK_SHIFT_MASK, true));
	_add_keybind(PLAYBACK_CURSOR_FOLLOW, KeyState(GDK_KEY_F4, 0, true, KeyState::MODE_TOGGLE));

	_add_keybind(EDIT_UNDO, KeyState(GDK_KEY_z, GDK_CONTROL_MASK, true));
	_add_keybind(EDIT_REDO, KeyState(GDK_KEY_z, GDK_SHIFT_MASK | GDK_CONTROL_MASK, true));
	_add_keybind(EDIT_SONG_INFO, KeyState(GDK_KEY_i, GDK_SHIFT_MASK | GDK_CONTROL_MASK, true));
	_add_keybind(EDIT_FOCUS_PATTERN, KeyState(GDK_KEY_F2, 0, true));
	_add_keybind(EDIT_FOCUS_ORDERLIST, KeyState(GDK_KEY_F11, 0, true));
	_add_keybind(EDIT_FOCUS_LAST_EDITED_EFFECT, KeyState(GDK_KEY_F3, 0, true));

	_add_keybind(TRACK_ADD_TRACK, KeyState(GDK_KEY_a, GDK_CONTROL_MASK, true));
	_add_keybind(TRACK_ADD_COLUMN, KeyState(GDK_KEY_bracketright, GDK_MOD1_MASK, true));
	_add_keybind(TRACK_REMOVE_COLUMN, KeyState(GDK_KEY_bracketleft, GDK_MOD1_MASK, true));
	_add_keybind(TRACK_ADD_COMMAND_COLUMN, KeyState(GDK_KEY_bracketright, GDK_MOD1_MASK | GDK_CONTROL_MASK, true));
	_add_keybind(TRACK_REMOVE_COMMAND_COLUMN, KeyState(GDK_KEY_bracketleft, GDK_MOD1_MASK | GDK_CONTROL_MASK, true));
	_add_keybind(TRACK_MOVE_LEFT, KeyState(GDK_KEY_Left, GDK_SHIFT_MASK | GDK_CONTROL_MASK, true));
	_add_keybind(TRACK_MOVE_RIGHT, KeyState(GDK_KEY_Right, GDK_SHIFT_MASK | GDK_CONTROL_MASK, true));
	_add_keybind(TRACK_MUTE, KeyState(GDK_KEY_F9, 0, true, KeyState::MODE_TOGGLE));
	_add_keybind(TRACK_SOLO, KeyState(GDK_KEY_F10, 0, true));
	_add_keybind(TRACK_RENAME, KeyState(GDK_KEY_r, GDK_SHIFT_MASK | GDK_CONTROL_MASK, true));
	_add_keybind(TRACK_REMOVE, KeyState(GDK_KEY_x, GDK_SHIFT_MASK | GDK_CONTROL_MASK, true));

	_add_keybind(AUTOMATION_RADIO_DISCRETE_ROWS, KeyState(GDK_KEY_n, GDK_CONTROL_MASK | GDK_SHIFT_MASK, true, KeyState::MODE_RADIO, AUTOMATION_RADIO_DISCRETE_ROWS));
	_add_keybind(AUTOMATION_RADIO_ENVELOPE_SMALL, KeyState(GDK_KEY_s, GDK_CONTROL_MASK | GDK_SHIFT_MASK, true, KeyState::MODE_RADIO, AUTOMATION_RADIO_DISCRETE_ROWS));
	_add_keybind(AUTOMATION_RADIO_ENVELOPE_LARGE, KeyState(GDK_KEY_l, GDK_CONTROL_MASK | GDK_SHIFT_MASK, true, KeyState::MODE_RADIO, AUTOMATION_RADIO_DISCRETE_ROWS));
	_add_keybind(AUTOMATION_MOVE_LEFT, KeyState(GDK_KEY_Left, GDK_MOD1_MASK | GDK_CONTROL_MASK | GDK_SHIFT_MASK, true));
	_add_keybind(AUTOMATION_MOVE_RIGHT, KeyState(GDK_KEY_Right, GDK_MOD1_MASK | GDK_CONTROL_MASK | GDK_SHIFT_MASK, true));
	_add_keybind(AUTOMATION_REMOVE, KeyState(GDK_KEY_x, GDK_MOD1_MASK |
			GDK_CONTROL_MASK | GDK_SHIFT_MASK, true));

	_add_keybind(SETTINGS_OPEN, KeyState(GDK_KEY_s, GDK_CONTROL_MASK | GDK_SHIFT_MASK, true));
	_add_keybind(SETTINGS_PATTERN_INPUT_KEYS, KeyState(GDK_KEY_p, GDK_CONTROL_MASK | GDK_SHIFT_MASK, true));
	_add_keybind(SETTINGS_ABOUT, KeyState(0, 0, true));

	_add_keybind(CURSOR_MOVE_UP, KeyState(GDK_KEY_Up));
	_add_keybind(CURSOR_MOVE_DOWN, KeyState(GDK_KEY_Down));
	_add_keybind(CURSOR_MOVE_UP_1_ROW, KeyState(GDK_KEY_Up, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_MOVE_DOWN_1_ROW, KeyState(GDK_KEY_Down, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_PAGE_UP, KeyState(GDK_KEY_Page_Up));
	_add_keybind(CURSOR_PAGE_DOWN, KeyState(GDK_KEY_Page_Down));
	_add_keybind(CURSOR_MOVE_LEFT, KeyState(GDK_KEY_Left));
	_add_keybind(CURSOR_MOVE_RIGHT, KeyState(GDK_KEY_Right));
	_add_keybind(CURSOR_TAB, KeyState(GDK_KEY_Tab));
	_add_keybind(CURSOR_BACKTAB, KeyState(GDK_KEY_ISO_Left_Tab, GDK_SHIFT_MASK));
	_add_keybind(CURSOR_HOME, KeyState(GDK_KEY_Home));
	_add_keybind(CURSOR_END, KeyState(GDK_KEY_End));
	_add_keybind(CURSOR_FIELD_CLEAR, KeyState(GDK_KEY_period));
	_add_keybind(CURSOR_INSERT, KeyState(GDK_KEY_Insert));
	_add_keybind(CURSOR_DELETE, KeyState(GDK_KEY_Delete));
	_add_keybind(CURSOR_TRACK_INSERT, KeyState(GDK_KEY_Insert, GDK_SHIFT_MASK));
	_add_keybind(CURSOR_TRACK_DELETE, KeyState(GDK_KEY_Delete, GDK_SHIFT_MASK));
	_add_keybind(CURSOR_COPY_VOLUME_MASK, KeyState(GDK_KEY_Return));
	_add_keybind(CURSOR_TOGGLE_VOLUME_MASK, KeyState(GDK_KEY_comma));
	_add_keybind(CURSOR_PLAY_NOTE, KeyState(GDK_KEY_4));
	_add_keybind(CURSOR_PLAY_ROW, KeyState(GDK_KEY_8));

	_add_keybind(CURSOR_ADVANCE_1, KeyState(GDK_KEY_1, GDK_MOD1_MASK));
	_add_keybind(CURSOR_ADVANCE_2, KeyState(GDK_KEY_2, GDK_MOD1_MASK));
	_add_keybind(CURSOR_ADVANCE_3, KeyState(GDK_KEY_3, GDK_MOD1_MASK));
	_add_keybind(CURSOR_ADVANCE_4, KeyState(GDK_KEY_4, GDK_MOD1_MASK));
	_add_keybind(CURSOR_ADVANCE_5, KeyState(GDK_KEY_5, GDK_MOD1_MASK));
	_add_keybind(CURSOR_ADVANCE_6, KeyState(GDK_KEY_6, GDK_MOD1_MASK));
	_add_keybind(CURSOR_ADVANCE_7, KeyState(GDK_KEY_7, GDK_MOD1_MASK));
	_add_keybind(CURSOR_ADVANCE_8, KeyState(GDK_KEY_8, GDK_MOD1_MASK));
	_add_keybind(CURSOR_ADVANCE_9, KeyState(GDK_KEY_9, GDK_MOD1_MASK));
	_add_keybind(CURSOR_ADVANCE_10, KeyState(GDK_KEY_0, GDK_MOD1_MASK));

	_add_keybind(CURSOR_ZOOM_1, KeyState(GDK_KEY_1, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_ZOOM_2, KeyState(GDK_KEY_2, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_ZOOM_3, KeyState(GDK_KEY_3, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_ZOOM_4, KeyState(GDK_KEY_4, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_ZOOM_6, KeyState(GDK_KEY_5, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_ZOOM_8, KeyState(GDK_KEY_6, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_ZOOM_12, KeyState(GDK_KEY_7, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_ZOOM_16, KeyState(GDK_KEY_8, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_ZOOM_24, KeyState(GDK_KEY_9, GDK_CONTROL_MASK));
	_add_keybind(CURSOR_ZOOM_32, KeyState(GDK_KEY_0, GDK_CONTROL_MASK));

	_add_keybind(PATTERN_PAN_WINDOW_UP, KeyState(GDK_KEY_Up, GDK_MOD1_MASK));
	_add_keybind(PATTERN_PAN_WINDOW_DOWN, KeyState(GDK_KEY_Left, GDK_MOD1_MASK));
	_add_keybind(PATTERN_CURSOR_NOTE_OFF, KeyState(GDK_KEY_grave));
	_add_keybind(PATTERN_OCTAVE_LOWER, KeyState(GDK_KEY_minus));
	_add_keybind(PATTERN_OCTAVE_RAISE, KeyState(GDK_KEY_equal));
	_add_keybind(PATTERN_PREV_PATTERN, KeyState(GDK_KEY_bracketleft));
	_add_keybind(PATTERN_NEXT_PATTERN, KeyState(GDK_KEY_bracketright));

	_add_keybind(PATTERN_SELECT_BEGIN, KeyState(GDK_KEY_b, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECT_END, KeyState(GDK_KEY_e, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECT_COLUMN_TRACK_ALL, KeyState(GDK_KEY_l, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_RAISE_NOTES_SEMITONE, KeyState(GDK_KEY_q, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_RAISE_NOTES_OCTAVE, KeyState(GDK_KEY_q, GDK_MOD1_MASK | GDK_SHIFT_MASK, true));
	_add_keybind(PATTERN_SELECTION_LOWER_NOTES_SEMITONE, KeyState(GDK_KEY_a, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_LOWER_NOTES_OCTAVE, KeyState(GDK_KEY_a, GDK_MOD1_MASK | GDK_SHIFT_MASK, true));
	_add_keybind(PATTERN_SELECTION_SET_VOLUME, KeyState(GDK_KEY_v, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_INTERPOLATE_VOLUME_AUTOMATION, KeyState(GDK_KEY_k, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_AMPLIFY_VOLUME_AUTOMATION, KeyState(GDK_KEY_j, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_CUT, KeyState(GDK_KEY_z, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_COPY, KeyState(GDK_KEY_c, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_PASTE_INSERT, KeyState(GDK_KEY_p, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_PASTE_OVERWRITE, KeyState(GDK_KEY_o, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_PASTE_MIX, KeyState(GDK_KEY_m, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_DISABLE, KeyState(GDK_KEY_u, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_DOUBLE_LENGTH, KeyState(GDK_KEY_f, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_HALVE_LENGTH, KeyState(GDK_KEY_g, GDK_MOD1_MASK, true));
	_add_keybind(PATTERN_SELECTION_SCALE_LENGTH, KeyState(GDK_KEY_t, GDK_MOD1_MASK, true));

	_add_keybind(PIANO_C0, KeyState(GDK_KEY_z));
	_add_keybind(PIANO_Cs0, KeyState(GDK_KEY_s));
	_add_keybind(PIANO_D0, KeyState(GDK_KEY_x));
	_add_keybind(PIANO_Ds0, KeyState(GDK_KEY_d));
	_add_keybind(PIANO_E0, KeyState(GDK_KEY_c));
	_add_keybind(PIANO_F0, KeyState(GDK_KEY_v));
	_add_keybind(PIANO_Fs0, KeyState(GDK_KEY_g));
	_add_keybind(PIANO_G0, KeyState(GDK_KEY_b));
	_add_keybind(PIANO_Gs0, KeyState(GDK_KEY_h));
	_add_keybind(PIANO_A0, KeyState(GDK_KEY_n));
	_add_keybind(PIANO_As0, KeyState(GDK_KEY_j));
	_add_keybind(PIANO_B0, KeyState(GDK_KEY_m));
	_add_keybind(PIANO_C1, KeyState(GDK_KEY_q));
	_add_keybind(PIANO_Cs1, KeyState(GDK_KEY_2));
	_add_keybind(PIANO_D1, KeyState(GDK_KEY_w));
	_add_keybind(PIANO_Ds1, KeyState(GDK_KEY_3));
	_add_keybind(PIANO_E1, KeyState(GDK_KEY_e));
	_add_keybind(PIANO_F1, KeyState(GDK_KEY_r));
	_add_keybind(PIANO_Fs1, KeyState(GDK_KEY_5));
	_add_keybind(PIANO_G1, KeyState(GDK_KEY_t));
	_add_keybind(PIANO_Gs1, KeyState(GDK_KEY_6));
	_add_keybind(PIANO_A1, KeyState(GDK_KEY_y));
	_add_keybind(PIANO_As1, KeyState(GDK_KEY_7));
	_add_keybind(PIANO_B1, KeyState(GDK_KEY_u));
	_add_keybind(PIANO_C2, KeyState(GDK_KEY_i));
	_add_keybind(PIANO_Cs2, KeyState(GDK_KEY_9));
	_add_keybind(PIANO_D2, KeyState(GDK_KEY_o));
	_add_keybind(PIANO_Ds2, KeyState(GDK_KEY_0));
	_add_keybind(PIANO_E2, KeyState(GDK_KEY_p));
}

// ==== zytrax-master/gui/key_bindings.h ====

#ifndef KEY_BINDINGS_H
#define KEY_BINDINGS_H

#include "rstring.h"
#include <gtkmm.h> // assumption: the original angle-bracket include was lost in extraction

class KeyBindings {
public:
	enum KeyBind {
		FILE_NEW, FILE_OPEN, FILE_SAVE, FILE_SAVE_AS, FILE_EXPORT_WAV, FILE_QUIT,
		PLAYBACK_PLAY, PLAYBACK_STOP, PLAYBACK_NEXT_PATTERN, PLAYBACK_PREV_PATTERN,
		PLAYBACK_PLAY_PATTERN, PLAYBACK_PLAY_FROM_CURSOR, PLAYBACK_PLAY_FROM_ORDER, PLAYBACK_CURSOR_FOLLOW,
		EDIT_UNDO, EDIT_REDO, EDIT_SONG_INFO,
		EDIT_FOCUS_PATTERN, EDIT_FOCUS_ORDERLIST, EDIT_FOCUS_LAST_EDITED_EFFECT,
		TRACK_ADD_TRACK, TRACK_ADD_COLUMN, TRACK_REMOVE_COLUMN, TRACK_ADD_COMMAND_COLUMN, TRACK_REMOVE_COMMAND_COLUMN,
		TRACK_MOVE_LEFT, TRACK_MOVE_RIGHT, TRACK_MUTE, TRACK_SOLO, TRACK_RENAME, TRACK_REMOVE,
		AUTOMATION_RADIO_DISCRETE_ROWS, AUTOMATION_RADIO_ENVELOPE_SMALL, AUTOMATION_RADIO_ENVELOPE_LARGE,
		AUTOMATION_MOVE_LEFT, AUTOMATION_MOVE_RIGHT, AUTOMATION_REMOVE,
		SETTINGS_OPEN, SETTINGS_PATTERN_INPUT_KEYS, SETTINGS_ABOUT,
		CURSOR_MOVE_UP, CURSOR_MOVE_DOWN, CURSOR_MOVE_UP_1_ROW, CURSOR_MOVE_DOWN_1_ROW,
		CURSOR_PAGE_UP, CURSOR_PAGE_DOWN, CURSOR_MOVE_LEFT, CURSOR_MOVE_RIGHT,
		CURSOR_TAB, CURSOR_BACKTAB, CURSOR_HOME, CURSOR_END,
		CURSOR_FIELD_CLEAR, CURSOR_INSERT, CURSOR_DELETE, CURSOR_TRACK_INSERT, CURSOR_TRACK_DELETE,
		CURSOR_COPY_VOLUME_MASK, CURSOR_TOGGLE_VOLUME_MASK, CURSOR_PLAY_NOTE, CURSOR_PLAY_ROW,
		CURSOR_ADVANCE_1, CURSOR_ADVANCE_2, CURSOR_ADVANCE_3, CURSOR_ADVANCE_4, CURSOR_ADVANCE_5,
		CURSOR_ADVANCE_6, CURSOR_ADVANCE_7, CURSOR_ADVANCE_8, CURSOR_ADVANCE_9, CURSOR_ADVANCE_10,
		CURSOR_ZOOM_1, CURSOR_ZOOM_2, CURSOR_ZOOM_3, CURSOR_ZOOM_4, CURSOR_ZOOM_6,
		CURSOR_ZOOM_8, CURSOR_ZOOM_12, CURSOR_ZOOM_16, CURSOR_ZOOM_24, CURSOR_ZOOM_32,
		PATTERN_PAN_WINDOW_UP, PATTERN_PAN_WINDOW_DOWN, PATTERN_CURSOR_NOTE_OFF,
		PATTERN_OCTAVE_RAISE, PATTERN_OCTAVE_LOWER, PATTERN_PREV_PATTERN, PATTERN_NEXT_PATTERN,
		PATTERN_SELECT_BEGIN, PATTERN_SELECT_END, PATTERN_SELECT_COLUMN_TRACK_ALL,
		PATTERN_SELECTION_RAISE_NOTES_SEMITONE, PATTERN_SELECTION_RAISE_NOTES_OCTAVE,
		PATTERN_SELECTION_LOWER_NOTES_SEMITONE, PATTERN_SELECTION_LOWER_NOTES_OCTAVE,
		PATTERN_SELECTION_SET_VOLUME, PATTERN_SELECTION_INTERPOLATE_VOLUME_AUTOMATION, PATTERN_SELECTION_AMPLIFY_VOLUME_AUTOMATION,
		PATTERN_SELECTION_CUT, PATTERN_SELECTION_COPY,
		PATTERN_SELECTION_PASTE_INSERT, PATTERN_SELECTION_PASTE_OVERWRITE, PATTERN_SELECTION_PASTE_MIX,
		PATTERN_SELECTION_DISABLE, PATTERN_SELECTION_DOUBLE_LENGTH, PATTERN_SELECTION_HALVE_LENGTH, PATTERN_SELECTION_SCALE_LENGTH,
		PIANO_C0, PIANO_Cs0, PIANO_D0, PIANO_Ds0, PIANO_E0, PIANO_F0,
		PIANO_Fs0, PIANO_G0, PIANO_Gs0, PIANO_A0, PIANO_As0, PIANO_B0,
		PIANO_C1, PIANO_Cs1, PIANO_D1, PIANO_Ds1, PIANO_E1, PIANO_F1,
		PIANO_Fs1, PIANO_G1, PIANO_Gs1, PIANO_A1, PIANO_As1, PIANO_B1,
		PIANO_C2, PIANO_Cs2, PIANO_D2, PIANO_Ds2, PIANO_E2,
		BIND_MAX
	};

private:
	Gtk::Application *application;
	Gtk::ApplicationWindow *window;

	struct KeyState {
		enum Mode {
			MODE_NORMAL,
			MODE_TOGGLE,
			MODE_RADIO,
		};

		guint keyval;
		guint state;
		bool shortcut;
		Mode mode;
		int radio_base;
		String detailed_name; //cache
		guint initial_keyval;
		guint initial_state;

		KeyState(guint p_keyval = 0, guint p_state = 0, bool p_shortcut = false, Mode p_mode = MODE_NORMAL, int p_radio_base = 0) {
			keyval = p_keyval;
			state = p_state;
			shortcut = p_shortcut;
			mode = p_mode;
			radio_base = p_radio_base;
			initial_keyval = p_keyval;
			initial_state = p_state;
		}
	};

	static const char *bind_names[BIND_MAX];
	KeyState binds[BIND_MAX];
	Glib::RefPtr<Gio::SimpleAction>
	actions[BIND_MAX];

	void _add_keybind(KeyBind p_bind, KeyState p_state);
	void _on_action(KeyBind p_bind);
	void _on_action_string(Glib::ustring p_string);

	bool initialized;

public:
	//done this way so each UI can capture whatever action it wants
	sigc::signal1<void, KeyBind> action_activated;

	String get_keybind_detailed_name(KeyBind p_bind);
	String get_keybind_action_name(KeyBind p_bind);
	String get_keybind_text(KeyBind p_bind);
	const char *get_keybind_name(KeyBind p_bind);
	int get_keybind_key(KeyBind p_bind);
	int get_keybind_mod(KeyBind p_bind);
	Glib::RefPtr<Gio::SimpleAction> get_keybind_action(KeyBind p_bind);

	void set_action_enabled(KeyBind p_bind, bool p_enabled);
	void set_action_checked(KeyBind p_bind, bool p_checked);
	void set_action_state(KeyBind p_bind, const String &p_state);

	bool is_keybind(GdkEventKey *ev, KeyBind p_bind) const;
	bool is_keybind_noshift(GdkEventKey *ev, KeyBind p_bind) const;

	void set_keybind(KeyBind p_bind, guint p_keyval, guint p_state);
	void reset_keybind(KeyBind p_bind);
	void clear_keybind(KeyBind p_bind);

	void initialize(Gtk::Application *p_application, Gtk::ApplicationWindow *p_window);

	KeyBindings();
};

#endif // KEY_BINDINGS_H

// ==== zytrax-master/gui/master_vu.cpp ====

#include "master_vu.h"

void MasterVU::_mouse_button_event(GdkEventButton *event, bool p_press) {
	if (event->button == 1) {
		if (p_press && event->x >= grabber_x && event->x < grabber_x + grabber_w && event->y >= grabber_y && event->y < grabber_y + grabber_h) {
			grabbing_x = event->x;
			grabbing_db = song->get_main_volume_db();
			grabbing = true;
			queue_draw();
		}
		if (!p_press) {
			grabbing = false;
			queue_draw();
		}
	}
}

bool MasterVU::on_button_press_event(GdkEventButton *event) {
	grab_focus();
	_mouse_button_event(event, true);
	return false;
}

bool MasterVU::on_button_release_event(GdkEventButton *release_event) {
	_mouse_button_event(release_event, false);
	return false;
}

bool MasterVU::on_motion_notify_event(GdkEventMotion *motion_event) {
	if
	(grabbing) {
		float new_db = grabbing_db + (motion_event->x - grabbing_x) * (TRACK_MAX_DB - TRACK_MIN_DB) / vu_w;
		new_db = CLAMP(new_db, TRACK_MIN_DB, TRACK_MAX_DB);
		main_volume_db_changed.emit(new_db);
		queue_draw();
	}
	return false;
}

bool MasterVU::on_key_press_event(GdkEventKey *key_event) {
	return true;
}

bool MasterVU::on_key_release_event(GdkEventKey *key_event) {
	return false;
}

Gtk::SizeRequestMode MasterVU::get_request_mode_vfunc() const {
	// Accept the default value supplied by the base class.
	return Gtk::Widget::get_request_mode_vfunc();
}

// Discover the total amount of minimum space and natural space needed by
// this widget.
// Let's make this simple example widget always need minimum 60 by 50 and
// natural 100 by 70.
void MasterVU::get_preferred_width_vfunc(int &minimum_width, int &natural_width) const {
	minimum_width = min_width;
	natural_width = min_width;
}

void MasterVU::get_preferred_height_for_width_vfunc(int /* width */, int &minimum_height, int &natural_height) const {
	minimum_height = min_height;
	natural_height = min_height;
}

void MasterVU::get_preferred_height_vfunc(int &minimum_height, int &natural_height) const {
	minimum_height = min_height;
	natural_height = min_height;
}

void MasterVU::get_preferred_width_for_height_vfunc(int /* height */, int &minimum_width, int &natural_width) const {
	minimum_width = min_width;
	natural_width = min_width;
}

void MasterVU::on_size_allocate(Gtk::Allocation &allocation) {
	// Do something with the space that we have actually been given:
	//(We will not be given heights or widths less than we have requested, though
	// we might get more)

	// Use the offered allocation for this container:
	set_allocation(allocation);

	if (m_refGdkWindow) {
		m_refGdkWindow->move_resize(allocation.get_x(), allocation.get_y(), allocation.get_width(), allocation.get_height());
	}
}

void MasterVU::on_map() {
	// Call base class:
	Gtk::Widget::on_map();
}

void MasterVU::on_unmap() {
	// Call base class:
	Gtk::Widget::on_unmap();
}

void
MasterVU::on_realize() {
	// Do not call base class Gtk::Widget::on_realize().
	// It's intended only for widgets that set_has_window(false).
	set_realized();

	if (!m_refGdkWindow) {
		// Create the GdkWindow:
		GdkWindowAttr attributes;
		memset(&attributes, 0, sizeof(attributes));

		Gtk::Allocation allocation = get_allocation();

		// Set initial position and size of the Gdk::Window:
		attributes.x = allocation.get_x();
		attributes.y = allocation.get_y();
		attributes.width = allocation.get_width();
		attributes.height = allocation.get_height();

		attributes.event_mask = get_events() | Gdk::EXPOSURE_MASK | Gdk::BUTTON_PRESS_MASK | Gdk::BUTTON_RELEASE_MASK | Gdk::BUTTON1_MOTION_MASK | Gdk::KEY_PRESS_MASK | Gdk::KEY_RELEASE_MASK;
		attributes.window_type = GDK_WINDOW_CHILD;
		attributes.wclass = GDK_INPUT_OUTPUT;

		m_refGdkWindow = Gdk::Window::create(get_parent_window(), &attributes, GDK_WA_X | GDK_WA_Y);
		set_window(m_refGdkWindow);

		// make the widget receive expose events
		m_refGdkWindow->set_user_data(gobj());
	}
}

void MasterVU::on_unrealize() {
	m_refGdkWindow.reset();

	// Call base class:
	Gtk::Widget::on_unrealize();
}

void MasterVU::_draw_text(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down) {
	Gdk::Cairo::set_source_rgba(cr, p_color);
	cr->move_to(x, y);
	if (p_down)
		cr->rotate_degrees(90);
	cr->show_text(p_text.utf8().get_data());
	if (p_down)
		cr->rotate_degrees(-90);
	cr->move_to(0, 0);
	cr->stroke();
}

int MasterVU::_get_text_width(const Cairo::RefPtr<Cairo::Context> &cr, const String &p_text) const {
	Cairo::TextExtents te;
	cr->get_text_extents(p_text.utf8().get_data(), te);
	return te.width;
}

void MasterVU::_draw_fill_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) {
	Gdk::Cairo::set_source_rgba(cr, p_color);
	cr->rectangle(x, y, w, h);
	cr->fill();
	cr->stroke();
}

void MasterVU::_draw_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) {
	Gdk::Cairo::set_source_rgba(cr, p_color);
	cr->rectangle(x, y, w, h);
	cr->stroke();
}

void MasterVU::_draw_arrow(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) {
	Gdk::Cairo::set_source_rgba(cr, p_color);
	cr->move_to(x + w / 4, y + h / 4);
	cr->line_to(x + w * 3 / 4, y + h / 4);
	cr->line_to(x + w / 2, y + h * 3 / 4);
	cr->line_to(x + w / 4, y + h / 4);
	cr->fill();
	cr->stroke();
}

bool MasterVU::on_draw(const Cairo::RefPtr<Cairo::Context> &cr) {
	const Gtk::Allocation allocation = get_allocation();

	int w = allocation.get_width();
	int h = allocation.get_height();

#if 0
	{ //update min width
		theme->select_font_face(cr);

		Cairo::FontExtents fe;
		cr->get_font_extents(fe);

		Cairo::TextExtents te;
		cr->get_text_extents("XXX", te);
		int fw = te.width;
		cr->get_text_extents("XX", te);
		fw -= te.width;

		int new_width = fw;
		int new_height = fe.height;

		if (new_width != min_width || new_height != min_height) {
			min_width = new_width;
			min_height = new_height;
			char_width = fw;
			font_height = fe.height;
			font_ascent = fe.ascent;
			queue_resize();
			/*Gtk::Widget *w = this;
			while (w) {
				w->queue_resize();
				w = w->get_parent();
			}*/
		}
	}
#endif

	int grabber_width = h / 2;

	Gdk::Cairo::set_source_rgba(cr, Theme::make_rgba(0, 0, 0));
	cr->rectangle(0, 0, w, h);
	cr->fill();

	vu_x = grabber_width / 2;
	vu_y = 0;
	vu_w = w - grabber_width;
	vu_h = h;

	Gdk::RGBA rgba;
	rgba.set_alpha(1);
	cr->set_line_width(1);

	for (int i = 0; i < vu_w; i += 2) {
		float db = TRACK_MAX_DB - float(i) * (TRACK_MAX_DB - TRACK_MIN_DB) / vu_w;
		float r = 0, g = 0, b = 0;
		if (db > 0) {
			r = 1.0;
			g = 1.0 - db / TRACK_MAX_DB;
		} else {
			r = 1.0 - db / TRACK_MIN_DB;
			g = 1.0;
		}

		float lr = r;
		float lg = g;
		float lb = b;
		float rr = r;
		float rg = g;
		float rb = b;

		{
			if (db > peak_db_l) {
				lr *= 0.3;
				lg *= 0.3;
				lb *= 0.3;
			}
			if (db > peak_db_r) {
				rr *= 0.3;
				rg *= 0.3;
				rb *= 0.3;
			}

			int middle = vu_y + vu_h / 2;

			{
				rgba.set_red(lr);
				rgba.set_green(lg);
				rgba.set_blue(lb);
				Gdk::Cairo::set_source_rgba(cr, rgba);
				cr->move_to(vu_x + vu_w - i + 0.5, vu_y);
				cr->line_to(vu_x + vu_w - i + 0.5,
						middle - 1);
				cr->stroke();
			}
			{
				rgba.set_red(rr);
				rgba.set_green(rg);
				rgba.set_blue(rb);
				Gdk::Cairo::set_source_rgba(cr, rgba);
				cr->move_to(vu_x + vu_w - i + 0.5, middle + 1);
				cr->line_to(vu_x + vu_w - i + 0.5, vu_y + vu_h);
				cr->stroke();
			}
		}
	}

	//white line at 0db
	rgba.set_red(1);
	rgba.set_green(1);
	rgba.set_blue(1);
	rgba.set_alpha(0.5);
	Gdk::Cairo::set_source_rgba(cr, rgba);
	int db0 = TRACK_MAX_DB * float(vu_w) / float(TRACK_MAX_DB - TRACK_MIN_DB);
	cr->move_to(vu_x + vu_w - db0 + 0.5, vu_y);
	cr->line_to(vu_x + vu_w - db0 + 0.5, vu_y + vu_h);
	cr->move_to(vu_x + vu_w - db0 + 1.5, vu_y);
	cr->line_to(vu_x + vu_w - db0 + 1.5, vu_y + vu_h);
	cr->stroke();

	//draw handle
	float track_db = song->get_main_volume_db();
	int db_handle = (TRACK_MAX_DB - track_db) * vu_w / float(TRACK_MAX_DB - TRACK_MIN_DB);

	rgba.set_red(1);
	rgba.set_green(0.5);
	rgba.set_blue(0.5);
	rgba.set_alpha(1.0);
	Gdk::Cairo::set_source_rgba(cr, rgba);
	cr->move_to(vu_x + vu_w - db_handle - 0.5, vu_y);
	cr->line_to(vu_x + vu_w - db_handle - 0.5, vu_y + vu_h);
	cr->stroke();

	rgba.set_red(1);
	rgba.set_green(1);
	rgba.set_blue(1);
	rgba.set_alpha(1.0);

	grabber_x = vu_x + vu_w - db_handle - grabber_width / 2;
	grabber_w = grabber_width;
	grabber_y = 0;
	grabber_h = vu_h;

	cr->set_line_width(2);
	_draw_rect(cr, grabber_x + 0.5, grabber_y + 0.5, grabber_w, grabber_h - 1, rgba);

	return false;
}

void MasterVU::update_peak() {
	uint64_t current_time = g_get_monotonic_time();
	double diff = double(current_time - last_time) / 1000000.0;
	last_time = current_time;

	{
		float current_peak_l = song->get_peak_volume_db_l();
		float new_peak_l;
		if (current_peak_l > peak_db_l) {
			new_peak_l = current_peak_l;
		} else {
			//decrement
			new_peak_l = peak_db_l - 48 * diff; //24db per second?
		}
		if (new_peak_l < TRACK_MIN_DB) {
			//so it stops redrawing eventually
			new_peak_l = TRACK_MIN_DB;
		}
		if (new_peak_l != peak_db_l) {
			peak_db_l = new_peak_l;
			queue_draw();
		}
	}

	{
		float current_peak_r = song->get_peak_volume_db_r();
		float new_peak_r;
		if (current_peak_r > peak_db_r) {
			new_peak_r = current_peak_r;
		} else {
			//decrement
			new_peak_r = peak_db_r - 48 * diff; //24db per second?
		}
		if (new_peak_r < TRACK_MIN_DB) {
			//so it stops redrawing eventually
			new_peak_r = TRACK_MIN_DB;
		}
		if (new_peak_r != peak_db_r) {
			peak_db_r = new_peak_r;
			queue_draw();
		}
	}
}

void MasterVU::on_parsing_error(const Glib::RefPtr<Gtk::CssSection> &section, const Glib::Error &error) {}

MasterVU::MasterVU(Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme) :
		// The GType name will actually be gtkmm__CustomObject_mywidget
		Glib::ObjectBase("main_vu"),
		Gtk::Widget() {
	// This shows the GType name, which must be used in the CSS file.
	// std::cout << "GType name: " << G_OBJECT_TYPE_NAME(gobj()) << std::endl;

	// This shows that the GType still derives from GtkWidget:
	// std::cout << "Gtype is a GtkWidget?:" << GTK_IS_WIDGET(gobj()) << std::endl;

	song = p_song;
	undo_redo = p_undo_redo;
	theme = p_theme;

	set_has_window(true);

	// Gives Exposure & Button presses to the widget.
	set_name("main_vu");

	min_width = 1;
	min_height = 1;
	char_width = 1;
	font_height = 1;
	font_ascent = 1;
	grabbing = false;
	grabber_x = grabber_y = grabber_w = grabber_h = -1;
	last_time = 0;
	peak_db_l = -100;
	peak_db_r = -100;
}

MasterVU::~MasterVU() {
}

// ==== zytrax-master/gui/master_vu.h ====

#ifndef MASTER_VU_H
#define MASTER_VU_H

#include "engine/song.h"
#include "engine/undo_redo.h"
#include <gtkmm.h> // assumption: several angle-bracket includes were lost in extraction; gtkmm pulls in the widget/cairomm headers used below
#include "gui/color_theme.h"
#include "gui/key_bindings.h"

class MasterVU : public Gtk::Widget {
protected:
	enum {
		TRACK_MAX_DB = 12,
		TRACK_MIN_DB = -60
	};

	int min_width;
	int min_height;
	int char_width;
	int font_height;
	int font_ascent;

	int vu_x, vu_y;
	int vu_w, vu_h;

	int grabber_x, grabber_y;
	int grabber_w, grabber_h;
	bool grabbing;
	float grabbing_db;
	int grabbing_x;

	// Overrides:
	Gtk::SizeRequestMode get_request_mode_vfunc() const override;
	void get_preferred_width_vfunc(int &minimum_width, int &natural_width) const override;
	void get_preferred_height_for_width_vfunc(int width, int &minimum_height, int &natural_height) const override;
	void get_preferred_height_vfunc(int &minimum_height, int &natural_height) const override;
	void get_preferred_width_for_height_vfunc(int height, int &minimum_width, int &natural_width) const override;
	void on_size_allocate(Gtk::Allocation &allocation) override;
	void on_map() override;
	void on_unmap() override;
	void on_realize() override;
	void on_unrealize() override;
	bool on_draw(const Cairo::RefPtr<Cairo::Context> &cr) override;

	// Signal handler:
	void on_parsing_error(const Glib::RefPtr<Gtk::CssSection> &section, const Glib::Error &error);

	void _mouse_button_event(GdkEventButton *event, bool p_press);
	bool on_button_press_event(GdkEventButton *event);
	bool on_button_release_event(GdkEventButton *event);
	bool on_motion_notify_event(GdkEventMotion *motion_event);
	bool on_key_press_event(GdkEventKey *key_event);
	bool on_key_release_event(GdkEventKey *key_event);

	Glib::RefPtr<Gdk::Window> m_refGdkWindow;
	UndoRedo *undo_redo;
	Theme *theme;
	Song *song;

	void _draw_text(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down);
	int _get_text_width(const Cairo::RefPtr<Cairo::Context> &cr, const String &p_text) const;
	void _draw_fill_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color);
	void _draw_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color);
	void _draw_arrow(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color);

	uint64_t last_time;
	float peak_db_l;
	float peak_db_r;

public:
	sigc::signal1<void, float> main_volume_db_changed;

	void update_peak();

	MasterVU(Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme);
	~MasterVU();
};

#endif // MASTER_VU_H

// ==== zytrax-master/gui/orderlist_editor.cpp ====

#include "orderlist_editor.h"

void OrderlistEditor::_mouse_button_event(GdkEventButton *event, bool p_press) {
	if (p_press && event->button == 1) {
		cursor.row = (event->y - top_ofs) / fh_cache;
		if (event->x > fw_cache * 4) {
			cursor.field = CLAMP((event->x - fw_cache * 5) / fw_cache, 0, 2);
		}
		_validate_cursor();
		queue_draw();
	}
}

void OrderlistEditor::_adjust_cursor_to_view() {
	if (cursor.row < v_offset) {
		cursor.row = v_offset;
	} else if (cursor.row >= v_offset + visible_rows) {
		cursor.row = v_offset + visible_rows - 1;
	}
}

bool OrderlistEditor::on_scroll_event(GdkEventScroll *scroll_event) {
	if (scroll_event->direction == GDK_SCROLL_UP) {
		v_scroll->set_value(v_scroll->get_value() - 4);
		_adjust_cursor_to_view();
		return true;
	}
	if (scroll_event->direction == GDK_SCROLL_DOWN) {
		v_scroll->set_value(v_scroll->get_value() + 4);
		_adjust_cursor_to_view();
		return true;
	}
	return false;
}

bool OrderlistEditor::on_button_press_event(GdkEventButton *event) {
	grab_focus();
	_mouse_button_event(event, true);
	return false;
}

bool OrderlistEditor::on_button_release_event(GdkEventButton *release_event) {
_mouse_button_event(release_event, false); return false; } bool OrderlistEditor::on_motion_notify_event(GdkEventMotion *motion_event) { return false; } void OrderlistEditor::_update_oderlist() { queue_draw(); } void OrderlistEditor::_validate_cursor() { if (cursor.row < 0) { cursor.row = 0; } if (cursor.row > Song::ORDER_MAX) { cursor.row = Song::ORDER_MAX; } if (cursor.row < v_offset) { v_offset = cursor.row; } else if (cursor.row >= v_offset + visible_rows) { v_offset = cursor.row - visible_rows + 1; } } bool OrderlistEditor::on_key_press_event(GdkEventKey *key_event) { bool shift_pressed = key_event->state & GDK_SHIFT_MASK; if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_UP)) { cursor.row -= 1; _validate_cursor(); queue_draw(); } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_DOWN)) { cursor.row += 1; _validate_cursor(); queue_draw(); } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_PAGE_UP)) { cursor.row -= 16; _validate_cursor(); queue_draw(); } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_PAGE_DOWN)) { cursor.row += 16; _validate_cursor(); queue_draw(); } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_LEFT)) { if (cursor.field > 0) { cursor.field--; } queue_draw(); } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_RIGHT)) { if (cursor.field < 2) { cursor.field++; } queue_draw(); } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_HOME)) { if (cursor.field > 0) { cursor.field = 0; } else { cursor.row = 0; } _validate_cursor(); queue_draw(); } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_END)) { if (cursor.field < 2) { cursor.field = 2; } else { cursor.row = Song::ORDER_MAX - 1; } _validate_cursor(); queue_draw(); } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_FIELD_CLEAR)) { undo_redo->begin_action("Clear Order"); 
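Every editor action in these files follows the same undo/redo idiom: begin a named action, register each forward edit together with its exact inverse, then commit. A self-contained sketch of that pattern — an illustrative reimplementation, not zytrax's real `engine/undo_redo.h` API:

```cpp
#include <functional>
#include <string>
#include <vector>

// Minimal do/undo pairing: commit_action() applies the registered forward
// steps and seals them into one named, atomically undoable history entry.
class MiniUndoRedo {
	struct Action {
		std::string name;
		std::vector<std::function<void()>> do_steps, undo_steps;
	};
	std::vector<Action> history;
	Action pending;

public:
	void begin_action(const std::string &p_name) { pending = { p_name, {}, {} }; }
	void do_method(std::function<void()> f) { pending.do_steps.push_back(std::move(f)); }
	void undo_method(std::function<void()> f) { pending.undo_steps.push_back(std::move(f)); }
	void commit_action() {
		for (auto &f : pending.do_steps) f(); // apply the edit immediately
		history.push_back(std::move(pending));
	}
	void undo() {
		if (history.empty()) return;
		// inverse steps run in reverse registration order
		auto &undo_steps = history.back().undo_steps;
		for (auto it = undo_steps.rbegin(); it != undo_steps.rend(); ++it) (*it)();
		history.pop_back();
	}
};
```

Registering the inverse alongside each edit is what lets compound operations (like the order-list insert/delete loops below) undo as a single step.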
undo_redo->do_method(song, &Song::order_set, cursor.row, int(Song::ORDER_EMPTY)); undo_redo->undo_method(song, &Song::order_set, cursor.row, song->order_get(cursor.row)); undo_redo->do_method(this, &OrderlistEditor::_update_oderlist); undo_redo->undo_method(this, &OrderlistEditor::_update_oderlist); undo_redo->commit_action(); cursor.row++; _validate_cursor(); queue_draw(); } else if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_INSERT)) { undo_redo->begin_action("Insert Empty"); for (int i = cursor.row; i <= Song::ORDER_MAX; i++) { int existing = song->order_get(i); int fresh = i == cursor.row ? Song::ORDER_EMPTY : song->order_get(i - 1); if (existing == fresh) { continue; } undo_redo->do_method(song, &Song::order_set, i, fresh); undo_redo->undo_method(song, &Song::order_set, i, existing); } undo_redo->do_method(this, &OrderlistEditor::_update_oderlist); undo_redo->undo_method(this, &OrderlistEditor::_update_oderlist); undo_redo->commit_action(); _validate_cursor(); queue_draw(); } else if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_DELETE)) { undo_redo->begin_action("Delete"); for (int i = cursor.row; i <= Song::ORDER_MAX; i++) { int existing = song->order_get(i); int fresh = i == Song::ORDER_MAX ?
Song::ORDER_EMPTY : song->order_get(i + 1); if (existing == fresh) { continue; } undo_redo->do_method(song, &Song::order_set, i, fresh); undo_redo->undo_method(song, &Song::order_set, i, existing); } undo_redo->do_method(this, &OrderlistEditor::_update_oderlist); undo_redo->undo_method(this, &OrderlistEditor::_update_oderlist); undo_redo->commit_action(); _validate_cursor(); queue_draw(); } else if (key_bindings->is_keybind(key_event, KeyBindings::PATTERN_CURSOR_NOTE_OFF)) { undo_redo->begin_action("Insert Skip"); undo_redo->do_method(song, &Song::order_set, cursor.row, int(Song::ORDER_SKIP)); undo_redo->undo_method(song, &Song::order_set, cursor.row, song->order_get(cursor.row)); undo_redo->do_method(this, &OrderlistEditor::_update_oderlist); undo_redo->undo_method(this, &OrderlistEditor::_update_oderlist); undo_redo->commit_action(); cursor.row++; _validate_cursor(); queue_draw(); } else if ((key_event->keyval >= GDK_KEY_0 && key_event->keyval <= GDK_KEY_9)) { int number = key_event->keyval - GDK_KEY_0; int existing = song->order_get(cursor.row); int base; if (existing == Song::ORDER_EMPTY || existing == Song::ORDER_SKIP) { base = 0; } else { base = existing; } int num[3] = { base / 100, (base / 10) % 10, base % 10 }; num[cursor.field] = number; int new_number = num[0] * 100 + num[1] * 10 + num[2]; undo_redo->begin_action("Insert Number"); undo_redo->do_method(song, &Song::order_set, cursor.row, new_number); undo_redo->undo_method(song, &Song::order_set, cursor.row, existing); undo_redo->do_method(this, &OrderlistEditor::_update_oderlist); undo_redo->undo_method(this, &OrderlistEditor::_update_oderlist); undo_redo->commit_action(); if (cursor.field < 2) { cursor.field++; } else { cursor.field = 0; cursor.row++; } _validate_cursor(); queue_draw(); } else { return false; //not handled } return true; //handled } bool OrderlistEditor::on_key_release_event(GdkEventKey *key_event) { return false; } void OrderlistEditor::get_preferred_width_vfunc(int &minimum_width, int 
&natural_width) const { minimum_width = fw_cache * 9; //7 and a half to each side natural_width = fw_cache * 9; //7 and a half to each side } void OrderlistEditor::get_preferred_height_for_width_vfunc( int /* width */, int &minimum_height, int &natural_height) const { minimum_height = 64; natural_height = 64; } void OrderlistEditor::get_preferred_height_vfunc(int &minimum_height, int &natural_height) const { minimum_height = 64; natural_height = 64; } void OrderlistEditor::get_preferred_width_for_height_vfunc( int /* height */, int &minimum_width, int &natural_width) const { minimum_width = fw_cache * 9; //7 and a half to each side natural_width = fw_cache * 9; //7 and a half to each side } void OrderlistEditor::on_size_allocate(Gtk::Allocation &allocation) { // Do something with the space that we have actually been given: //(We will not be given heights or widths less than we have requested, though // we might get more) // Use the offered allocation for this container: set_allocation(allocation); if (m_refGdkWindow) { m_refGdkWindow->move_resize(allocation.get_x(), allocation.get_y(), allocation.get_width(), allocation.get_height()); } } void OrderlistEditor::on_map() { // Call base class: Gtk::Widget::on_map(); } void OrderlistEditor::on_unmap() { // Call base class: Gtk::Widget::on_unmap(); } void OrderlistEditor::on_realize() { // Do not call base class Gtk::Widget::on_realize(). // It's intended only for widgets that set_has_window(false).
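The numeric-key branch of `on_key_press_event` above overwrites a single decimal digit of the three-digit order value, chosen by the active cursor field. That arithmetic, pulled out into a standalone helper for illustration (the function name is hypothetical, not part of this class):

```cpp
// Replace the digit at `field` (0 = hundreds, 1 = tens, 2 = units) of a
// three-digit value with `digit`, mirroring the editor's key handler.
static int replace_order_digit(int value, int field, int digit) {
	int num[3] = { value / 100, (value / 10) % 10, value % 10 };
	num[field] = digit;
	return num[0] * 100 + num[1] * 10 + num[2];
}
```

The handler treats `ORDER_EMPTY` and `ORDER_SKIP` as a base value of 0 before applying this substitution.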
set_realized(); if (!m_refGdkWindow) { // Create the GdkWindow: GdkWindowAttr attributes; memset(&attributes, 0, sizeof(attributes)); Gtk::Allocation allocation = get_allocation(); // Set initial position and size of the Gdk::Window: attributes.x = allocation.get_x(); attributes.y = allocation.get_y(); attributes.width = allocation.get_width(); attributes.height = allocation.get_height(); attributes.event_mask = get_events() | Gdk::EXPOSURE_MASK | Gdk::SCROLL_MASK | Gdk::BUTTON_PRESS_MASK | Gdk::BUTTON_RELEASE_MASK | Gdk::BUTTON1_MOTION_MASK | Gdk::KEY_PRESS_MASK | Gdk::KEY_RELEASE_MASK; attributes.window_type = GDK_WINDOW_CHILD; attributes.wclass = GDK_INPUT_OUTPUT; m_refGdkWindow = Gdk::Window::create(get_parent_window(), &attributes, GDK_WA_X | GDK_WA_Y); set_window(m_refGdkWindow); // make the widget receive expose events m_refGdkWindow->set_user_data(gobj()); } } void OrderlistEditor::on_unrealize() { m_refGdkWindow.reset(); // Call base class: Gtk::Widget::on_unrealize(); } void OrderlistEditor::_draw_text(const Cairo::RefPtr &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->move_to(x, y); if (p_down) cr->rotate_degrees(90); cr->show_text(p_text.utf8().get_data()); if (p_down) cr->rotate_degrees(-90); cr->move_to(0, 0); cr->stroke(); } void OrderlistEditor::_draw_fill_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->rectangle(x, y, w, h); cr->fill(); cr->stroke(); } void OrderlistEditor::_draw_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->rectangle(x, y, w, h); cr->stroke(); } void OrderlistEditor::_draw_arrow(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->move_to(x + w / 4, y + h / 4); cr->line_to(x + w * 3 / 4, y + h / 4); cr->line_to(x 
+ w / 2, y + h * 3 / 4); cr->line_to(x + w / 4, y + h / 4); cr->fill(); cr->stroke(); } void OrderlistEditor::_v_scroll_changed() { if (drawing) { return; } v_offset = v_scroll->get_value(); queue_draw(); } bool OrderlistEditor::on_draw(const Cairo::RefPtr &cr) { drawing = true; const Gtk::Allocation allocation = get_allocation(); int w = allocation.get_width(); int h = allocation.get_height(); Gdk::Cairo::set_source_rgba(cr, theme->colors[Theme::COLOR_BACKGROUND]); cr->rectangle(0, 0, w, h); cr->fill(); theme->select_font_face(cr); Cairo::FontExtents fe; cr->get_font_extents(fe); // Believe it or not, this is the only reliable way to get the width of // a monospace char in GTK. Yes. Cairo::TextExtents te; cr->get_text_extents("XXX", te); int fw = te.width; cr->get_text_extents("XX", te); fw -= te.width; int fh = fe.height; int fa = fe.ascent; int sep = 1; fh += sep; int row_height = fh; if (fw_cache != fw || fh_cache != fh) { queue_resize(); fw_cache = fw; fh_cache = fh; } visible_rows = (h - top_ofs) / fh; Gdk::RGBA note_color = theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE]; Gdk::RGBA order_color = theme->colors[Theme::COLOR_PATTERN_EDITOR_ROW_BEAT]; Gdk::RGBA playing_order_color = theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR]; _draw_fill_rect(cr, w / 2, 0, w / 2, h, theme->colors[Theme::COLOR_PATTERN_EDITOR_BG]); for (int i = 0; i < visible_rows; i++) { char text[4] = { '0', '0', '0', 0 }; int row = i + v_offset; if (row > Song::ORDER_MAX) { break; } text[0] = '0' + row / 100; text[1] = '0' + (row / 10) % 10; text[2] = '0' + row % 10; _draw_text(cr, fw, i * row_height + fa + top_ofs, text, row == playback_order ?
playing_order_color : order_color); int pattern = song->order_get(row); if (pattern == Song::ORDER_EMPTY) { text[0] = '.'; text[1] = '.'; text[2] = '.'; } else if (pattern == Song::ORDER_SKIP) { text[0] = '-'; text[1] = '-'; text[2] = '-'; } else { text[0] = '0' + pattern / 100; text[1] = '0' + (pattern / 10) % 10; text[2] = '0' + pattern % 10; } _draw_text(cr, fw + fw * 4, i * row_height + fa + top_ofs, text, note_color); if (has_focus() && row == cursor.row) { _draw_rect(cr, fw + fw * 4 + fw * cursor.field, i * row_height + top_ofs, fw, fh - 1, theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR]); } } v_scroll->set_upper(Song::ORDER_MAX + 1); v_scroll->set_page_size(visible_rows); v_scroll->set_value(v_offset); if (has_focus()) { cr->set_source_rgba(1, 1, 1, 1); cr->rectangle(0, 0, w, h); cr->stroke(); } drawing = false; return true; } void OrderlistEditor::set_vscroll(Glib::RefPtr p_v_scroll) { v_scroll = p_v_scroll; v_scroll->signal_value_changed().connect(sigc::mem_fun(*this, &OrderlistEditor::_v_scroll_changed)); } void OrderlistEditor::on_parsing_error( const Glib::RefPtr §ion, const Glib::Error &error) {} int OrderlistEditor::get_cursor_order() const { return cursor.row; } void OrderlistEditor::set_playback_order(int p_order) { if (p_order != playback_order) { playback_order = p_order; queue_draw(); } } OrderlistEditor::OrderlistEditor(Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme, KeyBindings *p_bindings) : // The GType name will actually be gtkmm__CustomObject_mywidget Glib::ObjectBase("orderlist_editor"), Gtk::Widget() { song = p_song; undo_redo = p_undo_redo; key_bindings = p_bindings; theme = p_theme; set_has_window(true); set_can_focus(true); set_focus_on_click(true); // Gives Exposure & Button presses to the widget. 
set_name("orderlist_editor"); v_offset = 0; visible_rows = 4; cursor.row = 0; cursor.field = 0; fw_cache = 0; fh_cache = 0; top_ofs = 4; drawing = false; playback_order = -1; } OrderlistEditor::~OrderlistEditor() { } zytrax-master/gui/orderlist_editor.h000066400000000000000000000052301347722000700202240ustar00rootroot00000000000000#ifndef ORDERLIST_EDITOR_H #define ORDERLIST_EDITOR_H #include "engine/song.h" #include "engine/undo_redo.h" #include #include #include #include "gui/color_theme.h" #include "gui/key_bindings.h" class OrderlistEditor : public Gtk::Widget { protected: //Overrides: void get_preferred_width_vfunc(int &minimum_width, int &natural_width) const override; void get_preferred_height_for_width_vfunc(int width, int &minimum_height, int &natural_height) const override; void get_preferred_height_vfunc(int &minimum_height, int &natural_height) const override; void get_preferred_width_for_height_vfunc(int height, int &minimum_width, int &natural_width) const override; void on_size_allocate(Gtk::Allocation &allocation) override; void on_map() override; void on_unmap() override; void on_realize() override; void on_unrealize() override; bool on_draw(const Cairo::RefPtr &cr) override; //Signal handler: void on_parsing_error(const Glib::RefPtr §ion, const Glib::Error &error); void _mouse_button_event(GdkEventButton *event, bool p_press); bool on_scroll_event(GdkEventScroll *scroll_event); bool on_button_press_event(GdkEventButton *event); bool on_button_release_event(GdkEventButton *event); bool on_motion_notify_event(GdkEventMotion *motion_event); bool on_key_press_event(GdkEventKey *key_event); bool on_key_release_event(GdkEventKey *key_event); Glib::RefPtr m_refGdkWindow; UndoRedo *undo_redo; Song *song; struct Cursor { int row; int field; } cursor; int v_offset; int fw_cache; int fh_cache; bool drawing; int top_ofs; int visible_rows; void _adjust_cursor_to_view(); void _validate_cursor(); void _update_oderlist(); void _draw_text(const Cairo::RefPtr &cr, 
int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down = false); void _draw_fill_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); void _draw_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); void _draw_arrow(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); Theme *theme; KeyBindings *key_bindings; Glib::RefPtr v_scroll; void _v_scroll_changed(); int playback_order; public: void set_vscroll(Glib::RefPtr p_v_scroll); int get_cursor_order() const; void set_playback_order(int p_order); OrderlistEditor(Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme, KeyBindings *p_bindings); ~OrderlistEditor(); }; #endif // ORDERLIST_EDITOR_H zytrax-master/gui/pattern_editor.cpp000066400000000000000000003651511347722000700202400ustar00rootroot00000000000000#include "pattern_editor.h" void PatternEditor::_cursor_advance() { cursor.row += cursor_advance; if (cursor.row >= get_total_rows() - 1) cursor.row = get_total_rows() - 1; } int PatternEditor::_get_rows_per_beat() const { static const int rows_per_beat[BEAT_ZOOM_MAX] = { 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64 }; return rows_per_beat[beat_zoom]; } void PatternEditor::_field_clear() { Track::Pos from; from.tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); from.column = cursor.column; Track::Pos to; to.tick = from.tick + TICKS_PER_BEAT / _get_rows_per_beat(); to.column = cursor.column; List events; song->get_events_in_range(current_pattern, from, to, &events); if (events.size() == 0) { _cursor_advance(); queue_draw(); return; } if (song->get_event_column_type(cursor.column) == Track::Event::TYPE_COMMAND) { //command works a little different undo_redo->begin_action("Clear Command"); for (List::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; Track::Event old_ev = ev; if (cursor.field == 0) { ev.a = Track::Command::EMPTY; } else { ev.b = 0; } undo_redo->do_method(song, 
&Song::set_event, current_pattern, cursor.column, E->get().pos.tick, ev); undo_redo->undo_method(song, &Song::set_event, current_pattern, cursor.column, E->get().pos.tick, old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); } undo_redo->commit_action(); } else if (cursor.field == 0 || cursor.field == 1) { // just clear whatever undo_redo->begin_action("Clear Event"); for (List::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; Track::Event old_ev = ev; ev.a = Track::Note::EMPTY; ev.b = 0xFF; undo_redo->do_method(song, &Song::set_event, current_pattern, cursor.column, E->get().pos.tick, ev); undo_redo->undo_method(song, &Song::set_event, current_pattern, cursor.column, E->get().pos.tick, old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); } undo_redo->commit_action(); } else { undo_redo->begin_action("Clear Volume"); for (List::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; Track::Event old_ev = ev; ev.b = Track::Note::EMPTY; undo_redo->do_method(song, &Song::set_event, current_pattern, cursor.column, E->get().pos.tick, ev); undo_redo->undo_method(song, &Song::set_event, current_pattern, cursor.column, E->get().pos.tick, old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); } undo_redo->commit_action(); } queue_draw(); _cursor_advance(); _validate_cursor(); } bool PatternEditor::_is_in_selection(int p_column, Tick p_tick) { return (selection.active && p_column >= selection.begin_column && p_column <= selection.end_column && p_tick >= selection.begin_tick && p_tick < (selection.end_tick + selection.row_tick_size - 1)); } void PatternEditor::_validate_selection() { if (selection.begin_column > selection.end_column) { SWAP(selection.begin_column, selection.end_column); } if (selection.begin_tick >
selection.end_tick) { SWAP(selection.begin_tick, selection.end_tick); } } void PatternEditor::_validate_cursor() { if (song->get_track_count() == 0) { return; } if (cursor.row < 0) cursor.row = 0; else if (cursor.row >= get_total_rows()) cursor.row = get_total_rows() - 1; if (cursor.row < v_offset) v_offset = cursor.row; if (cursor.row >= v_offset + visible_rows) v_offset = cursor.row - visible_rows + 1; if (cursor.column < h_offset) { h_offset = cursor.column; } while (true) { int base_ofs = h_offset ? get_column_offset(h_offset - 1) : 0; int window_size = get_allocated_width() - fw_cache * 4; //minus number row int cursor_end = get_column_offset(cursor.column); if (h_offset == song->get_event_column_count() - 1) { break; } if (base_ofs + window_size >= cursor_end) { break; } h_offset++; } queue_draw(); } int PatternEditor::get_total_rows() const { return song->pattern_get_beats(current_pattern) * _get_rows_per_beat(); } int PatternEditor::get_visible_rows() const { return visible_rows; } int PatternEditor::_cursor_get_track_begin_column() { Track *track; int automation; int column; int command; get_cursor_column_data(&track, command, automation, column); ERR_FAIL_COND_V(!track, false); int ccolumn = cursor.column; if (column != 0) { if (column >= 0) { ccolumn -= column; } else if (command >= 0) { ccolumn -= track->get_column_count() + command; } else { ccolumn -= automation + track->get_column_count() + track->get_command_column_count(); } } return ccolumn; } int PatternEditor::_cursor_get_track_end_column() { Track *track; int automation; int column; int command; get_cursor_column_data(&track, command, automation, column); ERR_FAIL_COND_V(!track, false); int ccolumn = cursor.column; if (column != 0) { if (column >= 0) { ccolumn -= column; } else if (command >= 0) { ccolumn -= track->get_column_count() + command; } else { ccolumn -= automation + track->get_column_count() + track->get_command_column_count(); } } ccolumn += track->get_event_column_count() - 1; 
return ccolumn; } void PatternEditor::get_cursor_column_data(Track **r_track, int &r_command_column, int &r_automation, int &r_track_column) { int cc = cursor.column; r_automation = -1; r_track_column = -1; r_command_column = -1; *r_track = NULL; for (int i = 0; i < song->get_track_count(); i++) { Track *t = song->get_track(i); *r_track = t; r_automation = -1; r_command_column = -1; r_track_column = -1; for (int j = 0; j < t->get_column_count(); j++) { r_track_column = j; if (cc == 0) { return; } cc--; } r_track_column = -1; for (int j = 0; j < t->get_command_column_count(); j++) { r_command_column = j; if (cc == 0) { return; } cc--; } r_command_column = -1; for (int j = 0; j < t->get_automation_count(); j++) { r_automation = j; if (cc == 0) { return; } cc--; } } } void PatternEditor::set_current_pattern(int p_pattern) { current_pattern = p_pattern; queue_draw(); } int PatternEditor::get_current_pattern() const { return current_pattern; } void PatternEditor::set_current_octave(int p_octave) { current_octave = p_octave; } int PatternEditor::get_current_octave() const { return current_octave; } void PatternEditor::set_current_cursor_advance(int p_cursor_advance) { cursor_advance = p_cursor_advance; } int PatternEditor::get_current_cursor_advance() const { return cursor_advance; } void PatternEditor::set_current_volume_mask(int p_volume_mask, bool p_active) { volume_mask = p_volume_mask; volume_mask_active = p_active; } int PatternEditor::get_current_volume_mask() const { return volume_mask; } bool PatternEditor::is_current_volume_mask_active() const { return volume_mask_active; } void PatternEditor::_redraw() { queue_draw(); } void PatternEditor::initialize_menus() { track_menu = Gio::Menu::create(); track_menu_add = Gio::Menu::create(); track_menu->append_section(track_menu_add); track_menu_add->append("New Track", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_ADD_TRACK).ascii().get_data()); track_menu_column = Gio::Menu::create(); 
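Throughout pattern_editor.cpp, visual rows are converted to ticks as `row * TICKS_PER_BEAT / rows_per_beat`, with `rows_per_beat` taken from the beat-zoom table in `_get_rows_per_beat()`. A self-contained sketch of that conversion; the value of `TICKS_PER_BEAT` is an assumption here for illustration (any value divisible by every table entry keeps the conversion exact):

```cpp
// Rows-per-beat table from _get_rows_per_beat(), indexed by beat zoom level.
static const int kRowsPerBeat[] = { 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64 };

// Assumed tick resolution per beat (illustrative; 192 divides by every
// entry in the table above, so row<->tick round-trips are exact).
static const int kTicksPerBeat = 192;

static int row_to_tick(int row, int beat_zoom) {
	return row * kTicksPerBeat / kRowsPerBeat[beat_zoom];
}

static int tick_to_row(int tick, int beat_zoom) {
	return tick * kRowsPerBeat[beat_zoom] / kTicksPerBeat;
}
```

This is the same mapping `_field_clear()` uses to build its `Track::Pos` range, and the mouse handlers use in reverse to turn a y coordinate into a tick.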
track_menu->append_section(track_menu_column); track_menu_column->append("Add Column", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_ADD_COLUMN).ascii().get_data()); track_menu_column->append("Remove Column", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_REMOVE_COLUMN).ascii().get_data()); track_menu_command = Gio::Menu::create(); track_menu->append_section(track_menu_command); track_menu_command->append("Add Command Column", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_ADD_COMMAND_COLUMN).ascii().get_data()); track_menu_command->append("Remove Command Column", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_REMOVE_COMMAND_COLUMN).ascii().get_data()); track_menu_solo = Gio::Menu::create(); track_menu->append_section(track_menu_solo); track_menu_solo->append("Mute", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_MUTE).ascii().get_data()); track_menu_solo->append("Solo", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_SOLO).ascii().get_data()); track_menu_edit = Gio::Menu::create(); track_menu->append_section(track_menu_edit); track_menu_edit->append("Move Left", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_MOVE_LEFT).ascii().get_data()); track_menu_edit->append("Move Right", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_MOVE_RIGHT).ascii().get_data()); track_menu_edit->append("Rename", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_RENAME).ascii().get_data()); track_menu_remove = Gio::Menu::create(); track_menu->append_section(track_menu_remove); track_menu_remove->append("Remove", key_bindings->get_keybind_detailed_name(KeyBindings::TRACK_REMOVE).ascii().get_data()); track_popup.bind_model(track_menu, true); automation_menu = Gio::Menu::create(); automation_menu_mode = Gio::Menu::create(); automation_menu->append_section(automation_menu_mode); automation_menu_mode->append("Numbers (Discrete)", 
key_bindings->get_keybind_detailed_name(KeyBindings::AUTOMATION_RADIO_DISCRETE_ROWS).ascii().get_data()); automation_menu_mode->append("Small Envelope", key_bindings->get_keybind_detailed_name(KeyBindings::AUTOMATION_RADIO_ENVELOPE_SMALL).ascii().get_data()); automation_menu_mode->append("Large Envelope", key_bindings->get_keybind_detailed_name(KeyBindings::AUTOMATION_RADIO_ENVELOPE_LARGE).ascii().get_data()); automation_menu_move = Gio::Menu::create(); automation_menu->append_section(automation_menu_move); automation_menu_move->append("Move Left", key_bindings->get_keybind_detailed_name(KeyBindings::AUTOMATION_MOVE_LEFT).ascii().get_data()); automation_menu_move->append("Move Right", key_bindings->get_keybind_detailed_name(KeyBindings::AUTOMATION_MOVE_RIGHT).ascii().get_data()); automation_menu_remove = Gio::Menu::create(); automation_menu->append_section(automation_menu_remove); automation_menu_remove->append("Remove", key_bindings->get_keybind_detailed_name(KeyBindings::AUTOMATION_REMOVE).ascii().get_data()); automation_popup.bind_model(automation_menu, true); track_popup.attach_to_widget(*this); automation_popup.attach_to_widget(*this); } void PatternEditor::_mouse_button_event(GdkEventButton *event, bool p_press) { if (song->get_track_count() == 0) { return; } Gdk::Rectangle posr(event->x, event->y, 1, 1); if (p_press && event->button == 1) { int closest_field = -1; int closest_column = -1; int closest_distance = 0x7FFFFFFF; for (List::Element *E = click_areas.front(); E; E = E->next()) { int point_index = -1; float point_d = 1e20; int pos_x = 0; int pos_y = 0; for (List::Element *F = E->get().automation_points.front(); F; F = F->next()) { float x = F->get().x; float y = F->get().y; float d = sqrt((x - event->x) * (x - event->x) + (y - event->y) * (y - event->y)); if (d < 6) { if (point_index < 0 || d < point_d) { point_index = F->get().index; point_d = d; pos_x = x; pos_y = y; } } } if (point_index >= 0) { grabbing_point = point_index; 
grabbing_point_tick_from = E->get().automation->get_point_tick_by_index( current_pattern, grabbing_point); grabbing_point_value_from = E->get().automation->get_point_by_index( current_pattern, grabbing_point); grabbing_point_tick = grabbing_point_tick_from; grabbing_point_value = grabbing_point_value_from; grabbing_automation = E->get().automation; grabbing_x = E->get().fields[0].x; grabbing_width = E->get().fields[0].width; grabbing_mouse_pos_x = pos_x; grabbing_mouse_pos_y = pos_y; grabbing_mouse_prev_x = grabbing_mouse_pos_x; grabbing_mouse_prev_y = grabbing_mouse_pos_y; return; } else if (event->state & GDK_CONTROL_MASK && event->x >= E->get().fields[0].x && event->x < E->get().fields[0].x + E->get().fields[0].width) { // add it int x = event->x - E->get().fields[0].x; int y = event->y; int w = E->get().fields[0].width; Tick tick = MAX(0, (y - row_top_ofs + v_offset * row_height_cache)) * TICKS_PER_BEAT / (row_height_cache * _get_rows_per_beat()); uint8_t value = CLAMP((x)*Automation::VALUE_MAX / w, 0, Automation::VALUE_MAX); grabbing_automation = E->get().automation; grabbing_automation->set_point(current_pattern, tick, value); grabbing_point = 1; // useless, can be anything here grabbing_point_tick_from = tick; grabbing_point_value_from = Automation::EMPTY; grabbing_point_tick = tick; grabbing_point_value = value; grabbing_x = E->get().fields[0].x; grabbing_width = E->get().fields[0].width; grabbing_mouse_pos_x = pos_x; grabbing_mouse_pos_y = pos_y; grabbing_mouse_prev_x = grabbing_mouse_pos_x; grabbing_mouse_prev_y = grabbing_mouse_pos_y; queue_draw(); return; } else { for (int i = 0; i < E->get().fields.size(); i++) { int localx = event->x - E->get().fields[i].x; if (localx >= 0 && localx < E->get().fields[i].width) { cursor.field = i; cursor.column = E->get().column; queue_draw(); cursor.row = (event->y / row_height_cache) + v_offset; _validate_menus(); selection.mouse_drag_from_column = cursor.column; selection.mouse_drag_from_row = cursor.row; 
selection.mouse_drag_active = true; return; } else { int distance = localx < 0 ? -localx : localx - E->get().fields[i].width; if (distance < closest_distance) { closest_distance = distance; closest_field = i; closest_column = E->get().column; } } } } } if (closest_column >= 0) { cursor.field = closest_field; cursor.column = closest_column; queue_draw(); cursor.row = (event->y / row_height_cache) + v_offset; selection.mouse_drag_from_column = cursor.column; selection.mouse_drag_from_row = cursor.row; selection.mouse_drag_active = true; _validate_menus(); return; } } if (p_press && event->button == 3 && grabbing_point == -1) { // remove for (List::Element *E = click_areas.front(); E; E = E->next()) { int point_index = -1; float point_d = 1e20; for (List::Element *F = E->get().automation_points.front(); F; F = F->next()) { float x = F->get().x; float y = F->get().y; float d = sqrt((x - event->x) * (x - event->x) + (y - event->y) * (y - event->y)); if (d < 4) { if (point_index < 0 || d < point_d) { point_index = F->get().index; point_d = d; } } } if (point_index >= 0) { undo_redo->begin_action("Remove Point"); Tick tick = E->get().automation->get_point_tick_by_index( current_pattern, point_index); uint8_t value = E->get().automation->get_point_by_index(current_pattern, point_index); undo_redo->do_method(E->get().automation, &Automation::remove_point, current_pattern, tick); undo_redo->undo_method(E->get().automation, &Automation::set_point, current_pattern, tick, value); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); return; } } //nothing found, popup relevant menus { int closest_field = -1; int closest_column = -1; int closest_distance = 0x7FFFFFFF; for (List::Element *E = click_areas.front(); E; E = E->next()) { for (int i = 0; i < E->get().fields.size(); i++) { int localx = event->x - E->get().fields[i].x; if (localx >= 0 && localx < E->get().fields[i].width) { closest_field = 
i; closest_column = E->get().column; closest_distance = 0; break; } else { int distance = localx < 0 ? -localx : localx - E->get().fields[i].width; if (distance < closest_distance) { closest_distance = distance; closest_field = i; closest_column = E->get().column; } } } if (closest_distance == 0) { break; } } if (closest_column >= 0) { cursor.field = closest_field; cursor.column = closest_column; queue_draw(); //cursor.row = (event->y / row_height_cache) + v_offset; _validate_menus(); int automation = song->get_event_column_automation(closest_column); if (song->get_event_column_automation(closest_column) >= 0) { automation_popup.popup(event->button, event->time); } else { track_popup.popup(event->button, event->time); } } } } if (!p_press && event->button == 1) { if (grabbing_point >= 0) { grabbing_point = -1; undo_redo->begin_action("Move Point"); undo_redo->do_method(grabbing_automation, &Automation::remove_point, current_pattern, grabbing_point_tick_from); undo_redo->do_method(grabbing_automation, &Automation::set_point, current_pattern, grabbing_point_tick, grabbing_point_value); undo_redo->undo_method(grabbing_automation, &Automation::remove_point, current_pattern, grabbing_point_tick); undo_redo->undo_method(grabbing_automation, &Automation::set_point, current_pattern, grabbing_point_tick_from, grabbing_point_value_from); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); } selection.mouse_drag_active = false; } } bool PatternEditor::on_scroll_event(GdkEventScroll *scroll_event) { if (scroll_event->direction == GDK_SCROLL_UP) { v_scroll->set_value(v_scroll->get_value() - _get_rows_per_beat()); return true; } if (scroll_event->direction == GDK_SCROLL_DOWN) { v_scroll->set_value(v_scroll->get_value() + _get_rows_per_beat()); return true; } return false; } bool PatternEditor::on_button_press_event(GdkEventButton *event) { grab_focus(); _mouse_button_event(event, true); return 
false; } bool PatternEditor::on_button_release_event(GdkEventButton *release_event) { _mouse_button_event(release_event, false); return false; } bool PatternEditor::on_motion_notify_event(GdkEventMotion *motion_event) { if (song->get_track_count() == 0) { return false; } if (selection.mouse_drag_active) { int closest_field = -1; int closest_column = -1; int closest_distance = 0x7FFFFFFF; for (List<ClickArea>::Element *E = click_areas.front(); E; E = E->next()) { for (int i = 0; i < E->get().fields.size(); i++) { int localx = motion_event->x - E->get().fields[i].x; if (localx >= 0 && localx < E->get().fields[i].width) { closest_field = i; closest_column = E->get().column; closest_distance = 0; break; } else { int distance = localx < 0 ? -localx : localx - E->get().fields[i].width; if (distance < closest_distance) { closest_distance = distance; closest_field = i; closest_column = E->get().column; } } } if (closest_distance == 0) { break; } } if (closest_column >= 0) { int row = (motion_event->y / row_height_cache) + v_offset; bool prev_active = selection.active; selection.active = true; selection.begin_column = selection.mouse_drag_from_column; selection.begin_tick = selection.mouse_drag_from_row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.end_column = closest_column; selection.end_tick = row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.row_tick_size = TICKS_PER_BEAT / _get_rows_per_beat(); _validate_selection(); if (prev_active != selection.active) { _validate_menus(); } queue_draw(); return true; } } if (grabbing_point >= 0) { grabbing_mouse_prev_x = grabbing_mouse_pos_x; grabbing_mouse_prev_y = grabbing_mouse_pos_y; int rel_x = motion_event->x - grabbing_mouse_prev_x; int rel_y = motion_event->y - grabbing_mouse_prev_y; grabbing_mouse_prev_x = motion_event->x; grabbing_mouse_prev_y = motion_event->y; grabbing_mouse_pos_x += rel_x; grabbing_mouse_pos_y += rel_y; int y = grabbing_mouse_pos_y; int x = grabbing_mouse_pos_x; Tick tick = MAX(0, (y - row_top_ofs + 
v_offset * row_height_cache)) * TICKS_PER_BEAT / (row_height_cache * _get_rows_per_beat()); grabbing_automation->remove_point(current_pattern, grabbing_point_tick); while (grabbing_automation->get_point(current_pattern, tick) != Automation::EMPTY) { tick++; } uint8_t value = CLAMP((x - grabbing_x) * Automation::VALUE_MAX / grabbing_width, 0, Automation::VALUE_MAX); grabbing_point_tick = tick; grabbing_point_value = value; grabbing_automation->set_point(current_pattern, tick, value); queue_draw(); } return false; } void PatternEditor::_validate_menus() { //begin by validating cursor _validate_cursor(); int current_track = get_current_track(); key_bindings->set_action_enabled(KeyBindings::TRACK_ADD_COLUMN, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::TRACK_REMOVE_COLUMN, current_track >= 0 && song->get_track(current_track)->get_column_count() > 1); key_bindings->set_action_enabled(KeyBindings::TRACK_ADD_COMMAND_COLUMN, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::TRACK_REMOVE_COMMAND_COLUMN, current_track >= 0 && song->get_track(current_track)->get_command_column_count() > 0); key_bindings->set_action_enabled(KeyBindings::TRACK_MOVE_LEFT, current_track > 0); key_bindings->set_action_enabled(KeyBindings::TRACK_MOVE_RIGHT, current_track >= 0 && current_track < song->get_track_count() - 1); key_bindings->set_action_enabled(KeyBindings::TRACK_MUTE, current_track >= 0); key_bindings->set_action_checked(KeyBindings::TRACK_MUTE, current_track >= 0 && song->get_track(current_track)->is_muted()); key_bindings->set_action_enabled(KeyBindings::TRACK_SOLO, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::TRACK_RENAME, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::TRACK_REMOVE, current_track >= 0); int current_automation = current_track >= 0 ? 
song->get_event_column_automation(cursor.column) : -1; key_bindings->set_action_enabled(KeyBindings::AUTOMATION_RADIO_DISCRETE_ROWS, current_automation >= 0); key_bindings->set_action_enabled(KeyBindings::AUTOMATION_RADIO_ENVELOPE_SMALL, current_automation >= 0); key_bindings->set_action_enabled(KeyBindings::AUTOMATION_RADIO_ENVELOPE_LARGE, current_automation >= 0); if (current_automation >= 0) { switch (song->get_track(current_track)->get_automation(current_automation)->get_edit_mode()) { case Automation::EDIT_ROWS_DISCRETE: { key_bindings->set_action_state(KeyBindings::AUTOMATION_RADIO_DISCRETE_ROWS, key_bindings->get_keybind_name(KeyBindings::AUTOMATION_RADIO_DISCRETE_ROWS)); } break; case Automation::EDIT_ENVELOPE_SMALL: { key_bindings->set_action_state(KeyBindings::AUTOMATION_RADIO_DISCRETE_ROWS, key_bindings->get_keybind_name(KeyBindings::AUTOMATION_RADIO_ENVELOPE_SMALL)); } break; case Automation::EDIT_ENVELOPE_LARGE: { key_bindings->set_action_state(KeyBindings::AUTOMATION_RADIO_DISCRETE_ROWS, key_bindings->get_keybind_name(KeyBindings::AUTOMATION_RADIO_ENVELOPE_LARGE)); } break; } } key_bindings->set_action_enabled(KeyBindings::AUTOMATION_MOVE_LEFT, current_automation > 0); key_bindings->set_action_enabled(KeyBindings::AUTOMATION_MOVE_RIGHT, current_automation >= 0 && current_automation < song->get_track(current_track)->get_automation_count() - 1); key_bindings->set_action_enabled(KeyBindings::AUTOMATION_REMOVE, current_automation >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECT_BEGIN, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECT_END, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECT_COLUMN_TRACK_ALL, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_RAISE_NOTES_SEMITONE, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_RAISE_NOTES_OCTAVE, current_track >= 0); 
key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_LOWER_NOTES_SEMITONE, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_LOWER_NOTES_OCTAVE, current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_SET_VOLUME, selection.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_INTERPOLATE_VOLUME_AUTOMATION, selection.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_AMPLIFY_VOLUME_AUTOMATION, selection.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_CUT, selection.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_COPY, selection.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_PASTE_INSERT, clipboard.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_PASTE_OVERWRITE, clipboard.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_PASTE_MIX, clipboard.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_DISABLE, selection.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_DOUBLE_LENGTH, selection.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_HALVE_LENGTH, selection.active && current_track >= 0); key_bindings->set_action_enabled(KeyBindings::PATTERN_SELECTION_SCALE_LENGTH, selection.active && current_track >= 0); current_track_changed.emit(); //this should probably be optimized a bit } void PatternEditor::_notify_track_layout_changed() { track_layout_changed.emit(); } void PatternEditor::_on_action_activated(KeyBindings::KeyBind p_bind) { if (p_bind == KeyBindings::TRACK_ADD_TRACK) { //always available Track *track = new Track; String name = "New Track"; int 
idx = 1; while (true) { bool exists = false; for (int i = 0; i < song->get_track_count(); i++) { if (song->get_track(i)->get_name() == name) { exists = true; break; } } if (exists) { idx++; name = "New Track " + String::num(idx); } else { break; } } track->set_name(name); track->add_send(Track::SEND_SPEAKERS); undo_redo->begin_action("Add Track"); undo_redo->do_method( song, &Song::add_track, track); undo_redo->undo_method( song, &Song::remove_track, song->get_track_count()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->do_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->undo_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->do_data(track); undo_redo->commit_action(); } int current_track = get_current_track(); if (current_track >= 0) { switch (p_bind) { case KeyBindings::TRACK_ADD_COLUMN: { undo_redo->begin_action("Add Column"); undo_redo->do_method( song->get_track(current_track), &Track::set_columns, song->get_track(current_track)->get_column_count() + 1); undo_redo->undo_method( song->get_track(current_track), &Track::set_columns, song->get_track(current_track)->get_column_count()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); } break; case KeyBindings::TRACK_REMOVE_COLUMN: { ERR_FAIL_COND(song->get_track(current_track)->get_column_count() <= 1); undo_redo->begin_action("Remove Column"); undo_redo->do_method( song->get_track(current_track), &Track::set_columns, song->get_track(current_track)->get_column_count() - 1); undo_redo->undo_method( song->get_track(current_track), &Track::set_columns, song->get_track(current_track)->get_column_count()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, 
&PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; case KeyBindings::TRACK_ADD_COMMAND_COLUMN: { undo_redo->begin_action("Add Command Column"); undo_redo->do_method( song->get_track(current_track), &Track::set_command_columns, song->get_track(current_track)->get_command_column_count() + 1); undo_redo->undo_method( song->get_track(current_track), &Track::set_command_columns, song->get_track(current_track)->get_command_column_count()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); } break; case KeyBindings::TRACK_REMOVE_COMMAND_COLUMN: { ERR_FAIL_COND(song->get_track(current_track)->get_command_column_count() == 0); undo_redo->begin_action("Remove Command Column"); undo_redo->do_method( song->get_track(current_track), &Track::set_command_columns, song->get_track(current_track)->get_command_column_count() - 1); undo_redo->undo_method( song->get_track(current_track), &Track::set_command_columns, song->get_track(current_track)->get_command_column_count()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; case KeyBindings::TRACK_SOLO: { bool unsolo = true; for (int i = 0; i < song->get_track_count(); i++) { if (i == current_track) { if (song->get_track(i)->is_muted()) { unsolo = false; break; } } else { if (!song->get_track(i)->is_muted()) { unsolo = false; break; } } } undo_redo->begin_action("Solo"); for (int i = 0; i < song->get_track_count(); i++) { if (unsolo) { undo_redo->do_method(song->get_track(i), &Track::set_muted, false); } else { undo_redo->do_method(song->get_track(i), &Track::set_muted, i != current_track); } 
undo_redo->undo_method(song->get_track(i), &Track::set_muted, song->get_track(i)->is_muted()); } undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; case KeyBindings::TRACK_MUTE: { undo_redo->begin_action("Mute"); undo_redo->do_method(song->get_track(current_track), &Track::set_muted, !song->get_track(current_track)->is_muted()); undo_redo->undo_method(song->get_track(current_track), &Track::set_muted, song->get_track(current_track)->is_muted()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; case KeyBindings::TRACK_RENAME: { /*undo_redo->do_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->undo_method(this, &PatternEditor::_notify_track_layout_changed);*/ Gtk::MessageDialog dialog("Enter Track Name:", false /* use_markup */, Gtk::MESSAGE_QUESTION, Gtk::BUTTONS_OK_CANCEL); dialog.set_title("Rename Track"); dialog.set_transient_for(*static_cast<Gtk::Window *>(get_toplevel())); dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); Gtk::Entry entry; entry.set_text(song->get_track(current_track)->get_name().utf8().get_data()); dialog.get_vbox()->pack_start(entry, Gtk::PACK_SHRINK); entry.show(); //make sure pressing enter also causes rename entry.signal_activate().connect(sigc::bind(sigc::mem_fun(dialog, &Gtk::MessageDialog::response), int(Gtk::RESPONSE_OK))); if (dialog.run() == Gtk::RESPONSE_OK) { String name = entry.get_text().c_str(); if (name != song->get_track(current_track)->get_name()) { undo_redo->begin_action("Track Rename"); undo_redo->do_method(song->get_track(current_track), &Track::set_name, name); 
undo_redo->undo_method(song->get_track(current_track), &Track::set_name, song->get_track(current_track)->get_name()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->undo_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->commit_action(); } } } break; case KeyBindings::TRACK_MOVE_LEFT: { ERR_FAIL_COND(current_track == 0); cursor.column -= song->get_track(current_track - 1)->get_event_column_count(); undo_redo->begin_action("Track Move Left"); undo_redo->do_method(song, &Song::swap_tracks, current_track, current_track - 1); undo_redo->undo_method(song, &Song::swap_tracks, current_track, current_track - 1); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->do_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->undo_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->commit_action(); } break; case KeyBindings::TRACK_MOVE_RIGHT: { ERR_FAIL_COND(current_track == song->get_track_count() - 1); cursor.column += song->get_track(current_track + 1)->get_event_column_count(); undo_redo->begin_action("Track Move Right"); undo_redo->do_method(song, &Song::swap_tracks, current_track, current_track + 1); undo_redo->undo_method(song, &Song::swap_tracks, current_track, current_track + 1); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->do_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->undo_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->commit_action(); } 
break; case KeyBindings::TRACK_REMOVE: { undo_redo->begin_action("Remove"); undo_redo->do_method(song, &Song::remove_track, current_track); undo_redo->undo_method(song, &Song::add_track_at_pos, song->get_track(current_track), current_track); for (int i = 0; i < song->get_track_count(); i++) { if (i == current_track) { continue; } Track *track = song->get_track(i); for (int j = 0; j < track->get_send_count(); j++) { int send_to = track->get_send_track(j); if (send_to == Track::SEND_SPEAKERS) { continue; //nothing to do } int removed = 0; if (send_to < current_track) { continue; } else if (send_to > current_track) { undo_redo->do_method(track, &Track::set_send_track, j - removed, send_to - 1); undo_redo->undo_method(track, &Track::set_send_track, j - removed, send_to); } else { //remove undo_redo->do_method(track, &Track::remove_send, j - removed); undo_redo->undo_method(track, &Track::add_send, current_track, j - removed); undo_redo->undo_method(track, &Track::set_send_amount, j - removed, track->get_send_amount(j)); undo_redo->undo_method(track, &Track::set_send_mute, j - removed, track->is_send_muted(j)); removed++; } } } undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->undo_data(song->get_track(current_track)); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->do_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->undo_method(this, &PatternEditor::_notify_track_layout_changed); undo_redo->commit_action(); for (int i = 0; i < song->get_track(current_track)->get_audio_effect_count(); i++) { erase_effect_editor_request.emit(song->get_track(current_track)->get_audio_effect(i)); } } break; case KeyBindings::PATTERN_SELECT_BEGIN: { if (!selection.active) { selection.begin_column = cursor.column; selection.begin_tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.end_column = 
cursor.column; selection.end_tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.row_tick_size = TICKS_PER_BEAT / _get_rows_per_beat(); selection.active = true; _validate_menus(); } else { selection.begin_column = cursor.column; selection.begin_tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.row_tick_size = TICKS_PER_BEAT / _get_rows_per_beat(); _validate_selection(); } queue_draw(); } break; case KeyBindings::PATTERN_SELECT_END: { if (!selection.active) { selection.begin_column = cursor.column; selection.begin_tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.end_column = cursor.column; selection.end_tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.row_tick_size = TICKS_PER_BEAT / _get_rows_per_beat(); selection.active = true; _validate_menus(); } else { selection.end_column = cursor.column; selection.end_tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.row_tick_size = TICKS_PER_BEAT / _get_rows_per_beat(); _validate_selection(); } queue_draw(); } break; case KeyBindings::PATTERN_SELECT_COLUMN_TRACK_ALL: { Tick pattern_ticks = song->pattern_get_beats(current_pattern) * TICKS_PER_BEAT; pattern_ticks -= TICKS_PER_BEAT / _get_rows_per_beat(); int pattern_columns = song->get_event_column_count(); Track *track; int automation; int column; int command; get_cursor_column_data(&track, command, automation, column); ERR_FAIL_COND(!track); int track_begin_column; if (column >= 0) { track_begin_column = cursor.column - column; } else if (command >= 0) { track_begin_column = cursor.column - command - track->get_column_count(); } else { track_begin_column = cursor.column - automation - (track->get_column_count() + track->get_command_column_count()); } int track_event_columns = track->get_event_column_count(); int track_end_column = track_begin_column + track_event_columns - 1; //printf("track begin: %i, track end %i\n", track_begin_column, track_end_column); if (selection.active && 
selection.begin_column == track_begin_column && selection.end_column == track_end_column && selection.begin_tick == 0 && selection.end_tick == pattern_ticks) { //in track (or single column), switch to all (if there is anything to switch to) if (selection.begin_column != 0 || selection.end_column != pattern_columns - 1) { selection.begin_column = 0; selection.end_column = pattern_columns - 1; queue_draw(); break; } } if (selection.active && selection.begin_column == cursor.column && selection.end_column == cursor.column && selection.begin_tick == 0 && selection.end_tick == pattern_ticks) { //in column, switch to track, unless track is a single column selection.begin_column = track_begin_column; selection.end_column = track_end_column; queue_draw(); break; } //select column selection.begin_column = cursor.column; selection.end_column = cursor.column; selection.begin_tick = 0; selection.end_tick = pattern_ticks; selection.row_tick_size = TICKS_PER_BEAT / _get_rows_per_beat(); selection.active = true; _validate_menus(); queue_draw(); } break; case KeyBindings::PATTERN_SELECTION_DISABLE: { selection.active = false; _validate_menus(); queue_draw(); } break; /* selection AREA actions */ case KeyBindings::PATTERN_SELECTION_RAISE_NOTES_SEMITONE: case KeyBindings::PATTERN_SELECTION_RAISE_NOTES_OCTAVE: case KeyBindings::PATTERN_SELECTION_LOWER_NOTES_SEMITONE: case KeyBindings::PATTERN_SELECTION_LOWER_NOTES_OCTAVE: { int amount = 0; String action; switch (p_bind) { case KeyBindings::PATTERN_SELECTION_RAISE_NOTES_SEMITONE: amount = 1; action = "Raise Semitone"; break; case KeyBindings::PATTERN_SELECTION_RAISE_NOTES_OCTAVE: amount = 12; action = "Raise Octave"; break; case KeyBindings::PATTERN_SELECTION_LOWER_NOTES_SEMITONE: amount = -1; action = "Lower Semitone"; break; case KeyBindings::PATTERN_SELECTION_LOWER_NOTES_OCTAVE: amount = -12; action = "Lower Octave"; break; } int column_from; int column_to; Tick tick_from; Tick tick_to; if (selection.active) { column_from = 
selection.begin_column; tick_from = selection.begin_tick; column_to = selection.end_column; tick_to = selection.end_tick + selection.row_tick_size - 1; } else { column_from = cursor.column; column_to = cursor.column; tick_from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); tick_to = tick_from + selection.row_tick_size - 1; } List<Track::PosEvent> events; song->get_events_in_range(current_pattern, Track::Pos(tick_from, column_from), Track::Pos(tick_to, column_to), &events); if (events.empty()) { break; } undo_redo->begin_action(action, true); for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; if (ev.type == Track::Event::TYPE_NOTE && ev.a < Track::Note::MAX_NOTE) { undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); ev.a = CLAMP(int(ev.a + amount), 0, Track::Note::MAX_NOTE); undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } } undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); } break; case KeyBindings::PATTERN_SELECTION_SET_VOLUME: { int column_from; int column_to; Tick tick_from; Tick tick_to; if (selection.active) { column_from = selection.begin_column; tick_from = selection.begin_tick; column_to = selection.end_column; tick_to = selection.end_tick + selection.row_tick_size - 1; } else { column_from = cursor.column; column_to = cursor.column; tick_from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); tick_to = tick_from + selection.row_tick_size - 1; } List<Track::PosEvent> events; song->get_events_in_range(current_pattern, Track::Pos(tick_from, column_from), Track::Pos(tick_to, column_to), &events); if (events.empty()) { break; } undo_redo->begin_action("Set Volume Mask", true); for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; if (ev.type == Track::Event::TYPE_NOTE && ev.a < Track::Note::MAX_NOTE) { 
undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); ev.b = volume_mask; undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } } undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); } break; case KeyBindings::PATTERN_SELECTION_INTERPOLATE_VOLUME_AUTOMATION: { if (!selection.active) { break; } bool valid = false; for (int i = selection.begin_column; i <= selection.end_column; i++) { Track::Event ev_first = song->get_event(current_pattern, i, selection.begin_tick); Track::Event ev_last = song->get_event(current_pattern, i, selection.end_tick); if (ev_first.type == Track::Event::TYPE_NOTE && ev_first.a != Track::Note::EMPTY && ev_last.a != Track::Note::EMPTY) { valid = true; break; } if (ev_first.type == Track::Event::TYPE_AUTOMATION && ev_first.a != Automation::EMPTY && ev_last.a != Automation::EMPTY) { valid = true; break; } if (ev_first.type == Track::Event::TYPE_COMMAND) { valid = true; break; } } if (!valid) { break; } undo_redo->begin_action("Interpolate", true); Tick tick_from = selection.begin_tick; Tick tick_to = selection.end_tick; for (int i = selection.begin_column; i <= selection.end_column; i++) { Track::Event ev_first = song->get_event(current_pattern, i, selection.begin_tick); Track::Event ev_last = song->get_event(current_pattern, i, selection.end_tick); if (ev_first.type == Track::Event::TYPE_NOTE && ev_first.a != Track::Note::EMPTY && ev_last.a != Track::Note::EMPTY) { //interpolate notes List<Track::PosEvent> events; song->get_events_in_range(current_pattern, Track::Pos(tick_from, i), Track::Pos(tick_to, i), &events); int volume_from = ev_first.b == Track::Note::EMPTY ? 99 : ev_first.b; int volume_to = ev_last.b == Track::Note::EMPTY ? 
99 : ev_last.b; for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); float c = float(E->get().pos.tick - tick_from) / float(tick_to - tick_from); int volume = CLAMP(int(volume_from * (1.0 - c) + volume_to * c), 0, 99); ev.b = volume; undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } } if (ev_first.type == Track::Event::TYPE_COMMAND) { //interpolate command parameters List<Track::PosEvent> events; song->get_events_in_range(current_pattern, Track::Pos(tick_from, i), Track::Pos(tick_to, i), &events); int param_from = ev_first.b; int param_to = ev_last.b; for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); float c = float(E->get().pos.tick - tick_from) / float(tick_to - tick_from); int param = CLAMP(int(param_from * (1.0 - c) + param_to * c), 0, 99); ev.b = param; undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } } if (ev_first.type == Track::Event::TYPE_AUTOMATION && ev_first.a != Automation::EMPTY && ev_last.a != Automation::EMPTY) { //interpolate automations int value_from = ev_first.a == Automation::EMPTY ? 99 : ev_first.a; int value_to = ev_last.a == Automation::EMPTY ? 
99 : ev_last.a; Tick tick_increment = TICKS_PER_BEAT / _get_rows_per_beat(); Tick tick_pos = tick_from; while (tick_pos < tick_to) { Track::Event ev = song->get_event(current_pattern, i, tick_pos); undo_redo->undo_method(song, &Song::set_event, current_pattern, i, tick_pos, ev); float c = float(tick_pos - tick_from) / float(tick_to - tick_from); int value = CLAMP(int(value_from * (1.0 - c) + value_to * c), 0, 99); ev.a = value; undo_redo->do_method(song, &Song::set_event, current_pattern, i, tick_pos, ev); tick_pos += tick_increment; } } } undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); } break; case KeyBindings::PATTERN_SELECTION_AMPLIFY_VOLUME_AUTOMATION: { Gtk::MessageDialog dialog("Amplify(%):", false /* use_markup */, Gtk::MESSAGE_QUESTION, Gtk::BUTTONS_OK_CANCEL); dialog.set_title("Amplify Volume / Automation"); dialog.set_transient_for(*static_cast<Gtk::Window *>(get_toplevel())); dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); Gtk::Entry entry; entry.set_text(String::num(last_amplify_value).ascii().get_data()); dialog.get_vbox()->pack_start(entry, Gtk::PACK_SHRINK); entry.show(); //make sure pressing enter also confirms the dialog entry.signal_activate().connect(sigc::bind(sigc::mem_fun(dialog, &Gtk::MessageDialog::response), int(Gtk::RESPONSE_OK))); if (dialog.run() == Gtk::RESPONSE_OK) { int amount = String(entry.get_text().c_str()).to_int(); last_amplify_value = amount; float ratio = float(amount) / 100; int column_from; int column_to; Tick tick_from; Tick tick_to; if (selection.active) { column_from = selection.begin_column; tick_from = selection.begin_tick; column_to = selection.end_column; tick_to = selection.end_tick + selection.row_tick_size - 1; } else { column_from = cursor.column; column_to = cursor.column; tick_from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); tick_to = tick_from + selection.row_tick_size - 1; } List<Track::PosEvent> events; song->get_events_in_range(current_pattern, 
Track::Pos(tick_from, column_from), Track::Pos(tick_to, column_to), &events); if (events.empty()) { break; } undo_redo->begin_action("Amplify", true); for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); if (ev.type == Track::Event::TYPE_NOTE) { if (ev.b == Track::Note::EMPTY) { ev.b = Track::Note::MAX_VOLUME; } ev.b = int(CLAMP(float(ev.b) * ratio, 0, Track::Note::MAX_VOLUME)); undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } else if (ev.type == Track::Event::TYPE_AUTOMATION) { ev.a = int(CLAMP(float(ev.a) * ratio, 0, Automation::VALUE_MAX)); undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } } undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); } } break; case KeyBindings::PATTERN_SELECTION_DOUBLE_LENGTH: case KeyBindings::PATTERN_SELECTION_HALVE_LENGTH: case KeyBindings::PATTERN_SELECTION_SCALE_LENGTH: { if (!selection.active) { break; } float scale = 1.0; String action; switch (p_bind) { case KeyBindings::PATTERN_SELECTION_DOUBLE_LENGTH: scale = 2.0; action = "Double Length"; break; case KeyBindings::PATTERN_SELECTION_HALVE_LENGTH: scale = 0.5; action = "Halve Length"; break; case KeyBindings::PATTERN_SELECTION_SCALE_LENGTH: { action = "Scale Length"; Gtk::MessageDialog dialog("Scale Ratio:", false /* use_markup */, Gtk::MESSAGE_QUESTION, Gtk::BUTTONS_OK_CANCEL); dialog.set_title("Scale Selection Length"); dialog.set_transient_for(*static_cast<Gtk::Window *>(get_toplevel())); dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT); Gtk::Entry entry; entry.set_text(String::num(last_scale_value).ascii().get_data()); dialog.get_vbox()->pack_start(entry, Gtk::PACK_SHRINK); entry.show(); //make sure pressing enter also confirms the dialog 
entry.signal_activate().connect(sigc::bind(sigc::mem_fun(dialog, &Gtk::MessageDialog::response), int(Gtk::RESPONSE_OK))); if (dialog.run() == Gtk::RESPONSE_OK) { scale = String(entry.get_text().c_str()).to_double(); last_scale_value = scale; } else { return; } } break; } int column_from; int column_to; Tick tick_from; Tick tick_to; if (selection.active) { column_from = selection.begin_column; tick_from = selection.begin_tick; column_to = selection.end_column; tick_to = selection.end_tick + selection.row_tick_size - 1; } else { column_from = cursor.column; column_to = cursor.column; tick_from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); tick_to = tick_from + selection.row_tick_size - 1; } List<Track::PosEvent> events; song->get_events_in_range(current_pattern, Track::Pos(tick_from, column_from), Track::Pos(tick_to, column_to), &events); if (events.empty()) { break; } undo_redo->begin_action(action, true); //erase stuff first for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) { Track::Event ev = Track::Event::make_empty(E->get().event.type); undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } //replace Tick max_len = song->pattern_get_beats(current_pattern) * TICKS_PER_BEAT; for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) { Tick new_pos = E->get().pos.tick * scale; if (new_pos >= max_len) { continue; } Track::Event ev = song->get_event(current_pattern, E->get().pos.column, new_pos); undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, new_pos, ev); undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, new_pos, E->get().event); } //readd undone stuff for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, 
&PatternEditor::_redraw); undo_redo->commit_action(); } break; /* CLIPBOARD */ case KeyBindings::PATTERN_SELECTION_CUT: case KeyBindings::PATTERN_SELECTION_COPY: { int column_from = selection.begin_column; int column_to = selection.end_column; Tick tick_from = selection.begin_tick; Tick tick_to = selection.end_tick + selection.row_tick_size - 1; List events; song->get_events_in_range(current_pattern, Track::Pos(tick_from, column_from), Track::Pos(tick_to, column_to), &events); if (events.empty()) { clipboard.active = false; _validate_menus(); break; } clipboard.active = true; clipboard.columns = column_to - column_from + 1; clipboard.ticks = tick_to - tick_from + 1; //else paste insert wont work clipboard.events.clear(); for (List::Element *E = events.front(); E; E = E->next()) { Track::PosEvent pe = E->get(); pe.pos.tick -= tick_from; pe.pos.column -= column_from; clipboard.events.push_back(pe); } if (p_bind == KeyBindings::PATTERN_SELECTION_CUT) { //also cut undo_redo->begin_action("Selection Zap (cut)"); for (List::Element *E = events.front(); E; E = E->next()) { Track::Event ev = E->get().event; undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); ev = Track::Event::make_empty(ev.type); undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); } _validate_menus(); } break; case KeyBindings::PATTERN_SELECTION_PASTE_INSERT: case KeyBindings::PATTERN_SELECTION_PASTE_OVERWRITE: case KeyBindings::PATTERN_SELECTION_PASTE_MIX: { if (!clipboard.active) { break; } String action; switch (p_bind) { case KeyBindings::PATTERN_SELECTION_PASTE_INSERT: action = "Paste Insert"; break; case KeyBindings::PATTERN_SELECTION_PASTE_OVERWRITE: action = "Paste Overwrite"; break; case KeyBindings::PATTERN_SELECTION_PASTE_MIX: action = "Paste Mix"; 
break; } undo_redo->begin_action(action); int last_column = song->get_event_column_count() - 1; //clear space to paste (pre) if (p_bind != KeyBindings::PATTERN_SELECTION_PASTE_MIX) { Tick clear_from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick clear_to = clear_from + clipboard.ticks; int clear_from_colum = cursor.column; int clear_to_column = cursor.column + clipboard.columns - 1; clear_to_column = MIN(clear_to_column, last_column); for (int i = clear_from_colum; i <= clear_to_column; i++) { List events; song->get_events_in_range(current_pattern, Track::Pos(clear_from, i), Track::Pos(clear_to, i), &events); for (List::Element *E = events.front(); E; E = E->next()) { if (E->get().event.type != song->get_event_column_type(i)) { continue; } Track::Event ev = Track::Event::make_empty(E->get().event.type); undo_redo->do_method(song, &Song::set_event, current_pattern, i, E->get().pos.tick, ev); } } } //paste { Tick paste_from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); for (List::Element *E = clipboard.events.front(); E; E = E->next()) { int column = cursor.column + E->get().pos.column; if (column > last_column) { continue; } if (song->get_event_column_type(column) != E->get().event.type) { //different type, do nothing continue; } Tick tick = paste_from + E->get().pos.tick; undo_redo->do_method(song, &Song::set_event, current_pattern, column, tick, E->get().event); undo_redo->undo_method(song, &Song::set_event, current_pattern, column, tick, song->get_event(current_pattern, column, tick)); } } //move everything down on insert if (p_bind == KeyBindings::PATTERN_SELECTION_PASTE_INSERT) { Tick move_from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick move_to = song->pattern_get_beats(current_pattern) * TICKS_PER_BEAT; int move_from_colum = cursor.column; int move_to_column = cursor.column + clipboard.columns - 1; move_to_column = MIN(move_to_column, last_column); for (int i = move_from_colum; i <= move_to_column; i++) { if (move_from + 
clipboard.ticks >= move_to) { continue; //nothing to do } //erase List erase_events; song->get_events_in_range(current_pattern, Track::Pos(move_from + clipboard.ticks, i), Track::Pos(move_to, i), &erase_events); for (List::Element *E = erase_events.front(); E; E = E->next()) { Track::Event ev = Track::Event::make_empty(E->get().event.type); undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev); } //move List move_events; song->get_events_in_range(current_pattern, Track::Pos(move_from, i), Track::Pos(move_to - clipboard.ticks, i), &move_events); for (List::Element *E = move_events.front(); E; E = E->next()) { undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick + clipboard.ticks, E->get().event); Track::Event ev = Track::Event::make_empty(E->get().event.type); undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick + clipboard.ticks, ev); } //restore for (List::Element *E = erase_events.front(); E; E = E->next()) { undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, E->get().event); } } } //clear space to paste (post) if (p_bind != KeyBindings::PATTERN_SELECTION_PASTE_MIX) { Tick clear_from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick clear_to = clear_from + clipboard.ticks; int clear_from_colum = cursor.column; int clear_to_column = cursor.column + clipboard.columns - 1; clear_to_column = MIN(clear_to_column, last_column); for (int i = clear_from_colum; i <= clear_to_column; i++) { List events; song->get_events_in_range(current_pattern, Track::Pos(clear_from, i), Track::Pos(clear_to, i), &events); for (List::Element *E = events.front(); E; E = E->next()) { if (E->get().event.type != song->get_event_column_type(i)) { continue; } undo_redo->undo_method(song, &Song::set_event, current_pattern, i, E->get().pos.tick, E->get().event); } } } 
undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); } break; } int current_automation = song->get_event_column_automation(cursor.column); if (current_automation >= 0) { switch (p_bind) { case KeyBindings::AUTOMATION_RADIO_DISCRETE_ROWS: { Automation *a = song->get_track(current_track) ->get_automation(current_automation); undo_redo->begin_action("Automation Display Numbers"); undo_redo->do_method(a, &Automation::set_edit_mode, Automation::EDIT_ROWS_DISCRETE); undo_redo->undo_method(a, &Automation::set_edit_mode, a->get_edit_mode()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; case KeyBindings::AUTOMATION_RADIO_ENVELOPE_SMALL: { Automation *a = song->get_track(current_track) ->get_automation(current_automation); undo_redo->begin_action("Automation Display Small"); undo_redo->do_method(a, &Automation::set_edit_mode, Automation::EDIT_ENVELOPE_SMALL); undo_redo->undo_method(a, &Automation::set_edit_mode, a->get_edit_mode()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; case KeyBindings::AUTOMATION_RADIO_ENVELOPE_LARGE: { Automation *a = song->get_track(current_track) ->get_automation(current_automation); undo_redo->begin_action("Automation Display Large"); undo_redo->do_method(a, &Automation::set_edit_mode, Automation::EDIT_ENVELOPE_LARGE); undo_redo->undo_method(a, &Automation::set_edit_mode, a->get_edit_mode()); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, 
&PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; case KeyBindings::AUTOMATION_MOVE_LEFT: { ERR_FAIL_COND(current_automation == 0); cursor.column -= 1; undo_redo->begin_action("Automation Move Left"); undo_redo->do_method(song->get_track(current_track), &Track::swap_automations, current_automation, current_automation - 1); undo_redo->undo_method(song->get_track(current_track), &Track::swap_automations, current_automation, current_automation - 1); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; case KeyBindings::AUTOMATION_MOVE_RIGHT: { ERR_FAIL_COND(current_automation == song->get_track(current_track)->get_automation_count() - 1); cursor.column += 1; undo_redo->begin_action("Automation Move Right"); undo_redo->do_method(song->get_track(current_track), &Track::swap_automations, current_automation, current_automation + 1); undo_redo->undo_method(song->get_track(current_track), &Track::swap_automations, current_automation, current_automation + 1); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; case KeyBindings::AUTOMATION_REMOVE: { Automation *a = song->get_track(current_track) ->get_automation(current_automation); undo_redo->begin_action("Remove Automation"); undo_redo->do_method(song->get_track(current_track), &Track::remove_automation, current_automation); undo_redo->undo_method(song->get_track(current_track), &Track::add_automation, a, current_automation); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, 
&PatternEditor::_redraw); undo_redo->do_method(this, &PatternEditor::_validate_menus); undo_redo->undo_method(this, &PatternEditor::_validate_menus); undo_redo->commit_action(); } break; } } } } void PatternEditor::_update_shift_selection() { bool prev_active = selection.active; selection.active = true; selection.begin_column = selection.shift_from_column; selection.begin_tick = selection.shift_from_row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.end_column = cursor.column; selection.end_tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); selection.row_tick_size = TICKS_PER_BEAT / _get_rows_per_beat(); selection.shift_active = true; _validate_selection(); if (prev_active != selection.active) { _validate_menus(); } } bool PatternEditor::on_key_press_event(GdkEventKey *key_event) { bool shift_pressed = key_event->state & GDK_SHIFT_MASK; if (!shift_pressed) { //shift selection will always begin from a previous state selection.shift_active = false; } if (song->get_track_count() == 0) { return false; } if (!selection.shift_active) { selection.shift_from_column = cursor.column; selection.shift_from_row = cursor.row; } if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_UP)) { cursor.row -= cursor_advance; _validate_cursor(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_DOWN)) { cursor.row += cursor_advance; _validate_cursor(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_UP_1_ROW)) { cursor.row -= 1; _validate_cursor(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_DOWN_1_ROW)) { cursor.row += 1; _validate_cursor(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_PAGE_UP)) { cursor.row -= 
song->pattern_get_beats_per_bar(current_pattern) * _get_rows_per_beat(); _validate_cursor(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_PAGE_DOWN)) { cursor.row += song->pattern_get_beats_per_bar(current_pattern) * _get_rows_per_beat(); _validate_cursor(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_LEFT)) { if (cursor.field == 0) { if (cursor.column > 0) { cursor.column--; Track *track; int automation; int command; int column; get_cursor_column_data(&track, command, automation, column); ERR_FAIL_COND_V(!track, false); if (automation >= 0) { if (track->get_automation(automation)->get_edit_mode() != Automation::EDIT_ROWS_DISCRETE) { cursor.field = 0; } else { cursor.field = 1; } } else if (command >= 0) { cursor.field = 2; } else { //note cursor.field = 3; } _validate_menus(); } } else { cursor.field--; } _validate_cursor(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_MOVE_RIGHT)) { Track *track; int automation; int column; int command; get_cursor_column_data(&track, command, automation, column); ERR_FAIL_COND_V(!track, false); int max_field = 1; if (automation >= 0) { if (track->get_automation(automation)->get_edit_mode() != Automation::EDIT_ROWS_DISCRETE) { max_field = 0; } else { max_field = 1; } } else if (command >= 0) { max_field = 2; } else { max_field = 3; } if (cursor.field == max_field) { if (cursor.column < song->get_event_column_count() - 1) { cursor.column++; cursor.field = 0; _validate_menus(); } } else { cursor.field++; } _validate_cursor(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_TAB)) { if (cursor.column < song->get_event_column_count() - 1) { cursor.column++; cursor.field = 0; } _validate_cursor(); _validate_menus(); } 
else if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_BACKTAB)) { if (cursor.field > 0) cursor.field = 0; else if (cursor.column > 0) { cursor.column--; } _validate_cursor(); _validate_menus(); } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_HOME)) { if (cursor.field != 0) { cursor.field = 0; } else { Track *track; int automation; int column; int command; get_cursor_column_data(&track, command, automation, column); ERR_FAIL_COND_V(!track, false); if (column != 0) { if (column >= 0) { cursor.column -= column; } else if (command >= 0) { cursor.column -= track->get_column_count() + command; } else { cursor.column -= automation + track->get_column_count() + track->get_command_column_count(); } } else { if (cursor.column == 0) { cursor.row = 0; } else { cursor.column = 0; } } } _validate_cursor(); _validate_menus(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind_noshift(key_event, KeyBindings::CURSOR_END)) { Track *track; int automation; int column; int command; get_cursor_column_data(&track, command, automation, column); ERR_FAIL_COND_V(!track, false); int pattern_w = song->get_event_column_count(); int event_ofs; if (column >= 0) { event_ofs = column; } else if (command >= 0) { event_ofs = command + track->get_column_count(); } else { event_ofs = automation + track->get_column_count() + track->get_command_column_count(); } int event_total = track->get_event_column_count(); if (event_ofs < event_total - 1) { cursor.column += event_total - event_ofs - 1; cursor.field = 0; } else if (cursor.column < pattern_w - 1) { cursor.column = pattern_w - 1; cursor.field = 0; } else { cursor.row = song->pattern_get_beats(current_pattern) * _get_rows_per_beat() - 1; cursor.field = 0; } _validate_cursor(); _validate_menus(); if (shift_pressed) { _update_shift_selection(); } } else if (key_bindings->is_keybind(key_event, KeyBindings::PATTERN_PAN_WINDOW_UP)) { if (v_offset > 0) { if (v_offset + visible_rows - 1 
== cursor.row)
				cursor.row--;
			v_offset--;
			queue_draw();
		}
	} else if (key_bindings->is_keybind(key_event, KeyBindings::PATTERN_PAN_WINDOW_DOWN)) {

		if (v_offset + visible_rows < get_total_rows()) {
			if (cursor.row <= v_offset) {
				cursor.row++;
			}
			v_offset++;
			queue_draw();
		}
	} else if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_INSERT) || key_bindings->is_keybind(key_event, KeyBindings::CURSOR_TRACK_INSERT)) {

		List<Track::PosEvent> events;
		Track::Pos from;
		from.tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat();
		from.column = cursor.column;
		Track::Pos to;
		to.tick = song->pattern_get_beats(current_pattern) * TICKS_PER_BEAT;
		to.column = cursor.column;

		if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_TRACK_INSERT)) {
			from.column = _cursor_get_track_begin_column();
			to.column = _cursor_get_track_end_column();
		}

		song->get_events_in_range(current_pattern, from, to, &events);

		if (events.size()) {

			undo_redo->begin_action("Insert", true);

			for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) {
				Track::Event ev = Track::Event::make_empty(E->get().event.type);
				undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev);
			}

			for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) {
				Track::Event ev = E->get().event;
				undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick + TICKS_PER_BEAT / _get_rows_per_beat(), ev);
				ev = Track::Event::make_empty(ev.type);
				undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick + TICKS_PER_BEAT / _get_rows_per_beat(), ev);
			}

			for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) {
				Track::Event ev = E->get().event;
				undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev);
			}

			undo_redo->do_method(this, &PatternEditor::_redraw);
			undo_redo->undo_method(this, &PatternEditor::_redraw);
			undo_redo->commit_action();
		}
	} else if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_DELETE) || key_bindings->is_keybind(key_event, KeyBindings::CURSOR_TRACK_DELETE)) {

		List<Track::PosEvent> events;
		Track::Pos from;
		from.tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat();
		from.column = cursor.column;
		Track::Pos to;
		to.tick = song->pattern_get_beats(current_pattern) * TICKS_PER_BEAT;
		to.column = cursor.column;

		if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_TRACK_DELETE)) {
			from.column = _cursor_get_track_begin_column();
			to.column = _cursor_get_track_end_column();
		}

		song->get_events_in_range(current_pattern, from, to, &events);

		if (events.size()) {

			undo_redo->begin_action("Delete", true);

			Tick limit = from.tick;

			for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) {
				Track::Event ev = Track::Event::make_empty(E->get().event.type);
				undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev);
			}

			for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) {
				Tick new_ofs = E->get().pos.tick - TICKS_PER_BEAT / _get_rows_per_beat();
				if (new_ofs < limit)
					continue;
				Track::Event ev = E->get().event;
				undo_redo->do_method(song, &Song::set_event, current_pattern, E->get().pos.column, new_ofs, ev);
				ev = Track::Event::make_empty(ev.type);
				undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, new_ofs, ev);
			}

			for (List<Track::PosEvent>::Element *E = events.front(); E; E = E->next()) {
				Track::Event ev = E->get().event;
				undo_redo->undo_method(song, &Song::set_event, current_pattern, E->get().pos.column, E->get().pos.tick, ev);
			}

			undo_redo->do_method(this, &PatternEditor::_redraw);
			undo_redo->undo_method(this, &PatternEditor::_redraw);
			undo_redo->commit_action();
		}
	} else if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_TOGGLE_VOLUME_MASK)) {

		Track *track;
		int automation;
		int column;
		int command;
		get_cursor_column_data(&track, command, automation, column);
		ERR_FAIL_COND_V(!track, false);

		if (column >= 0) {
			volume_mask_active =
!volume_mask_active; volume_mask_changed.emit(); } } else if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_COPY_VOLUME_MASK)) { Track *track; int automation; int column; int command; get_cursor_column_data(&track, command, automation, column); ERR_FAIL_COND_V(!track, false); if (column >= 0) { Track::Event ev = song->get_event(current_pattern, cursor.column, cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()); if (ev.b != Track::Note::EMPTY) { volume_mask = ev.b; volume_mask_changed.emit(); } } } else if (key_bindings->is_keybind(key_event, KeyBindings::PATTERN_OCTAVE_LOWER)) { if (current_octave > 0) { current_octave--; octave_changed.emit(); } } else if (key_bindings->is_keybind(key_event, KeyBindings::PATTERN_OCTAVE_RAISE)) { if (current_octave < 8) { current_octave++; octave_changed.emit(); } } else if (key_bindings->is_keybind(key_event, KeyBindings::PATTERN_PREV_PATTERN)) { if (current_pattern > 0) { current_pattern--; pattern_changed.emit(); _validate_cursor(); queue_draw(); } } else if (key_bindings->is_keybind(key_event, KeyBindings::PATTERN_NEXT_PATTERN)) { if (current_pattern < Song::MAX_PATTERN - 1) { current_pattern++; pattern_changed.emit(); _validate_cursor(); queue_draw(); } } else if (cursor.field == 0 && song->get_event_column_type(cursor.column) == Track::Event::TYPE_NOTE && key_bindings->is_keybind(key_event, KeyBindings::CURSOR_PLAY_NOTE)) { Tick from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick to = (cursor.row + 1) * TICKS_PER_BEAT / _get_rows_per_beat(); song->play_event_range(current_pattern, cursor.column, cursor.column, from, to); _cursor_advance(); _validate_cursor(); } else if (cursor.field == 0 && song->get_event_column_type(cursor.column) == Track::Event::TYPE_NOTE && key_bindings->is_keybind(key_event, KeyBindings::CURSOR_PLAY_ROW)) { Tick from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick to = (cursor.row + 1) * TICKS_PER_BEAT / _get_rows_per_beat(); song->play_event_range(current_pattern, 0, 
song->get_event_column_count() - 1, from, to); _cursor_advance(); queue_draw(); } else { //set step or zoom for (int i = 0; i < 10; i++) { if (key_bindings->is_keybind(key_event, KeyBindings::KeyBind(KeyBindings::CURSOR_ADVANCE_1 + i))) { cursor_advance = i + 1; step_changed.emit(); return true; } if (key_bindings->is_keybind(key_event, KeyBindings::KeyBind(KeyBindings::CURSOR_ZOOM_1 + i))) { Tick cursor_on_tick = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); int ofs_v = cursor.row - v_offset; beat_zoom = BeatZoom(i); cursor.row = cursor_on_tick * _get_rows_per_beat() / TICKS_PER_BEAT; v_offset = MAX(0, cursor.row - ofs_v); zoom_changed.emit(); queue_draw(); return true; } } // check field Track *track; int automation; int column; int command; get_cursor_column_data(&track, command, automation, column); ERR_FAIL_COND_V(!track, false); if (column >= 0) { if (cursor.field == 0) { // put a note for (int i = KeyBindings::PIANO_C0; i <= KeyBindings::PIANO_E2; i++) { if (key_bindings->is_keybind(key_event, KeyBindings::KeyBind(i))) { int note = i - KeyBindings::PIANO_C0; Track::Event ev = song->get_event(current_pattern, cursor.column, cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()); if (volume_mask_active) { ev.b = volume_mask; } Track::Event old_ev = ev; ev.a = current_octave * 12 + note; if (ev.a >= 120) { ev.a = 119; } undo_redo->begin_action("Add Note"); undo_redo->do_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), ev); undo_redo->undo_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); { //preview play Tick from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick to = (cursor.row + 1) * TICKS_PER_BEAT / _get_rows_per_beat(); song->play_event_range(current_pattern, 
cursor.column, cursor.column, from, to); } _cursor_advance(); _validate_cursor(); return true; } } if (key_bindings->is_keybind(key_event, KeyBindings::PATTERN_CURSOR_NOTE_OFF)) { Track::Event ev = song->get_event(current_pattern, cursor.column, cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()); Track::Event old_ev = ev; ev.a = Track::Note::OFF; undo_redo->begin_action("Add Note Off"); undo_redo->do_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), ev); undo_redo->undo_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); { //preview play Tick from = cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick to = (cursor.row + 1) * TICKS_PER_BEAT / _get_rows_per_beat(); song->play_event_range(current_pattern, cursor.column, cursor.column, from, to); } _cursor_advance(); _validate_cursor(); } if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_FIELD_CLEAR)) { _field_clear(); } } else if (cursor.field == 1) { // put octave if ((key_event->keyval >= GDK_KEY_0 && key_event->keyval <= GDK_KEY_9)) { int octave = key_event->keyval - GDK_KEY_0; Track::Event ev = song->get_event(current_pattern, cursor.column, cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()); Track::Event old_ev = ev; if (old_ev.a == Track::Note::EMPTY) { return true; // no add octave on empty } ev.a = (ev.a % 12) + octave * 12; undo_redo->begin_action("Set Octave"); undo_redo->do_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), ev); undo_redo->undo_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); 
undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); _cursor_advance(); _validate_cursor(); return true; } if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_FIELD_CLEAR)) { _field_clear(); } } else if (cursor.field == 2) { // put volume 1 if (key_event->keyval >= GDK_KEY_0 && key_event->keyval <= GDK_KEY_9) { int vol_g = key_event->keyval - GDK_KEY_0; Track::Event ev = song->get_event(current_pattern, cursor.column, cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()); Track::Event old_ev = ev; if (old_ev.b == Track::Note::EMPTY) { ev.b = 0; } ev.b = (ev.b % 10) + vol_g * 10; volume_mask = ev.b; volume_mask_changed.emit(); undo_redo->begin_action("Set Volume"); undo_redo->do_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), ev); undo_redo->undo_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); cursor.field = 3; return true; } if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_FIELD_CLEAR)) { _field_clear(); } } else if (cursor.field == 3) { // put volume 2 if (key_event->keyval >= GDK_KEY_0 && key_event->keyval <= GDK_KEY_9) { int vol_l = key_event->keyval - GDK_KEY_0; Track::Event ev = song->get_event(current_pattern, cursor.column, cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()); Track::Event old_ev = ev; if (old_ev.b == Track::Note::EMPTY) { ev.b = 0; } ev.b -= (ev.b % 10); ev.b += vol_l; volume_mask = ev.b; volume_mask_changed.emit(); undo_redo->begin_action("Set Volume"); undo_redo->do_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), ev); undo_redo->undo_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / 
_get_rows_per_beat()), old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); cursor.field = 2; _cursor_advance(); _validate_cursor(); return true; } if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_FIELD_CLEAR)) { _field_clear(); } } } else if (command >= 0) { if (cursor.field == 0) { //command if ((key_event->keyval >= GDK_KEY_a && key_event->keyval <= GDK_KEY_z)) { int command = 'a' + (key_event->keyval - GDK_KEY_a); Track::Event ev = song->get_event(current_pattern, cursor.column, cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()); Track::Event old_ev = ev; ev.a = command; undo_redo->begin_action("Set Command"); undo_redo->do_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), ev); undo_redo->undo_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); _cursor_advance(); _validate_cursor(); return true; } if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_FIELD_CLEAR)) { _field_clear(); } } else { //parameter if (key_event->keyval >= GDK_KEY_0 && key_event->keyval <= GDK_KEY_9) { int param = key_event->keyval - GDK_KEY_0; Track::Event ev = song->get_event(current_pattern, cursor.column, cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()); Track::Event old_ev = ev; if (cursor.field == 1) { ev.b = param * 10 + (ev.b % 10); } else { ev.b = ((ev.b / 10) % 10) * 10 + param; } undo_redo->begin_action("Set Parameter"); undo_redo->do_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), ev); undo_redo->undo_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / 
_get_rows_per_beat()), old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); if (cursor.field == 1) { cursor.field = 2; } else { cursor.field = 1; _cursor_advance(); _validate_cursor(); } return true; } if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_FIELD_CLEAR)) { _field_clear(); } } } else if (automation >= 0) { Automation *a = track->get_automation(automation); if (a->get_edit_mode() == Automation::EDIT_ROWS_DISCRETE) { if (key_event->keyval >= GDK_KEY_0 && key_event->keyval <= GDK_KEY_9) { Track::Event ev = song->get_event(current_pattern, cursor.column, cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()); Track::Event old_ev = ev; if (ev.a == Automation::EMPTY) { ev.a = 0; } if (cursor.field == 0) { ev.a = (key_event->keyval - GDK_KEY_0) * 10 + ev.a % 10; } else { ev.a = (key_event->keyval - GDK_KEY_0) + (ev.a / 10) * 10; } undo_redo->begin_action("Set Automation Point"); undo_redo->do_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), ev); undo_redo->undo_method( song, &Song::set_event, current_pattern, cursor.column, Tick(cursor.row * TICKS_PER_BEAT / _get_rows_per_beat()), old_ev); undo_redo->do_method(this, &PatternEditor::_redraw); undo_redo->undo_method(this, &PatternEditor::_redraw); undo_redo->commit_action(); if (cursor.field == 0) { cursor.field = 1; } else { cursor.field = 0; _cursor_advance(); _validate_cursor(); } } } if (key_bindings->is_keybind(key_event, KeyBindings::CURSOR_FIELD_CLEAR)) { _field_clear(); } } return false; } return true; } bool PatternEditor::on_key_release_event(GdkEventKey *key_event) { bool shift_pressed = key_event->state & GDK_SHIFT_MASK; if (!shift_pressed) { selection.shift_active = false; } return false; } Gtk::SizeRequestMode PatternEditor::get_request_mode_vfunc() const { // Accept the default value supplied by the base class. 
	return Gtk::Widget::get_request_mode_vfunc();
}

// Report the total amount of minimum space and natural space needed by
// this widget: it always requests a minimum and natural size of 64 by 64.
void PatternEditor::get_preferred_width_vfunc(int &minimum_width, int &natural_width) const {
	minimum_width = 64;
	natural_width = 64;
}

void PatternEditor::get_preferred_height_for_width_vfunc(int /* width */, int &minimum_height, int &natural_height) const {
	minimum_height = 64;
	natural_height = 64;
}

void PatternEditor::get_preferred_height_vfunc(int &minimum_height, int &natural_height) const {
	minimum_height = 64;
	natural_height = 64;
}

void PatternEditor::get_preferred_width_for_height_vfunc(int /* height */, int &minimum_width, int &natural_width) const {
	minimum_width = 64;
	natural_width = 64;
}

void PatternEditor::on_size_allocate(Gtk::Allocation &allocation) {
	// Do something with the space that we have actually been given:
	// (we will not be given heights or widths less than we have requested,
	// though we might get more)

	// Use the offered allocation for this container:
	set_allocation(allocation);

	if (m_refGdkWindow) {
		m_refGdkWindow->move_resize(allocation.get_x(), allocation.get_y(), allocation.get_width(), allocation.get_height());
	}
}

void PatternEditor::on_map() {
	// Call base class:
	Gtk::Widget::on_map();
}

void PatternEditor::on_unmap() {
	// Call base class:
	Gtk::Widget::on_unmap();
}

void PatternEditor::on_realize() {
	// Do not call base class Gtk::Widget::on_realize().
	// It's intended only for widgets that set_has_window(false).
set_realized(); if (!m_refGdkWindow) { // Create the GdkWindow: GdkWindowAttr attributes; memset(&attributes, 0, sizeof(attributes)); Gtk::Allocation allocation = get_allocation(); // Set initial position and size of the Gdk::Window: attributes.x = allocation.get_x(); attributes.y = allocation.get_y(); attributes.width = allocation.get_width(); attributes.height = allocation.get_height(); attributes.event_mask = get_events() | Gdk::EXPOSURE_MASK | Gdk::SCROLL_MASK | Gdk::BUTTON_PRESS_MASK | Gdk::BUTTON_RELEASE_MASK | Gdk::BUTTON1_MOTION_MASK | Gdk::KEY_PRESS_MASK | Gdk::KEY_RELEASE_MASK; attributes.window_type = GDK_WINDOW_CHILD; attributes.wclass = GDK_INPUT_OUTPUT; m_refGdkWindow = Gdk::Window::create(get_parent_window(), &attributes, GDK_WA_X | GDK_WA_Y); set_window(m_refGdkWindow); // make the widget receive expose events m_refGdkWindow->set_user_data(gobj()); } } void PatternEditor::on_unrealize() { m_refGdkWindow.reset(); // Call base class: Gtk::Widget::on_unrealize(); } void PatternEditor::_draw_text(const Cairo::RefPtr &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->move_to(x, y); if (p_down) cr->rotate_degrees(90); cr->show_text(p_text.utf8().get_data()); if (p_down) cr->rotate_degrees(-90); cr->move_to(0, 0); cr->stroke(); } void PatternEditor::_draw_fill_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->rectangle(x, y, w, h); cr->fill(); cr->stroke(); } void PatternEditor::_draw_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->rectangle(x, y, w, h); cr->stroke(); } void PatternEditor::_draw_arrow(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->move_to(x + w / 4, y + h / 4); cr->line_to(x + w * 3 / 4, y + h / 4); cr->line_to(x + w / 2, 
y + h * 3 / 4); cr->line_to(x + w / 4, y + h / 4); cr->fill(); cr->stroke(); } void PatternEditor::_v_scroll_changed() { if (drawing) { return; } v_offset = v_scroll->get_value(); queue_draw(); } void PatternEditor::_h_scroll_changed() { if (drawing) { return; } int offset = h_scroll->get_value(); h_offset = 0; while (true) { if (h_offset == song->get_event_column_count() - 1) { break; } if (offset < get_column_offset(h_offset)) { break; } h_offset++; } queue_draw(); } int PatternEditor::get_column_offset(int p_column) { int offset = 0; int idx = 0; int track_sep = theme->constants[Theme::CONSTANT_PATTERN_EDITOR_TRACK_SEPARATION]; for (int i = 0; i < song->get_track_count(); i++) { Track *t = song->get_track(i); for (int j = 0; j < t->get_column_count(); j++) { if (idx == p_column + 1) { return offset; } int cw = fw_cache * 6; //base column width if (j == 0) { cw += fh_cache; //name } else { cw += fw_cache; //separator } offset += cw; idx++; } for (int j = 0; j < t->get_command_column_count(); j++) { if (idx == p_column + 1) { return offset; } int cw = fw_cache * 4; //base column width offset += cw; idx++; } for (int j = 0; j < t->get_automation_count(); j++) { if (idx == p_column + 1) { return offset; } Automation *a = t->get_automation(j); int cw = fh_cache; //base column width switch (a->get_edit_mode()) { case Automation::EDIT_ROWS_DISCRETE: { cw += fw_cache * 2; } break; case Automation::EDIT_ENVELOPE_SMALL: { cw += fw_cache * 4; } break; case Automation::EDIT_ENVELOPE_LARGE: { cw += fw_cache * 8; } break; } idx++; offset += cw; } offset += track_sep; } return offset; } bool PatternEditor::on_draw(const Cairo::RefPtr &cr) { drawing = true; const Gtk::Allocation allocation = get_allocation(); int w = allocation.get_width(); int h = allocation.get_height(); Gdk::Cairo::set_source_rgba(cr, theme->colors[Theme::COLOR_BACKGROUND]); cr->rectangle(0, 0, w, h); cr->fill(); theme->select_font_face(cr); Cairo::FontExtents fe; cr->get_font_extents(fe); // Believe it or 
not, this is the only reliable way to get the width of // a monospace char in GTK. Yes. Cairo::TextExtents te; cr->get_text_extents("XXX", te); int fw = te.width; cr->get_text_extents("XX", te); fw -= te.width; int fh = fe.height; int fa = fe.ascent; int sep = 1; fh += sep; fw_cache = fw; fh_cache = fh; row_height_cache = fh; int top_ofs = 4; row_top_ofs = top_ofs; Gdk::RGBA track_sep_color = theme->colors[Theme::COLOR_PATTERN_EDITOR_TRACK_SEPARATOR]; Gdk::RGBA cursorcol = theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR]; int track_sep_w = theme->constants[Theme::CONSTANT_PATTERN_EDITOR_TRACK_SEPARATION]; click_areas.clear(); visible_rows = (h - top_ofs) / fh; int beats_per_bar = song->pattern_get_beats_per_bar(current_pattern); int pattern_length = song->pattern_get_beats(current_pattern) * _get_rows_per_beat(); for (int i = 0; i < visible_rows; i++) { int row = v_offset + i; int beat = row / _get_rows_per_beat(); int subbeat = row % _get_rows_per_beat(); if (row >= pattern_length) { break; } bool is_playing = playback_pattern == current_pattern && playback_row == row; if (subbeat == 0 || i == 0) { if (beat % beats_per_bar == 0) _draw_text(cr, 0, top_ofs + i * fh + fa, String::num(beat), is_playing ? theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR] : theme->colors[Theme::COLOR_PATTERN_EDITOR_ROW_BAR]); else _draw_text(cr, 0, top_ofs + i * fh + fa, String::num(beat), is_playing ? theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR] : theme->colors[Theme::COLOR_PATTERN_EDITOR_ROW_BEAT]); } else { char text[3] = { '0' + (subbeat / 10), '0' + (subbeat % 10), 0 }; _draw_text(cr, fw, top_ofs + i * fh + fa, text, is_playing ?
theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR] : theme->colors[Theme::COLOR_PATTERN_EDITOR_ROW_SUB_BEAT]); } } int ofs = fw * 4; Gdk::RGBA bgcol = theme->colors[Theme::COLOR_PATTERN_EDITOR_BG]; _draw_fill_rect(cr, ofs, 0, w - ofs, h, bgcol); int idx = 0; for (int i = 0; i < song->get_track_count(); i++) { Track *t = song->get_track(i); bool drawn = false; for (int j = 0; j < t->get_column_count(); j++) { if (idx < h_offset) { idx++; continue; } if (j == 0) { int as = 2; Gdk::RGBA c = theme->colors[Theme::COLOR_PATTERN_EDITOR_TRACK_NAME]; if (t->is_muted()) { c.set_alpha(c.get_alpha() * 0.5); } _draw_text(cr, ofs + fh - fa, top_ofs + as, t->get_name(), c, true); ofs += fh; } { // fill fields for click areas ClickArea ca; ca.column = idx; ClickArea::Field f; f.width = fw; f.x = ofs; ca.fields.push_back(f); f.x += fw * 2; ca.fields.push_back(f); f.x += fw * 2; ca.fields.push_back(f); f.x += fw; ca.fields.push_back(f); click_areas.push_back(ca); } drawn = true; int extrahl = (j < t->get_column_count() - 1) ? 
fw : 0; for (int k = 0; k < visible_rows; k++) { char rowstr[7] = { '.', '.', '.', ' ', '.', '.', 0 }; int row = v_offset + k; if (row >= pattern_length) { break; } int beat = row / _get_rows_per_beat(); int subbeat = row % _get_rows_per_beat(); Tick from_tick = row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick to_tick = (row + 1) * TICKS_PER_BEAT / _get_rows_per_beat(); bool bg_selected = _is_in_selection(idx, from_tick); Gdk::RGBA c = theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE]; Gdk::RGBA csel = theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE_SELECTED]; if (t->is_muted()) { c.set_alpha(c.get_alpha() * 0.5); csel.set_alpha(csel.get_alpha() * 0.5); } Gdk::RGBA bgc = bgcol; if (bg_selected) { bgc = theme->colors[Theme::COLOR_PATTERN_EDITOR_BG_SELECTED]; if (subbeat == 0 || k == 0) { if ((beat % beats_per_bar) == 0) _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 6 + extrahl, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BAR_SELECTED]); else _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 6 + extrahl, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BEAT_SELECTED]); } else { _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 6 + extrahl, fh, bgc); } } else if (subbeat == 0 || k == 0) { if ((beat % beats_per_bar) == 0) _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 6 + extrahl, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BAR]); else _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 6 + extrahl, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BEAT]); } Track::Pos from, to; from.tick = from_tick; from.column = j; to.tick = to_tick; to.column = j; List<Track::PosNote> pn; t->get_notes_in_range(current_pattern, from, to, &pn); Vector<Track::PosNote> valid; for (List<Track::PosNote>::Element *E = pn.front(); E; E = E->next()) { if (E->get().pos.column != j || E->get().pos.tick >= to_tick) continue; valid.push_back(E->get()); } if (valid.size() == 0) { _draw_text(cr, ofs, top_ofs + k * fh + fa, rowstr, bg_selected ?
csel : c); } else if (valid.size() == 1) { Track::PosNote n = pn.front()->get(); if (_is_in_selection(idx, n.pos.tick)) { //in-selection c = csel; } else if (n.pos.tick != from.tick) { c = theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE_NOFIT]; if (t->is_muted()) { c.set_alpha(c.get_alpha() * 0.5); } } if (n.note.note == Track::Note::OFF) { rowstr[0] = '='; rowstr[1] = '='; rowstr[2] = '='; } else if (n.note.note < 120) { static const char *note[12] = { "C-", "C#", "D-", "D#", "E-", "F-", "F#", "G-", "G#", "A-", "A#", "B-" }; static const char octave[12] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' }; rowstr[0] = note[n.note.note % 12][0]; rowstr[1] = note[n.note.note % 12][1]; rowstr[2] = octave[n.note.note / 12]; // octave } if (n.note.volume < 100) { rowstr[4] = '0' + n.note.volume / 10; rowstr[5] = '0' + n.note.volume % 10; } _draw_text(cr, ofs, top_ofs + k * fh + fa, rowstr, c); } else { int base_x = ofs; int base_y = top_ofs + k * fh; for (int l = 0; l < valid.size(); l++) { int h = (fh - 2) / valid.size(); int w = fw * 3 - 2; int vw = fw * 2 - 2; Gdk::RGBA col = c; if (_is_in_selection(idx, valid[l].pos.tick)) { col = csel; } if (valid[l].note.note < 120) { _draw_fill_rect(cr, base_x, base_y + h * l, fw * 3, h - 1, col); _draw_rect(cr, base_x + valid[l].note.note * w / 120, base_y + 1 + h * l, 2, h - 2, bgc); } else if (valid[l].note.note == Track::Note::OFF) { _draw_rect(cr, base_x + 0, base_y + 1 + h * l, fw * 3, 1, col); _draw_rect(cr, base_x, base_y + 1 + h * l + h - 2, fw * 3, 1, col); } if (valid[l].note.volume < 100) { _draw_fill_rect(cr, base_x + fw * 4, base_y + h * l, fw * 2, h - 1, col); _draw_rect(cr, base_x + fw * 4 + valid[l].note.volume * vw / 100, base_y + 1 + h * l, 2, h - 2, bgc); } } } if (has_focus() && idx == cursor.column && cursor.row == row) { int field_ofs[4] = { 0, 2, 4, 5 }; int cursor_x = ofs + field_ofs[cursor.field] * fw; int cursor_y = top_ofs + k * fh - sep; _draw_rect(cr, cursor_x - 1, cursor_y - 1, fw + 1, fh + 1, 
cursorcol); } } if (j < t->get_column_count() - 1) ofs += fw; // ofs += fw * 6; idx++; } for (int j = 0; j < t->get_command_column_count(); j++) { if (idx < h_offset) { idx++; continue; } ofs += fw; { // fill fields for click areas ClickArea ca; ca.column = idx; ClickArea::Field f; f.width = fw; f.x = ofs; ca.fields.push_back(f); f.x += fw; ca.fields.push_back(f); f.x += fw; ca.fields.push_back(f); click_areas.push_back(ca); } drawn = true; int extrahl = (j < t->get_column_count() - 1) ? fw : 0; for (int k = 0; k < visible_rows; k++) { char rowstr[4] = { '.', '0', '0', 0 }; int row = v_offset + k; if (row >= pattern_length) { break; } int beat = row / _get_rows_per_beat(); int subbeat = row % _get_rows_per_beat(); Tick from_tick = row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick to_tick = (row + 1) * TICKS_PER_BEAT / _get_rows_per_beat(); bool bg_selected = _is_in_selection(idx, from_tick); Gdk::RGBA c = theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE]; Gdk::RGBA csel = theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE_SELECTED]; if (t->is_muted()) { c.set_alpha(c.get_alpha() * 0.5); csel.set_alpha(csel.get_alpha() * 0.5); } Gdk::RGBA bgc = bgcol; if (bg_selected) { bgc = theme->colors[Theme::COLOR_PATTERN_EDITOR_BG]; if (subbeat == 0 || k == 0) { if ((beat % beats_per_bar) == 0) _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 3 + extrahl, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BAR_SELECTED]); else _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 3 + extrahl, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BEAT_SELECTED]); } else { _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 3 + extrahl, fh, bgc); } } else if (subbeat == 0 || k == 0) { if ((beat % beats_per_bar) == 0) _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 3 + extrahl, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BAR]); else _draw_fill_rect(cr, ofs, top_ofs + k * fh - sep, fw * 3 + extrahl, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BEAT]); } Track::Pos from, 
to; from.tick = from_tick; from.column = j; to.tick = to_tick; to.column = j; List<Track::PosCommand> pc; t->get_commands_in_range(current_pattern, from, to, &pc); Vector<Track::PosCommand> valid; for (List<Track::PosCommand>::Element *E = pc.front(); E; E = E->next()) { if (E->get().pos.column != j || E->get().pos.tick >= to_tick) continue; valid.push_back(E->get()); } if (valid.size() == 0) { _draw_text(cr, ofs, top_ofs + k * fh + fa, rowstr, bg_selected ? csel : c); } else if (valid.size() == 1) { Track::PosCommand n = pc.front()->get(); if (_is_in_selection(idx, n.pos.tick)) { //in-selection c = csel; } else if (n.pos.tick != from.tick) { c = theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE_NOFIT]; if (t->is_muted()) { c.set_alpha(c.get_alpha() * 0.5); } } if (n.command.command == Track::Command::EMPTY) { rowstr[0] = '.'; } else if (n.command.command < 120) { rowstr[0] = n.command.command; //it's actual ASCII! if (rowstr[0] >= 'a' && rowstr[0] <= 'z') { rowstr[0] = 'A' + (rowstr[0] - 'a'); //uppercase it } } if (n.command.parameter < 100) { //should always be true, but..
rowstr[1] = '0' + n.command.parameter / 10; rowstr[2] = '0' + n.command.parameter % 10; } _draw_text(cr, ofs, top_ofs + k * fh + fa, rowstr, c); } else { int base_x = ofs; int base_y = top_ofs + k * fh; for (int l = 0; l < valid.size(); l++) { int h = (fh - 2) / valid.size(); int w = fw * 3 - 2; int vw = fw * 2 - 2; Gdk::RGBA col = c; if (_is_in_selection(idx, valid[l].pos.tick)) { col = csel; } if (valid[l].command.parameter < 120) { _draw_fill_rect(cr, base_x, base_y + h * l, fw * 3, h - 1, col); _draw_rect(cr, base_x + valid[l].command.command * w / 99, base_y + 1 + h * l, 2, h - 2, bgc); } } } if (has_focus() && idx == cursor.column && cursor.row == row) { int field_ofs[3] = { 0, 1, 2 }; int cursor_x = ofs + field_ofs[cursor.field] * fw; int cursor_y = top_ofs + k * fh - sep; _draw_rect(cr, cursor_x - 1, cursor_y - 1, fw + 1, fh + 1, cursorcol); } } ofs += fw * 3; idx++; } for (int j = 0; j < t->get_automation_count(); j++) { Automation *a = t->get_automation(j); if (idx < h_offset) { idx++; continue; } { int as = 2; Gdk::RGBA c = theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_NAME]; if (t->is_muted()) { c.set_alpha(c.get_alpha() * 0.5); } _draw_text(cr, ofs + fh - fa, top_ofs + as, a->get_control_port()->get_name().utf8().get_data(), c, true); } ofs += fh; switch (a->get_edit_mode()) { case Automation::EDIT_ROWS_DISCRETE: { { // fill fields for click areas ClickArea ca; ca.column = idx; ClickArea::Field f; f.width = fw; f.x = ofs; ca.fields.push_back(f); f.x += fw; ca.fields.push_back(f); click_areas.push_back(ca); } for (int k = 0; k < visible_rows; k++) { char rowstr[3] = { '.', '.', 0 }; int row = v_offset + k; if (row >= pattern_length) { break; } int beat = row / _get_rows_per_beat(); int subbeat = row % _get_rows_per_beat(); Tick from = row * TICKS_PER_BEAT / _get_rows_per_beat(); Tick to = (row + 1) * TICKS_PER_BEAT / _get_rows_per_beat(); bool bg_selected = _is_in_selection(idx, from); Gdk::RGBA c =
theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_VALUE]; Gdk::RGBA csel = theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_VALUE_SELECTED]; if (t->is_muted()) { c.set_alpha(c.get_alpha() * 0.5); csel.set_alpha(csel.get_alpha() * 0.5); } Gdk::RGBA bgc = bgcol; if (bg_selected) { bgc = theme->colors[Theme::COLOR_PATTERN_EDITOR_BG_SELECTED]; if (subbeat == 0 || k == 0) { if ((beat % beats_per_bar) == 0) _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, fw * 2, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BAR_SELECTED]); else _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, fw * 2, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT_SELECTED]); } else { _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, fw * 2, fh, bgc); } } else if (subbeat == 0 || k == 0) { if ((beat % beats_per_bar) == 0) _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, fw * 2, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BAR]); else _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, fw * 2, fh, theme ->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT]); } int first; int count; a->get_points_in_range(current_pattern, from, to, first, count); if (count == 0) { _draw_text(cr, ofs, top_ofs + k * fh + fa, rowstr, bg_selected ? 
csel : c); } else if (count == 1) { int val = a->get_point_by_index(current_pattern, first); Tick point_tick = a->get_point_tick_by_index(current_pattern, first); if (_is_in_selection(idx, point_tick)) { c = csel; } else if (point_tick != from) { c = theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_VALUE_NOFIT]; if (t->is_muted()) { c.set_alpha(c.get_alpha() * 0.5); } } rowstr[0] = '0' + val / 10; rowstr[1] = '0' + val % 10; _draw_text(cr, ofs, top_ofs + k * fh + fa, rowstr, c); } else { int base_x = ofs; int base_y = top_ofs + k * fh; for (int l = 0; l < count; l++) { int h = (fh - 2) / count; int w = fw * 2 - 2; Tick point_tick = a->get_point_tick_by_index(current_pattern, first + l); int val = a->get_point_by_index(current_pattern, first + l); Gdk::RGBA col; if (_is_in_selection(idx, point_tick)) { col = csel; } else { col = c; } _draw_fill_rect(cr, base_x, base_y + h * l, fw * 2, h - 1, col); _draw_rect(cr, base_x + val * w / Automation::VALUE_MAX, base_y + 1 + h * l, 2, h - 2, bgc); } } if (has_focus() && idx == cursor.column && cursor.row == row) { int cursor_pos_x = ofs + cursor.field * fw; int cursor_pos_y = top_ofs + k * fh - sep; _draw_rect(cr, cursor_pos_x - 1, cursor_pos_y - 1, fw + 1, fh + 1, cursorcol); } } ofs += fw * 2; } break; case Automation::EDIT_ENVELOPE_SMALL: case Automation::EDIT_ENVELOPE_LARGE: { int w = a->get_edit_mode() == Automation::EDIT_ENVELOPE_SMALL ? 
4 : 8; w *= fw; Gdk::RGBA c = theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_VALUE]; Gdk::RGBA csel = theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_VALUE_SELECTED]; if (t->is_muted()) { c.set_alpha(c.get_alpha() * 0.5); csel.set_alpha(csel.get_alpha() * 0.5); } Gdk::RGBA cpoint = theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_POINT]; if (t->is_muted()) { cpoint.set_alpha(cpoint.get_alpha() * 0.5); } // fill fields for click areas ClickArea ca; ca.column = idx; ClickArea::Field f; f.width = w; f.x = ofs; ca.fields.push_back(f); for (int k = 0; k < visible_rows; k++) { int row = v_offset + k; if (row >= pattern_length) { break; } int beat = row / _get_rows_per_beat(); int subbeat = row % _get_rows_per_beat(); Tick from = row * TICKS_PER_BEAT / _get_rows_per_beat(); bool bg_selected = _is_in_selection(idx, from); bool draw_outline = false; if (bg_selected) { if (subbeat == 0 || k == 0) { if ((beat % beats_per_bar) == 0) _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, w, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BAR_SELECTED]); else _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, w, fh, theme ->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT_SELECTED]); } else { _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, w, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_BG_SELECTED]); draw_outline = true; } } else if (subbeat == 0 || k == 0) { if ((beat % beats_per_bar) == 0) _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, w, fh, theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BAR]); else _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, w, fh, theme ->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT]); } else { draw_outline = true; } if (draw_outline) { Gdk::RGBA col = bg_selected ? 
theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT_SELECTED] : theme->colors[Theme::COLOR_PATTERN_EDITOR_AUTOMATION_HL_BEAT]; _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep + fh - 1, w, 1, col); _draw_fill_rect( cr, ofs, top_ofs + k * fh - sep, 1, fh, col); _draw_fill_rect( cr, ofs + w - 1, top_ofs + k * fh - sep, 1, fh, col); } float prev = -1; for (int l = 0; l < fh; l++) { Tick at = (fh * row + l) * (TICKS_PER_BEAT / _get_rows_per_beat()) / fh; bool selected = _is_in_selection(idx, at); float tofs = a->interpolate_offset(current_pattern, at); if (prev == -1) prev = tofs; if (tofs >= 0) _draw_fill_rect(cr, ofs + tofs * w, top_ofs + k * fh - sep + l, 2, 1, selected ? csel : c); prev = tofs; } if (has_focus() && idx == cursor.column && cursor.row == row) { int cursor_ofs_x = ofs; int cursor_ofs_y = top_ofs + k * fh - sep; _draw_rect(cr, cursor_ofs_x - 1, cursor_ofs_y - 1, w + 1, fh + 1, cursorcol); } } Tick pfrom = (v_offset) * (TICKS_PER_BEAT / _get_rows_per_beat()); Tick pto = (v_offset + visible_rows) * (TICKS_PER_BEAT / _get_rows_per_beat()); int first; int count; a->get_points_in_range(current_pattern, pfrom, pto, first, count); ca.automation = a; for (int l = first; l < first + count; l++) { int x = (a->get_point_by_index(current_pattern, l) * w / Automation::VALUE_MAX); int y = a->get_point_tick_by_index(current_pattern, l) * fh / (TICKS_PER_BEAT / _get_rows_per_beat()) - v_offset * fh; _draw_fill_rect(cr, ofs + x - 2, top_ofs + y - 2 - sep, 5, 5, cpoint); ClickArea::AutomationPoint ap; ap.tick = a->get_point_tick_by_index(current_pattern, l); ap.x = ofs + x; ap.y = top_ofs + y /*- sep*/; ap.index = l; ca.automation_points.push_back(ap); } click_areas.push_back(ca); ofs += w; } break; } idx++; } if (drawn) { _draw_fill_rect(cr, ofs, 0, track_sep_w, h, track_sep_color); ofs += track_sep_w; } } if (has_focus()) { cr->set_source_rgba(1, 1, 1, 1); cr->rectangle(0, 0, w, h); } cr->stroke(); v_scroll->set_upper(song->pattern_get_beats(current_pattern) 
* _get_rows_per_beat()); v_scroll->set_page_size(visible_rows); v_scroll->set_value(v_offset); { int total = get_column_offset(song->get_event_column_count()); h_scroll->set_upper(total); h_scroll->set_page_size(w - fw * 4); int hofs = h_offset == 0 ? 0 : get_column_offset(h_offset - 1); //printf("total %i, pagesize %i, offset %i\n", total, w - fw * 4, hofs); h_scroll->set_value(hofs); } drawing = false; #if 0 const double scale_x = (double)allocation.get_width() / ; const double scale_y = (double)allocation.get_height() / m_scale; // paint the background refStyleContext->render_background(cr, allocation.get_x(), allocation.get_y(), allocation.get_width(), allocation.get_height()); Gdk::Cairo::set_source_rgba(cr, refStyleContext->get_color(state)); cr->move_to(155. * scale_x, 165. * scale_y); cr->line_to(155. * scale_x, 838. * scale_y); cr->line_to(265. * scale_x, 900. * scale_y); cr->line_to(849. * scale_x, 564. * scale_y); cr->line_to(849. * scale_x, 438. * scale_y); cr->line_to(265. * scale_x, 100. * scale_y); cr->line_to(155. * scale_x, 165. * scale_y); cr->move_to(265. * scale_x, 100. * scale_y); cr->line_to(265. * scale_x, 652. * scale_y); cr->line_to(526. * scale_x, 502. * scale_y); cr->move_to(369. * scale_x, 411. * scale_y); cr->line_to(633. * scale_x, 564. * scale_y); cr->move_to(369. * scale_x, 286. * scale_y); cr->line_to(369. * scale_x, 592. * scale_y); cr->move_to(369. * scale_x, 286. * scale_y); cr->line_to(849. * scale_x, 564. * scale_y); cr->move_to(633. * scale_x, 564. * scale_y); cr->line_to(155. * scale_x, 838. 
* scale_y); cr->stroke(); #endif return false; } void PatternEditor::set_beat_zoom(BeatZoom p_zoom) { beat_zoom = p_zoom; queue_draw(); } PatternEditor::BeatZoom PatternEditor::get_beat_zoom() const { return beat_zoom; } int PatternEditor::get_current_track() const { return song->get_event_column_track(cursor.column); } void PatternEditor::set_playback_cursor(int p_pattern, Tick p_tick) { int cursor_row = p_tick * _get_rows_per_beat() / TICKS_PER_BEAT; if (cursor_row != cursor.row || p_pattern != current_pattern) { cursor.row = cursor_row; current_pattern = p_pattern; _validate_cursor(); queue_draw(); } } Tick PatternEditor::get_cursor_tick() const { return cursor.row * TICKS_PER_BEAT / _get_rows_per_beat(); } void PatternEditor::set_hscroll(Glib::RefPtr<Gtk::Adjustment> p_h_scroll) { h_scroll = p_h_scroll; h_scroll->signal_value_changed().connect(sigc::mem_fun(*this, &PatternEditor::_h_scroll_changed)); } void PatternEditor::set_vscroll(Glib::RefPtr<Gtk::Adjustment> p_v_scroll) { v_scroll = p_v_scroll; v_scroll->signal_value_changed().connect(sigc::mem_fun(*this, &PatternEditor::_v_scroll_changed)); } void PatternEditor::redraw_and_validate_cursor() { _validate_cursor(); queue_draw(); } void PatternEditor::set_playback_pos(int p_pattern, Tick p_tick) { int cursor_row = p_tick * _get_rows_per_beat() / TICKS_PER_BEAT; if (playback_pattern != p_pattern || playback_row != cursor_row) { playback_pattern = p_pattern; playback_row = cursor_row; queue_draw(); } } void PatternEditor::set_focus_on_track(int p_track) { if (song->get_event_column_track(cursor.column) == p_track) { return; //it's already there } //not there, look for it for (int i = 0; i < song->get_event_column_count(); i++) { if (song->get_event_column_track(i) == p_track) { cursor.column = i; cursor.field = 0; queue_draw(); return; } } } void PatternEditor::on_parsing_error( const Glib::RefPtr<const Gtk::CssSection> &section, const Glib::Error &error) {} PatternEditor::PatternEditor(Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme, KeyBindings *p_bindings) : //
The GType name will actually be gtkmm__CustomObject_mywidget Glib::ObjectBase("pattern_editor"), Gtk::Widget() { // This shows the GType name, which must be used in the CSS file. // std::cout << "GType name: " << G_OBJECT_TYPE_NAME(gobj()) << std::endl; // This shows that the GType still derives from GtkWidget: // std::cout << "Gtype is a GtkWidget?:" << GTK_IS_WIDGET(gobj()) << // std::endl; song = p_song; undo_redo = p_undo_redo; key_bindings = p_bindings; theme = p_theme; set_has_window(true); set_can_focus(true); set_focus_on_click(true); // Gives Exposure & Button presses to the widget. set_name("pattern_editor"); v_offset = 0; h_offset = 0; current_pattern = 0; beat_zoom = BEAT_ZOOM_4; volume_mask = 99; volume_mask_active = true; visible_rows = 4; current_octave = 5; cursor_advance = 1; cursor.row = 0; cursor.field = 0; cursor.column = 0; selection.active = false; selection.shift_active = false; selection.mouse_drag_active = false; clipboard.active = false; #if 0 Track *t = new Track; t->set_columns(2); song->add_track(t); t->set_note(0, Track::Pos(0, 0), Track::Note(60, 99)); t->set_note(0, Track::Pos(TICKS_PER_BEAT, 1), Track::Note(88, Track::Note::EMPTY)); t->set_note(0, Track::Pos(TICKS_PER_BEAT * 4, 1), Track::Note(Track::Note::OFF)); t->set_note(0, Track::Pos(66, 1), Track::Note(63, 33)); t->set_note(0, Track::Pos(TICKS_PER_BEAT * 3, 0), Track::Note(28, 80)); t->set_note(0, Track::Pos(TICKS_PER_BEAT * 3 + 1, 0), Track::Note(Track::Note::OFF, 30)); t->add_automation(new Automation(t->get_volume_port())); t->add_automation(new Automation(t->get_swing_port())); t->add_automation(new Automation(t->get_pan_port())); t->get_automation(0)->set_display_mode(Automation::DISPLAY_ROWS); t->get_automation(0)->set_point(0, 0, 160); t->get_automation(0)->set_point(0, TICKS_PER_BEAT * 3 + 2, 22); t->get_automation(0)->set_point(0, TICKS_PER_BEAT * 5, 22); t->get_automation(0)->set_point(0, TICKS_PER_BEAT * 5 + 2, 80); t->get_automation(0)->set_point(0, TICKS_PER_BEAT 
* 5 + 4, 44); t->get_automation(1)->set_display_mode(Automation::DISPLAY_SMALL); t->get_automation(2)->set_display_mode(Automation::DISPLAY_LARGE); t->get_automation(2)->set_point(0, 0, 22); t->get_automation(2)->set_point(0, TICKS_PER_BEAT * 2, 22); t->get_automation(2)->set_point(0, TICKS_PER_BEAT * 3, 44); #endif grabbing_point = -1; fw_cache = 0; fh_cache = 0; last_amplify_value = 100; last_scale_value = 1.0; playback_pattern = -1; playback_row = -1; p_bindings->action_activated.connect(sigc::mem_fun(*this, &PatternEditor::_on_action_activated)); } PatternEditor::~PatternEditor() { }

/* ==== zytrax-master/gui/pattern_editor.h ==== */

#ifndef PATTERN_EDITOR_H #define PATTERN_EDITOR_H #include "engine/song.h" #include "engine/undo_redo.h" #include #include #include #include "gui/color_theme.h" #include "gui/key_bindings.h" class PatternEditor : public Gtk::Widget { public: enum BeatZoom { BEAT_ZOOM_1, BEAT_ZOOM_2, BEAT_ZOOM_3, BEAT_ZOOM_4, BEAT_ZOOM_6, BEAT_ZOOM_8, BEAT_ZOOM_12, BEAT_ZOOM_16, BEAT_ZOOM_24, BEAT_ZOOM_32, BEAT_ZOOM_48, BEAT_ZOOM_64, BEAT_ZOOM_MAX }; protected: //Overrides: Gtk::SizeRequestMode get_request_mode_vfunc() const override; void get_preferred_width_vfunc(int &minimum_width, int &natural_width) const override; void get_preferred_height_for_width_vfunc(int width, int &minimum_height, int &natural_height) const override; void get_preferred_height_vfunc(int &minimum_height, int &natural_height) const override; void get_preferred_width_for_height_vfunc(int height, int &minimum_width, int &natural_width) const override; void on_size_allocate(Gtk::Allocation &allocation) override; void on_map() override; void on_unmap() override; void on_realize() override; void on_unrealize() override; bool on_draw(const Cairo::RefPtr<Cairo::Context> &cr) override; //Signal handler: void on_parsing_error(const Glib::RefPtr<const Gtk::CssSection> &section, const Glib::Error &error); void _mouse_button_event(GdkEventButton *event, bool
p_press); bool on_scroll_event(GdkEventScroll *scroll_event); bool on_button_press_event(GdkEventButton *event); bool on_button_release_event(GdkEventButton *event); bool on_motion_notify_event(GdkEventMotion *motion_event); bool on_key_press_event(GdkEventKey *key_event); bool on_key_release_event(GdkEventKey *key_event); Glib::RefPtr m_refGdkWindow; UndoRedo *undo_redo; Song *song; int current_pattern; int current_octave; int cursor_advance; int volume_mask; bool volume_mask_active; int v_offset; BeatZoom beat_zoom; int h_offset; int visible_rows; int row_height_cache; int row_top_ofs; struct Cursor { int row; int column; int field; //int skip; } cursor; struct Selection { int begin_column; Tick begin_tick; int end_column; Tick end_tick; Tick row_tick_size; bool active; int shift_from_column; int shift_from_row; bool shift_active; int mouse_drag_from_column; int mouse_drag_from_row; bool mouse_drag_active; } selection; struct Clipboard { List events; int columns; Tick ticks; bool active; } clipboard; void _update_shift_selection(); bool _is_in_selection(int p_column, Tick p_tick); struct ClickArea { int column; struct Field { int x; int width; }; Vector fields; Automation *automation; struct AutomationPoint { int index; int x, y; Tick tick; }; List automation_points; ClickArea() { automation = NULL; } }; int fw_cache; int fh_cache; int get_column_offset(int p_column); List click_areas; int grabbing_point; Tick grabbing_point_tick_from; uint8_t grabbing_point_value_from; Tick grabbing_point_tick; uint8_t grabbing_point_value; Automation *grabbing_automation; int grabbing_x, grabbing_width; int grabbing_mouse_pos_x; int grabbing_mouse_pos_y; int grabbing_mouse_prev_x; int grabbing_mouse_prev_y; int get_total_rows() const; int get_visible_rows() const; void _cursor_advance(); void get_cursor_column_data(Track **r_track, int &r_command_column, int &r_automation, int &r_track_column); void _draw_text(const Cairo::RefPtr &cr, int x, int y, const String &p_text, const 
Gdk::RGBA &p_color, bool p_down = false); void _draw_fill_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); void _draw_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); void _draw_arrow(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); void _field_clear(); void _validate_cursor(); void _validate_selection(); void _redraw(); Theme *theme; KeyBindings *key_bindings; int _get_rows_per_beat() const; void _on_action_activated(KeyBindings::KeyBind p_bind); void _validate_menus(); void _notify_track_layout_changed(); bool drawing; int last_amplify_value; float last_scale_value; Glib::RefPtr h_scroll; Glib::RefPtr v_scroll; void _v_scroll_changed(); void _h_scroll_changed(); int playback_pattern; int playback_row; Glib::RefPtr new_track_menu; Gtk::Menu new_track_popup; Glib::RefPtr track_menu; Glib::RefPtr track_menu_add; Glib::RefPtr track_menu_column; Glib::RefPtr track_menu_command; Glib::RefPtr track_menu_solo; Glib::RefPtr track_menu_edit; Glib::RefPtr track_menu_remove; Gtk::Menu track_popup; Glib::RefPtr automation_menu; Glib::RefPtr automation_menu_item; Glib::RefPtr automation_menu_mode; Glib::RefPtr automation_menu_move; Glib::RefPtr automation_menu_remove; Gtk::Menu automation_popup; int _cursor_get_track_begin_column(); int _cursor_get_track_end_column(); public: sigc::signal1 track_edited; sigc::signal0 track_layout_changed; sigc::signal0 current_track_changed; sigc::signal0 volume_mask_changed; sigc::signal0 octave_changed; sigc::signal0 step_changed; sigc::signal0 zoom_changed; sigc::signal0 pattern_changed; sigc::signal1 erase_effect_editor_request; void set_current_pattern(int p_pattern); int get_current_pattern() const; void set_current_octave(int p_octave); int get_current_octave() const; void set_current_cursor_advance(int p_cursor_advance); int get_current_cursor_advance() const; void set_current_volume_mask(int p_volume_mask, bool p_active); int 
	get_current_volume_mask() const;
	bool is_current_volume_mask_active() const;

	void set_beat_zoom(BeatZoom p_zoom);
	BeatZoom get_beat_zoom() const;

	int get_current_track() const;
	Tick get_cursor_tick() const;

	void set_hscroll(Glib::RefPtr<Gtk::Adjustment> p_h_scroll);
	void set_vscroll(Glib::RefPtr<Gtk::Adjustment> p_v_scroll);

	void set_playback_pos(int p_pattern, Tick p_tick);
	void set_playback_cursor(int p_pattern, Tick p_tick);

	void redraw_and_validate_cursor();
	void set_focus_on_track(int p_track);

	void initialize_menus();

	PatternEditor(Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme, KeyBindings *p_bindings);
	~PatternEditor();
};

#endif // PATTERN_EDITOR_H
zytrax-master/gui/settings_dialog.cpp
#include "settings_dialog.h"
#include "engine/audio_effect.h"
#include "globals/json_file.h"

bool ThemeColorList::on_button_press_event(GdkEventButton *event) {
	if (event->button == 1) {
		//select
		selected = event->y / font_height;
		if (selected < 0 || selected >= Theme::COLOR_MAX) {
			selected = -1;
		}
		color_selected.emit(selected);
		queue_draw();
	}
	return false;
}

bool ThemeColorList::on_key_press_event(GdkEventKey *key_event) {
	return true;
}

void ThemeColorList::get_preferred_width_vfunc(int &minimum_width, int &natural_width) const {
	minimum_width = 1;
	natural_width = 1;
}

void ThemeColorList::get_preferred_height_for_width_vfunc(int /* width */, int &minimum_height, int &natural_height) const {
	minimum_height = font_height * Theme::COLOR_MAX;
	natural_height = font_height * Theme::COLOR_MAX;
}

void ThemeColorList::get_preferred_height_vfunc(int &minimum_height, int &natural_height) const {
	minimum_height = font_height * Theme::COLOR_MAX;
	natural_height = font_height * Theme::COLOR_MAX;
}

void ThemeColorList::get_preferred_width_for_height_vfunc(int /* height */, int &minimum_width, int &natural_width) const {
	minimum_width = 1;
	natural_width = 1;
}

void ThemeColorList::on_size_allocate(Gtk::Allocation &allocation) {
	// Do something with the space that we have actually been given:
	//(We will not be given heights or widths less than we have requested, though
	// we might get more.)

	// Use the offered allocation for this container:
	set_allocation(allocation);

	if (m_refGdkWindow) {
		m_refGdkWindow->move_resize(allocation.get_x(), allocation.get_y(), allocation.get_width(), allocation.get_height());
	}
}

void ThemeColorList::on_realize() {
	// Do not call base class Gtk::Widget::on_realize().
	// It's intended only for widgets that set_has_window(false).
	set_realized();

	if (!m_refGdkWindow) {
		// Create the GdkWindow:
		GdkWindowAttr attributes;
		memset(&attributes, 0, sizeof(attributes));

		Gtk::Allocation allocation = get_allocation();

		// Set initial position and size of the Gdk::Window:
		attributes.x = allocation.get_x();
		attributes.y = allocation.get_y();
		attributes.width = allocation.get_width();
		attributes.height = allocation.get_height();

		attributes.event_mask = get_events() | Gdk::EXPOSURE_MASK | Gdk::POINTER_MOTION_MASK | Gdk::LEAVE_NOTIFY_MASK | Gdk::BUTTON_PRESS_MASK | Gdk::BUTTON_RELEASE_MASK | Gdk::BUTTON1_MOTION_MASK | Gdk::KEY_PRESS_MASK | Gdk::KEY_RELEASE_MASK;
		attributes.window_type = GDK_WINDOW_CHILD;
		attributes.wclass = GDK_INPUT_OUTPUT;

		m_refGdkWindow = Gdk::Window::create(get_parent_window(), &attributes, GDK_WA_X | GDK_WA_Y);
		set_window(m_refGdkWindow);

		// make the widget receive expose events
		m_refGdkWindow->set_user_data(gobj());
	}
}

void ThemeColorList::on_unrealize() {
	m_refGdkWindow.reset();

	// Call base class:
	Gtk::Widget::on_unrealize();
}

void ThemeColorList::_draw_text(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down) {
	Gdk::Cairo::set_source_rgba(cr, p_color);
	cr->move_to(x, y);
	if (p_down)
		cr->rotate_degrees(90);
	cr->show_text(p_text.utf8().get_data());
	if (p_down)
		cr->rotate_degrees(-90);
	cr->move_to(0, 0);
	cr->stroke();
}

int ThemeColorList::_get_text_width(const Cairo::RefPtr<Cairo::Context> &cr, const String &p_text) const {
	Cairo::TextExtents te;
	cr->get_text_extents(p_text.utf8().get_data(), te);
	return te.width;
}

void ThemeColorList::_draw_fill_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) {
	Gdk::Cairo::set_source_rgba(cr, p_color);
	cr->rectangle(x, y, w, h);
	cr->fill();
	cr->stroke();
}

void ThemeColorList::_draw_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) {
	Gdk::Cairo::set_source_rgba(cr, p_color);
	cr->rectangle(x, y, w, h);
	cr->stroke();
}

bool ThemeColorList::on_draw(const Cairo::RefPtr<Cairo::Context> &cr) {
	const Gtk::Allocation allocation = get_allocation();
	int w = allocation.get_width();
	int h = allocation.get_height();

	{
		//update min width
		theme->select_font_face(cr);
		Cairo::FontExtents fe;
		cr->get_font_extents(fe);
		if (font_height != fe.height || font_ascent != fe.ascent) {
			font_height = fe.height;
			font_ascent = fe.ascent;
			queue_resize();
		}
	}

	Gdk::RGBA white;
	white.set_red(1.0);
	white.set_green(1.0);
	white.set_blue(1.0);
	white.set_alpha(1.0);

	Gdk::RGBA black;
	black.set_red(0.0);
	black.set_green(0.0);
	black.set_blue(0.0);
	black.set_alpha(1.0);

	Gdk::RGBA hilited;
	hilited.set_red(1.0);
	hilited.set_green(0.4);
	hilited.set_blue(0.4);
	hilited.set_alpha(1.0);

	for (int i = 0; i < Theme::COLOR_MAX; i++) {
		Gdk::Cairo::set_source_rgba(cr, theme->colors[i]);
		cr->rectangle(0, i * font_height, w, font_height);
		cr->fill();

		int tw = _get_text_width(cr, theme->color_names[i]);
		int x_ofs = (w - tw) / 2;
		int y_ofs = i * font_height + font_ascent;

		_draw_text(cr, x_ofs - 1, y_ofs, theme->color_names[i], black, false);
		_draw_text(cr, x_ofs + 1, y_ofs, theme->color_names[i], black, false);
		_draw_text(cr, x_ofs, y_ofs - 1, theme->color_names[i], black, false);
		_draw_text(cr, x_ofs, y_ofs + 1, theme->color_names[i], black, false);

		if (i == selected) {
			_draw_rect(cr, 0, i * font_height, w, font_height - 1, white);
			_draw_text(cr, x_ofs, y_ofs, theme->color_names[i], hilited, false);
		} else {
			_draw_text(cr, x_ofs, y_ofs,
					theme->color_names[i], white, false);
		}
	}

	return false;
}

ThemeColorList::ThemeColorList(Theme *p_theme) :
		// The GType name will actually be gtkmm__CustomObject_mywidget
		Glib::ObjectBase("theme_colors"),
		Gtk::Widget() {
	font_height = 1;
	font_ascent = 1;
	selected = 0;
	theme = p_theme;
	set_has_window(true);
	set_can_focus(true);
	set_focus_on_click(true);
	set_name("theme_color_list");
}

ThemeColorList::~ThemeColorList() {
}

////////////////////////////////

void SettingsDialog::_driver_changed() {
	Gtk::TreeModel::iterator iter = driver_combo.get_active();
	if (iter) {
		Gtk::TreeModel::Row row = *iter;
		if (row) {
			//Get the data for the selected row, using our knowledge of the tree
			//model:
			int id = row[model_columns.index];
			SoundDriverManager::init_driver(id);
			_save_settings();
		}
	}
}

void SettingsDialog::_midi_input_driver_changed() {
	Gtk::TreeModel::iterator iter = midi_input_driver_combo.get_active();
	if (iter) {
		Gtk::TreeModel::Row row = *iter;
		if (row) {
			//Get the data for the selected row, using our knowledge of the tree
			//model:
			int id = row[model_columns.index];
			MIDIDriverManager::init_input_driver(id);
			_save_settings();
		}
	}
}

void SettingsDialog::_driver_freq_changed() {
	Gtk::TreeModel::iterator iter = frequency_combo.get_active();
	if (iter) {
		Gtk::TreeModel::Row row = *iter;
		if (row) {
			//Get the data for the selected row, using our knowledge of the tree
			//model:
			int id = row[model_columns.index];
			SoundDriverManager::set_mix_frequency(SoundDriverManager::MixFrequency(id));
			SoundDriverManager::init_driver();
			update_mix_rate.emit();
			_save_settings();
		}
	}
}

void SettingsDialog::_driver_buffer_changed() {
	Gtk::TreeModel::iterator iter = buffer_combo.get_active();
	if (iter) {
		Gtk::TreeModel::Row row = *iter;
		if (row) {
			//Get the data for the selected row, using our knowledge of the tree
			//model:
			int id = row[model_columns.index];
			SoundDriverManager::set_buffer_size(SoundDriverManager::BufferSize(id));
			SoundDriverManager::init_driver();
			_save_settings();
		}
	}
}

void
SettingsDialog::_driver_step_changed() {
	Gtk::TreeModel::iterator iter = step_combo.get_active();
	if (iter) {
		Gtk::TreeModel::Row row = *iter;
		if (row) {
			//Get the data for the selected row, using our knowledge of the tree
			//model:
			int id = row[model_columns.index];
			SoundDriverManager::set_step_buffer_size(SoundDriverManager::BufferSize(id));
			//but actually not here..
			update_song_step_buffer.emit();
			_save_settings();
		}
	}
}

bool SettingsDialog::_scan_plugin_key(GdkEvent *p_key) {
	//avoid closing scan with escape key
	if (p_key->type == GDK_KEY_PRESS && ((GdkEventKey *)(p_key))->keyval == GDK_KEY_Escape) {
		return true;
	} else {
		return false;
	}
}

void SettingsDialog::_scan_callback(const String &p_name, void *p_ud) {
	SettingsDialog *sd = (SettingsDialog *)p_ud;
	Gtk::TreeModel::Row row = *(sd->scan_list_store->append());
	row[sd->scan_model_columns.name] = p_name.utf8().get_data();
	while (gtk_events_pending()) {
		gtk_main_iteration_do(false);
	}
}

void SettingsDialog::_scan_plugins() {
	MessageDialog scan("", false /* use_markup */, Gtk::MESSAGE_OTHER, Gtk::BUTTONS_NONE);
	scan.get_vbox()->get_children()[0]->hide();
	scan.get_vbox()->set_spacing(0);
	scan.get_vbox()->pack_start(scan_scroll, Gtk::PACK_EXPAND_WIDGET);
	Gtk::Button *response_button = scan.add_button("Close", Gtk::RESPONSE_OK);
	response_button->set_sensitive(false);
	scan_list_store->clear();

	Glib::RefPtr<Gdk::Screen> screen = Gdk::Screen::get_default();
	int width = screen->get_width();
	int height = screen->get_height();
	scan.set_default_size(width / 5, height / 3);
	scan.show_all_children();
	scan.get_vbox()->get_children()[0]->hide();
	scan.set_deletable(false);
	scan.set_transient_for(*this);
	scan.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);
	scan.set_title("Scanning.. (Please Wait)");
	scan.signal_event().connect(sigc::mem_fun(*this, &SettingsDialog::_scan_plugin_key));
	scan.show();
	fx_factory->rescan_effects(_scan_callback, this);
	response_button->set_sensitive(true);
	scan.set_deletable(true);
	scan.set_title("Scanning.. Done");
	Gtk::TreeModel::Row row = *(scan_list_store->append());
	row[scan_model_columns.name] = "Done.";
	row = *(scan_list_store->append());
	row[scan_model_columns.name] = ("Found " + String::num(fx_factory->get_audio_effect_count()) + " effect(s).").utf8().get_data();
	scan.run();
	scan.hide();
	_save_plugins();
}

void SettingsDialog::_save_plugins() {
	String save_to = get_settings_path() + "/plugins.json";
	JSON::Node node = JSON::object();
	{
		//plugins
		JSON::Node plugin_array = JSON::array();
		for (int i = 0; i < fx_factory->get_audio_effect_count(); i++) {
			JSON::Node plugin_node = JSON::object();
			const AudioEffectInfo *info = fx_factory->get_audio_effect(i);
			plugin_node.add("caption", info->caption.utf8().get_data());
			plugin_node.add("description", info->description.utf8().get_data());
			plugin_node.add("author", info->author.utf8().get_data());
			plugin_node.add("category", info->category.utf8().get_data());
			plugin_node.add("unique_id", info->unique_ID.utf8().get_data());
			plugin_node.add("icon_string", info->icon_string.utf8().get_data());
			plugin_node.add("version", info->version.utf8().get_data());
			plugin_node.add("synth", info->synth);
			plugin_node.add("has_ui", info->has_ui);
			plugin_node.add("provider_caption", info->provider_caption.utf8().get_data());
			plugin_node.add("provider_id", info->provider_id.utf8().get_data());
			plugin_node.add("path", info->path.utf8().get_data());
			plugin_array.add(plugin_node);
		}
		node.add("plugins", plugin_array);
	}
	save_json(save_to, node);
}

void SettingsDialog::_browse_plugin_path() {
	Gtk::FileChooserDialog dialog("Select a folder containing plugins", Gtk::FILE_CHOOSER_ACTION_SELECT_FOLDER);
	dialog.set_transient_for(*this);

	//Add response buttons to the dialog:
	gboolean swap_buttons;
	g_object_get(gtk_settings_get_default(), "gtk-alternative-button-order", &swap_buttons, NULL);
	if (swap_buttons) {
		dialog.add_button("Select", Gtk::RESPONSE_OK);
		dialog.add_button("Cancel", Gtk::RESPONSE_CANCEL);
	} else {
		dialog.add_button("Cancel",
				Gtk::RESPONSE_CANCEL);
		dialog.add_button("Select", Gtk::RESPONSE_OK);
	}

	Gtk::TreeModel::iterator iter = plugin_tree_selection->get_selected();
	if (!iter)
		return;

	Gtk::TreeModel::Row row = *iter;
	int index = row[plugin_model_columns.index];
	String existing = AudioEffectProvider::get_scan_path(index);
	if (existing != String()) {
		dialog.set_filename(existing.utf8().get_data());
	}

	int result = dialog.run();
	if (result == Gtk::RESPONSE_OK) {
		String path;
		path.parse_utf8(dialog.get_filename().c_str());
		AudioEffectProvider::set_scan_path(index, path);
		row[plugin_model_columns.text] = dialog.get_filename();
		_save_settings();
	}
}

void SettingsDialog::_plugin_path_edited(const Glib::ustring &path, const Glib::ustring &text) {
	Gtk::TreeIter iter = plugin_list_store->get_iter(path);
	ERR_FAIL_COND(!iter);
	String s;
	s.parse_utf8(text.c_str());
	AudioEffectProvider::set_scan_path((*iter)[plugin_model_columns.index], s);
	(*iter)[plugin_model_columns.text] = text;
	_save_settings();
}

void SettingsDialog::_color_selected(int p_index) {
	if (theme_color_list.get_selected() >= 0) {
		Gdk::RGBA color = theme->colors[theme_color_list.get_selected()];
		theme_color_change.set_rgba(color);
	}
}

void SettingsDialog::_choose_color() {
	if (theme_color_list.get_selected() >= 0) {
		theme->colors[theme_color_list.get_selected()] = theme_color_change.get_rgba();
		theme_color_list.queue_draw();
		update_colors.emit();
		_save_settings();
	}
}

void SettingsDialog::_font_chosen() {
	theme->font.parse_utf8(theme_font_button.get_font_name().c_str());
	update_colors.emit();
	_save_settings();
}

void SettingsDialog::_on_dark_theme_chosen() {
	theme->color_scheme = theme_force_dark.get_active() ?
			Theme::COLOR_SCHEME_DARK :
			Theme::COLOR_SCHEME_DEFAULT;
	_save_settings();
}

bool SettingsDialog::_signal_remap_key(GdkEventKey *p_key) {
	if (p_key->keyval >= GDK_KEY_Shift_L && p_key->keyval <= GDK_KEY_Hyper_R) {
		return false; //no modifiers welcome
	}

	Gtk::AccelKey accel(p_key->keyval, Gdk::ModifierType(p_key->state));
	key_remap_dialog.set_message(accel.get_abbrev());
	key_remap_key = p_key->keyval;
	key_remap_mod = p_key->state;
	return true;
}

void SettingsDialog::_shortcut_assign() {
	Gtk::TreeModel::iterator iter = shortcut_tree_selection->get_selected();
	if (!iter)
		return;

	Gtk::TreeModel::Row row = *iter;
	int index = row[shortcut_model_columns.index];

	key_remap_dialog.set_message(key_bindings->get_keybind_text(KeyBindings::KeyBind(index)).utf8().get_data());
	if (key_remap_dialog.run() == Gtk::RESPONSE_OK) {
		key_bindings->set_keybind(KeyBindings::KeyBind(index), key_remap_key, key_remap_mod);
		row[shortcut_model_columns.text] = key_bindings->get_keybind_text(KeyBindings::KeyBind(index)).utf8().get_data();
	}
	key_remap_dialog.hide();
	_save_settings();
}

void SettingsDialog::_shortcut_clear() {
	Gtk::TreeModel::iterator iter = shortcut_tree_selection->get_selected();
	if (!iter)
		return;

	Gtk::TreeModel::Row row = *iter;
	int index = row[shortcut_model_columns.index];

	key_bindings->clear_keybind(KeyBindings::KeyBind(index));
	row[shortcut_model_columns.text] = key_bindings->get_keybind_text(KeyBindings::KeyBind(index)).utf8().get_data();
	_save_settings();
}

void SettingsDialog::_shortcut_restore() {
	Gtk::TreeModel::iterator iter = shortcut_tree_selection->get_selected();
	if (!iter)
		return;

	Gtk::TreeModel::Row row = *iter;
	int index = row[shortcut_model_columns.index];

	key_bindings->reset_keybind(KeyBindings::KeyBind(index));
	row[shortcut_model_columns.text] = key_bindings->get_keybind_text(KeyBindings::KeyBind(index)).utf8().get_data();
	_save_settings();
}

void SettingsDialog::initialize_bindings() {
	for (int i = 0; i < KeyBindings::BIND_MAX; i++) {
		String label =
				key_bindings->get_keybind_name(KeyBindings::KeyBind(i));
		String text = key_bindings->get_keybind_text(KeyBindings::KeyBind(i));
		Gtk::TreeModel::Row row = *(shortcut_list_store->append());
		row[shortcut_model_columns.label] = label.utf8().get_data();
		row[shortcut_model_columns.text] = text.utf8().get_data();
		row[shortcut_model_columns.index] = i;
		shortcut_rows.push_back(row);
		if (i == 0) {
			shortcut_tree_selection->select(row);
		}
	}
}

void SettingsDialog::_save_settings() {
	String save_to = get_settings_path() + "/settings.json";
	JSON::Node node = JSON::object();
	{
		//audio
		JSON::Node audio_node = JSON::object();
		std::string driver_id;
		if (SoundDriverManager::get_current_driver_index() >= 0) {
			SoundDriver *driver = SoundDriverManager::get_driver(SoundDriverManager::get_current_driver_index());
			if (driver) {
				driver_id = driver->get_id().utf8().get_data();
			}
		}
		std::string midi_driver_id;
		if (MIDIDriverManager::get_current_input_driver_index() >= 0) {
			MIDIInputDriver *driver = MIDIDriverManager::get_input_driver(MIDIDriverManager::get_current_input_driver_index());
			if (driver) {
				midi_driver_id = driver->get_id().utf8().get_data();
			}
		}
		audio_node.add("id", driver_id);
		audio_node.add("mixing_hz", SoundDriverManager::get_mix_frequency());
		audio_node.add("buffer_size", SoundDriverManager::get_buffer_size());
		audio_node.add("block_size", SoundDriverManager::get_step_buffer_size());
		audio_node.add("midi_in_id", midi_driver_id);
		node.add("audio", audio_node);
	}
	{
		//plugins
		JSON::Node plugin_node = JSON::object();
		for (int i = 0; i < AudioEffectProvider::MAX_SCAN_PATHS; i++) {
			String path = AudioEffectProvider::get_scan_path(i).strip_edges();
			if (path != String()) {
				plugin_node.add(String::num(i).ascii().get_data(), path.utf8().get_data());
			}
		}
		node.add("plugins", plugin_node);
	}
	{
		//theme
		JSON::Node theme_node = JSON::object();
		theme_node.add("font", theme->font.utf8().get_data());
		JSON::Node colors_node = JSON::object();
		for (int i = 0; i < Theme::COLOR_MAX; i++) {
			JSON::Node
					array = JSON::array();
			array.add(theme->colors[i].get_red());
			array.add(theme->colors[i].get_green());
			array.add(theme->colors[i].get_blue());
			colors_node.add(theme->color_names[i], array);
		}
		theme_node.add("colors", colors_node);
		theme_node.add("use_dark_theme", theme->color_scheme == Theme::COLOR_SCHEME_DARK);
		node.add("theme", theme_node);
	}
	{
		//key bindings
		JSON::Node bindings = JSON::object();
		JSON::Node array = JSON::array();
		for (int i = 0; i < KeyBindings::BIND_MAX; i++) {
			JSON::Node bind = JSON::object();
			bind.add("name", key_bindings->get_keybind_name(KeyBindings::KeyBind(i)));
			bind.add("key", key_bindings->get_keybind_key(KeyBindings::KeyBind(i)));
			bind.add("mods", key_bindings->get_keybind_mod(KeyBindings::KeyBind(i)));
			array.add(bind);
		}
		bindings.add("keys", array);
		node.add("key_bindings", bindings);
	}
	{
		//default commands
		JSON::Node def_commands = JSON::array();
		for (int i = 0; i < MAX_DEFAULT_COMMANDS; i++) {
			if (default_commands[i].name == "" && default_commands[i].command == 0) {
				continue;
			}
			JSON::Node command = JSON::object();
			command.add("index", i);
			command.add("identifier", default_commands[i].name.utf8().get_data());
			command.add("command", default_commands[i].command);
			def_commands.add(command);
		}
		node.add("default_commands", def_commands);
	}
	save_json(save_to, node);
}

SettingsDialog::DefaultCommand SettingsDialog::default_commands[MAX_DEFAULT_COMMANDS];

void SettingsDialog::_update_command_list() {
	command_list_store->clear();
	for (int i = 0; i < MAX_DEFAULT_COMMANDS; i++) {
		Gtk::TreeModel::iterator iter = command_list_store->append();
		Gtk::TreeModel::Row row = *iter;
		row[model_columns.name] = default_commands[i].name.utf8().get_data();
		if (default_commands[i].command == 0) {
			row[command_editor_columns.command] = "";
		} else {
			char s[2] = { char('A' + (default_commands[i].command - 'a')), 0 };
			row[command_editor_columns.command] = s;
		}
		row[command_editor_columns.index] = i;
	}
}

void SettingsDialog::_command_name_changed(const Glib::ustring
		&path, const Glib::ustring &text) {
	Gtk::TreeIter iter = command_list_store->get_iter(path);
	ERR_FAIL_COND(!iter);
	int index = (*iter)[command_editor_columns.index];
	String s;
	s.parse_utf8(text.c_str());
	(*iter)[command_editor_columns.name] = text;
	default_commands[index].name = s;
	_save_settings();
}

void SettingsDialog::_command_value_changed(const Glib::ustring &path, const Glib::ustring &value) {
	Gtk::TreeIter iter = command_list_store->get_iter(path);
	ERR_FAIL_COND(!iter);
	int index = (*iter)[command_editor_columns.index];
	char valc = value[0];
	if (valc == '<') {
		valc = 0; //unselected
		(*iter)[command_editor_columns.command] = "";
	} else {
		valc = 'a' + (valc - 'A'); //convert displayed upper case to lower case command
		(*iter)[command_editor_columns.command] = value;
	}
	default_commands[index].command = valc;
	_save_settings();
}

void SettingsDialog::set_default_command(int p_index, const String &p_name, char p_command) {
	ERR_FAIL_INDEX(p_index, MAX_DEFAULT_COMMANDS);
	default_commands[p_index].name = p_name;
	default_commands[p_index].command = p_command;
}

String SettingsDialog::get_default_command_name(int p_index) {
	ERR_FAIL_INDEX_V(p_index, MAX_DEFAULT_COMMANDS, String());
	return default_commands[p_index].name;
}

char SettingsDialog::get_default_command_command(int p_index) {
	ERR_FAIL_INDEX_V(p_index, MAX_DEFAULT_COMMANDS, 0);
	return default_commands[p_index].command;
}

void SettingsDialog::add_default_command(const String &p_name, char p_command) {
	bool exists = false;
	for (int i = 0; i < MAX_DEFAULT_COMMANDS; i++) {
		if (default_commands[i].name == p_name) {
			default_commands[i].command = p_command;
			exists = true;
			break;
		}
	}
	if (!exists) {
		for (int i = 0; i < MAX_DEFAULT_COMMANDS; i++) {
			if (default_commands[i].name == String()) {
				default_commands[i].name = p_name;
				default_commands[i].command = p_command;
				break;
			}
		}
	}
	if (singleton) {
		singleton->_update_command_list();
		singleton->_save_settings();
	}
}

SettingsDialog *SettingsDialog::singleton = NULL;

SettingsDialog::SettingsDialog(Theme *p_theme,
		KeyBindings *p_key_bindings, AudioEffectFactory *p_fx_factory) :
		MessageDialog("", false /* use_markup */, Gtk::MESSAGE_OTHER, Gtk::BUTTONS_CLOSE),
		key_remap_dialog("", false, Gtk::MESSAGE_OTHER, Gtk::BUTTONS_OK_CANCEL),
		theme_color_list(p_theme) {
	singleton = this;
	fx_factory = p_fx_factory;
	key_bindings = p_key_bindings;
	theme = p_theme;

	{
		midi_input_driver_list_store = Gtk::ListStore::create(model_columns);
		midi_input_driver_combo.set_model(midi_input_driver_list_store);

		for (int i = 0; i < MIDIDriverManager::get_input_driver_count(); i++) {
			MIDIInputDriver *midi_input_driver = MIDIDriverManager::get_input_driver(i);
			Gtk::TreeModel::Row row = *(midi_input_driver_list_store->append());
			row[model_columns.name] = midi_input_driver->get_name().utf8().get_data();
			row[model_columns.index] = i;
			midi_input_driver_rows.push_back(row);
		}
		midi_input_driver_combo.pack_start(model_columns.name);
		int active_index = MIDIDriverManager::get_current_input_driver_index();
		if (active_index >= 0) {
			midi_input_driver_combo.set_active(midi_input_driver_rows[active_index]);
		}
	}
	midi_input_driver_combo.signal_changed().connect(sigc::mem_fun(*this, &SettingsDialog::_midi_input_driver_changed));

	{
		driver_list_store = Gtk::ListStore::create(model_columns);
		driver_combo.set_model(driver_list_store);

		for (int i = 0; i < SoundDriverManager::get_driver_count(); i++) {
			SoundDriver *driver = SoundDriverManager::get_driver(i);
			Gtk::TreeModel::Row row = *(driver_list_store->append());
			row[model_columns.name] = driver->get_name().utf8().get_data();
			row[model_columns.index] = i;
			driver_rows.push_back(row);
		}
		driver_combo.pack_start(model_columns.name);
		int active_index = SoundDriverManager::get_current_driver_index();
		if (active_index >= 0) {
			driver_combo.set_active(driver_rows[active_index]);
		}
	}
	driver_combo.signal_changed().connect(sigc::mem_fun(*this, &SettingsDialog::_driver_changed));

	{
		frequency_list_store = Gtk::ListStore::create(model_columns);
		frequency_combo.set_model(frequency_list_store);

		for (int i = 0; i < SoundDriverManager::MIX_FREQ_MAX; i++) {
			String label = String::num(SoundDriverManager::get_mix_frequency_hz(SoundDriverManager::MixFrequency(i))) + " hz";
			Gtk::TreeModel::Row row = *(frequency_list_store->append());
			row[model_columns.name] = label.utf8().get_data();
			row[model_columns.index] = i;
			frequency_rows.push_back(row);
		}
		frequency_combo.pack_start(model_columns.name);
		int active_index = SoundDriverManager::get_mix_frequency();
		if (active_index >= 0) {
			frequency_combo.set_active(frequency_rows[active_index]);
		}
	}
	frequency_combo.signal_changed().connect(sigc::mem_fun(*this, &SettingsDialog::_driver_freq_changed));

	{
		buffer_list_store = Gtk::ListStore::create(model_columns);
		buffer_combo.set_model(buffer_list_store);

		for (int i = 0; i < SoundDriverManager::BUFFER_SIZE_MAX; i++) {
			String label = String::num(SoundDriverManager::get_buffer_size_frames(SoundDriverManager::BufferSize(i))) + " frames";
			Gtk::TreeModel::Row row = *(buffer_list_store->append());
			row[model_columns.name] = label.utf8().get_data();
			row[model_columns.index] = i;
			buffer_rows.push_back(row);
		}
		buffer_combo.pack_start(model_columns.name);
		int active_index = SoundDriverManager::get_buffer_size();
		if (active_index >= 0) {
			buffer_combo.set_active(buffer_rows[active_index]);
		}
	}
	buffer_combo.signal_changed().connect(sigc::mem_fun(*this, &SettingsDialog::_driver_buffer_changed));

	{
		step_list_store = Gtk::ListStore::create(model_columns);
		step_combo.set_model(step_list_store);

		for (int i = 0; i < SoundDriverManager::BUFFER_SIZE_MAX; i++) {
			String label = String::num(SoundDriverManager::get_buffer_size_frames(SoundDriverManager::BufferSize(i))) + " frames";
			Gtk::TreeModel::Row row = *(step_list_store->append());
			row[model_columns.name] = label.utf8().get_data();
			row[model_columns.index] = i;
			step_rows.push_back(row);
		}
		step_combo.pack_start(model_columns.name);
		int active_index = SoundDriverManager::get_step_buffer_size();
		if
		(active_index >= 0) {
			step_combo.set_active(step_rows[active_index]);
		}
	}
	step_combo.signal_changed().connect(sigc::mem_fun(*this, &SettingsDialog::_driver_step_changed));

	////// zoom

	get_vbox()->set_spacing(0);
	get_vbox()->pack_start(notebook, Gtk::PACK_EXPAND_WIDGET);

	notebook.append_page(main_vbox, "Audio Settings");

	main_vbox.set_spacing(4);
	main_vbox.set_margin_left(8);
	main_vbox.set_margin_right(8);
	main_vbox.set_margin_top(8);
	main_vbox.set_margin_bottom(8);

	main_vbox.pack_start(sound_settings_frame, Gtk::PACK_SHRINK);
	sound_settings_frame.set_label("Sound Device");
	sound_settings_frame.add(sound_settings_grid);

	driver_label.set_text("Sound Device: ");
	driver_label.set_hexpand(true);
	sound_settings_grid.attach(driver_label, 0, 0, 1, 1);
	driver_combo.set_hexpand(true);
	sound_settings_grid.attach(driver_combo, 1, 0, 1, 1);

	frequency_label.set_text("Mix Frequency: ");
	frequency_label.set_hexpand(true);
	sound_settings_grid.attach(frequency_label, 0, 1, 1, 1);
	frequency_combo.set_hexpand(true);
	sound_settings_grid.attach(frequency_combo, 1, 1, 1, 1);

	buffer_label.set_text("Buffer Size: ");
	buffer_label.set_hexpand(true);
	sound_settings_grid.attach(buffer_label, 0, 2, 1, 1);
	buffer_combo.set_hexpand(true);
	sound_settings_grid.attach(buffer_combo, 1, 2, 1, 1);

	step_label.set_text("Block Size: ");
	step_label.set_hexpand(true);
	sound_settings_grid.attach(step_label, 0, 3, 1, 1);
	step_combo.set_hexpand(true);
	sound_settings_grid.attach(step_combo, 1, 3, 1, 1);

	midi_input_driver_label.set_text("MIDI-In Driver: ");
	midi_input_driver_label.set_hexpand(true);
	sound_settings_grid.attach(midi_input_driver_label, 0, 4, 1, 1);
	midi_input_driver_combo.set_hexpand(true);
	sound_settings_grid.attach(midi_input_driver_combo, 1, 4, 1, 1);

	sound_settings_grid.set_margin_left(8);
	sound_settings_grid.set_margin_right(8);
	sound_settings_grid.set_margin_bottom(8);
	sound_settings_grid.set_margin_top(8);

	main_vbox.pack_start(plugin_path_frame, Gtk::PACK_EXPAND_WIDGET);
	plugin_path_frame.set_label("Plugin Paths");
	plugin_path_frame.add(plugin_path_vbox);
	plugin_path_vbox.pack_start(plugin_scroll, Gtk::PACK_EXPAND_WIDGET);
	plugin_scroll.add(plugin_tree);

	plugin_list_store = Gtk::ListStore::create(plugin_model_columns);
	plugin_tree_selection = plugin_tree.get_selection();

	plugin_tree.append_column("Index", plugin_model_columns.label);
	plugin_column.set_title("Path");
	plugin_column.pack_start(plugin_column_text, true);
	plugin_column_text.property_editable() = true;
	plugin_column_text.signal_edited().connect(sigc::mem_fun(*this, &SettingsDialog::_plugin_path_edited));
	plugin_column.add_attribute(plugin_column_text.property_text(), plugin_model_columns.text);
	plugin_tree.append_column(plugin_column);
	plugin_tree.set_model(plugin_list_store);
	plugin_tree.get_column(0)->set_expand(false);
	plugin_tree.get_column(1)->set_expand(true);

	for (int i = 0; i < AudioEffectProvider::MAX_SCAN_PATHS; i++) {
		Gtk::TreeModel::iterator iter = plugin_list_store->append();
		Gtk::TreeModel::Row row = *iter;
		row[plugin_model_columns.label] = String::num(i).utf8().get_data();
		row[plugin_model_columns.text] = AudioEffectProvider::get_scan_path(i).utf8().get_data();
		row[plugin_model_columns.index] = i;
		if (i == 0) {
			plugin_tree_selection->select(row);
		}
	}

	plugin_path_vbox.pack_start(plugin_hbox, Gtk::PACK_SHRINK);
	plugin_hbox.pack_start(plugin_browse_path, Gtk::PACK_EXPAND_WIDGET);
	plugin_browse_path.signal_clicked().connect(sigc::mem_fun(*this, &SettingsDialog::_browse_plugin_path));
	plugin_browse_path.set_label("Browse..");
	plugin_hbox.pack_start(scan_plugins, Gtk::PACK_EXPAND_WIDGET);
	scan_plugins.signal_clicked().connect(sigc::mem_fun(*this, &SettingsDialog::_scan_plugins));
	scan_plugins.set_label("Scan");

	//////////////////////

	notebook.append_page(theme_vbox, "Theme Settings");

	theme_vbox.set_spacing(4);
	theme_vbox.set_margin_left(8);
	theme_vbox.set_margin_right(8);
	theme_vbox.set_margin_top(8);
	theme_vbox.set_margin_bottom(8);
	theme_vbox.pack_start(theme_font_frame, Gtk::PACK_SHRINK);
	theme_font_frame.set_label("Font:");
	theme_font_frame.add(theme_font_grid);

	theme_font_label.set_text("Editor Font:");
	theme_font_label.set_hexpand(true);
	theme_font_grid.attach(theme_font_label, 0, 0, 1, 1);
	theme_font_grid.attach(theme_font_button, 1, 0, 1, 1);
	theme_font_button.set_hexpand(true);
	theme_font_grid.set_margin_left(8);
	theme_font_grid.set_margin_right(8);
	theme_font_grid.set_margin_top(8);
	theme_font_grid.set_margin_bottom(8);
	theme_font_button.set_font_name(theme->font.utf8().get_data());
	theme_font_button.signal_font_set().connect(sigc::mem_fun(*this, &SettingsDialog::_font_chosen));

	theme_vbox.pack_start(theme_colors_frame, Gtk::PACK_EXPAND_WIDGET);
	theme_colors_frame.set_label("Colors:");
	theme_colors_frame.add(theme_colors_grid);

	theme_colors_grid.attach(theme_color_list_scroll, 0, 0, 2, 1);
	theme_color_list_scroll.set_hexpand(true);
	theme_color_list_scroll.set_vexpand(true);
	theme_color_list_scroll.add(theme_color_list);
	theme_color_list.color_selected.connect(sigc::mem_fun(*this, &SettingsDialog::_color_selected));
	theme_color_list.set_hexpand(true);
	theme_color_list.set_vexpand(true);
	theme_colors_grid.set_margin_left(8);
	theme_colors_grid.set_margin_right(8);
	theme_colors_grid.set_margin_top(8);
	theme_colors_grid.set_margin_bottom(8);

	theme_color_label.set_text("Change Color: ");
	theme_colors_grid.attach(theme_color_label, 0, 1, 1, 1);
	theme_colors_grid.attach(theme_color_change, 1, 1, 1, 1);
	theme_colors_grid.set_row_spacing(8);

	//theme_color_change.set_label("Change Color");
	theme_color_change.set_use_alpha(false);
	theme_color_change.property_show_editor() = true;
	theme_color_change.set_rgba(theme->colors[0]);
	theme_color_change.signal_color_set().connect(sigc::mem_fun(*this, &SettingsDialog::_choose_color));

	theme_vbox.pack_start(theme_settings_frame, Gtk::PACK_SHRINK);
	theme_settings_frame.set_label("Settings:");
	theme_settings_frame.add(theme_settings_grid);
	theme_force_dark.set_label("Force Dark Theme (needs restart)");
	theme_force_dark.signal_clicked().connect(sigc::mem_fun(*this, &SettingsDialog::_on_dark_theme_chosen));
	theme_settings_grid.attach(theme_force_dark, 0, 0, 1, 1);
	theme_force_dark.set_active(theme->color_scheme == Theme::COLOR_SCHEME_DARK);
	theme_settings_grid.set_margin_left(8);
	theme_settings_grid.set_margin_right(8);
	theme_settings_grid.set_margin_top(8);
	theme_settings_grid.set_margin_bottom(8);

	/////////////////////

	notebook.append_page(shortcut_vbox, "Key Bindings");

	shortcut_vbox.set_spacing(4);
	shortcut_vbox.set_margin_left(8);
	shortcut_vbox.set_margin_right(8);
	shortcut_vbox.set_margin_top(8);
	shortcut_vbox.set_margin_bottom(8);

	shortcut_vbox.pack_start(shortcut_frame, Gtk::PACK_EXPAND_WIDGET);
	shortcut_frame.add(shortcut_grid);
	shortcut_grid.set_margin_left(8);
	shortcut_grid.set_margin_right(8);
	shortcut_grid.set_margin_top(8);
	shortcut_grid.set_margin_bottom(8);

	shortcut_list_store = Gtk::ListStore::create(shortcut_model_columns);
	shortcut_tree_selection = shortcut_tree.get_selection();

	shortcut_tree.append_column("Name", shortcut_model_columns.label);
	shortcut_tree.append_column("Shortcut", shortcut_model_columns.text);
	shortcut_tree.set_model(shortcut_list_store);
	shortcut_tree.get_column(0)->set_expand(true);
	shortcut_tree.get_column(1)->set_expand(true);

	shortcut_grid.attach(shortcut_scroll, 0, 0, 3, 1);
	shortcut_scroll.add(shortcut_tree);
	shortcut_scroll.set_hexpand(true);
	shortcut_scroll.set_vexpand(true);

	shortcut_assign_button.set_label("Assign");
	shortcut_grid.attach(shortcut_assign_button, 0, 1, 1, 1);
	shortcut_assign_button.signal_clicked().connect(sigc::mem_fun(*this, &SettingsDialog::_shortcut_assign));

	shortcut_clear_button.set_label("Clear");
	shortcut_grid.attach(shortcut_clear_button, 1, 1, 1, 1);
	shortcut_clear_button.signal_clicked().connect(sigc::mem_fun(*this, &SettingsDialog::_shortcut_clear));

	shortcut_reset_button.set_label("Reset");
	shortcut_grid.attach(shortcut_reset_button, 2, 1, 1, 1);
	shortcut_reset_button.signal_clicked().connect(sigc::mem_fun(*this, &SettingsDialog::_shortcut_restore));

	key_remap_dialog.signal_key_press_event().connect(sigc::mem_fun(*this, &SettingsDialog::_signal_remap_key));
	key_remap_dialog.set_transient_for(*this);
	key_remap_dialog.set_title("Press a Key:");
	key_remap_dialog.set_position(Gtk::WIN_POS_CENTER_ON_PARENT);

	//////////////////

	scan_list_store = Gtk::ListStore::create(scan_model_columns);
	scan_tree_selection = scan_tree.get_selection();
	scan_tree.set_model(scan_list_store);
	scan_tree.append_column("Plugins Found:", scan_model_columns.name);
	scan_tree.get_column(0)->set_expand(false);
	scan_scroll.add(scan_tree);

	//////////////////

	notebook.append_page(command_frame, "Default Commands");
	command_frame.set_label("Commands:");
	command_frame.add(command_tree_scroll);
	command_frame.set_hexpand(true);
	command_frame.set_vexpand(true);
	command_tree_scroll.set_hexpand(true);
	command_tree_scroll.set_vexpand(true);
	command_tree_scroll.add(command_tree);

	command_list_store = Gtk::ListStore::create(command_editor_columns);
	command_tree_selection = command_tree.get_selection();
	command_tree.set_model(command_list_store);
	command_tree.set_hexpand(true);
	command_tree.set_vexpand(true);

	command_column1.set_title("Name");
	command_column1.pack_start(cell_render_text, true);
	cell_render_text.signal_edited().connect(sigc::mem_fun(*this, &SettingsDialog::_command_name_changed));
	command_column1.add_attribute(cell_render_text.property_text(), command_editor_columns.name);
	cell_render_text.property_editable() = true;
	command_tree.append_column(command_column1);
	command_tree.get_column(0)->set_expand(true);

	command_commands_list_store = Gtk::ListStore::create(command_editor_columns.command_model_columns);
	{
		{
			Gtk::TreeModel::iterator iter = command_commands_list_store->append();
			Gtk::TreeModel::Row row = *iter;
			row[command_editor_columns.command_model_columns.name] = "";
			row[command_editor_columns.command_model_columns.index] = 0;
		}

		for (int i = 'a'; i <= 'z'; i++) {
			Gtk::TreeModel::iterator iter = command_commands_list_store->append();
			Gtk::TreeModel::Row row = *iter;
			const char s[2] = { char('A' + (i - 'a')), 0 };
			row[command_editor_columns.command_model_columns.name] = s;
			row[command_editor_columns.command_model_columns.index] = i;
		}
	}

	command_column2.set_title("Command");
	command_column2.pack_start(cell_render_command, false);
	command_column2.add_attribute(cell_render_command.property_text(), command_editor_columns.command);
	cell_render_command.signal_edited().connect(sigc::mem_fun(*this, &SettingsDialog::_command_value_changed));
	cell_render_command.property_model() = command_commands_list_store;
	cell_render_command.property_text_column() = 0;
	cell_render_command.property_editable() = true;
	cell_render_command.property_has_entry() = false;
	cell_render_command.set_visible(true);
	command_tree.append_column(command_column2);
	command_tree.get_column(1)->set_expand(false);

	//tree.set_can_focus(false);
	//tree_selection->set_mode(Gtk::SELECTION_NONE);

	bool has_default_commands = false;
	for (int i = 0; i < MAX_DEFAULT_COMMANDS; i++) {
		if (default_commands[i].name != "" || default_commands[i].command) {
			has_default_commands = true;
		}
	}

	if (!has_default_commands) {
		default_commands[0].name = "bend_portamento";
		default_commands[0].command = 'g';
		default_commands[1].name = "bend_vibrato";
		default_commands[1].command = 'h';
		default_commands[2].name = "bend_slide_up";
		default_commands[2].command = 'f';
		default_commands[3].name = "bend_slide_down";
		default_commands[3].command = 'e';
		default_commands[4].name = "cc_Pan";
		default_commands[4].command = 'w';
		default_commands[5].name = "cc_Expression";
		default_commands[5].command = 'm';
		default_commands[6].name = "cc_Breath";
		default_commands[6].command = 'b';
		default_commands[7].name = "cc_Modulation";
		default_commands[7].command = 'u';
		default_commands[8].name = "cc_FilterCutoff";
		default_commands[8].command = 'z';
	}

	_update_command_list();

	///////////////////////

	Glib::RefPtr<Gdk::Screen> screen = Gdk::Screen::get_default();
	int width = screen->get_width();
	int height = screen->get_height();
	set_default_size(width / 4, height / 2);

	show_all_children();
	get_vbox()->get_children()[0]->hide();
	set_title("Settings");
}

String SettingsDialog::get_settings_path() {

	bool created_path = false;
	String path;
	String dir = "ZyTrax";
#ifdef WINDOWS_ENABLED
	path = _wgetenv(L"APPDATA");
#endif
#ifdef FREEDESKTOP_ENABLED
	if (getenv("XDG_CONFIG_HOME")) {
		path.parse_utf8(getenv("XDG_CONFIG_HOME"));
	} else {
		path.parse_utf8(getenv("HOME"));
		dir = "." + dir;
	}
#endif
#ifdef OSX_ENABLED
	path.parse_utf8(getenv("HOME"));
	if (path[path.length() - 1] != '/') {
		path += "/";
	}
	path += "Library/Application Support";
#endif
	if (path[path.length() - 1] != '/') {
		path += "/";
	}
	path += dir;
	if (!created_path) {
#ifdef WINDOWS_ENABLED
		_wmkdir(path.c_str());
#else
		mkdir(path.utf8().get_data(), S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);
#endif
		created_path = true;
	}
	return path;
}

// ---- zytrax-master/gui/settings_dialog.h ----

#ifndef SETTINGS_DIALOG_H
#define SETTINGS_DIALOG_H

#include "engine/audio_effect.h"
#include "engine/midi_driver_manager.h"
#include "engine/sound_driver_manager.h"
#include "gui/color_theme.h"
#include "gui/key_bindings.h"
#include "vector.h"
#include <gtkmm.h>

class ThemeColorList : public Gtk::Widget {
protected:
	int font_height;
	int font_ascent;
	int selected;

	// Overrides:
	void get_preferred_width_vfunc(int &minimum_width, int &natural_width) const override;
	void get_preferred_height_for_width_vfunc(int width, int &minimum_height, int &natural_height) const override;
	void get_preferred_height_vfunc(int &minimum_height, int &natural_height) const override;
	void get_preferred_width_for_height_vfunc(int height, int &minimum_width, int &natural_width) const
	override;
	void on_size_allocate(Gtk::Allocation &allocation) override;
	void on_realize() override;
	void on_unrealize() override;
	bool on_draw(const Cairo::RefPtr<Cairo::Context> &cr) override;
	bool on_button_press_event(GdkEventButton *event);
	bool on_key_press_event(GdkEventKey *key_event);

	Glib::RefPtr<Gdk::Window> m_refGdkWindow;

	Theme *theme;

	void _draw_text(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down);
	int _get_text_width(const Cairo::RefPtr<Cairo::Context> &cr, const String &p_text) const;
	void _draw_fill_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color);
	void _draw_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color);

public:
	sigc::signal1<void, int> color_selected;

	void select_color(int p_selected) {
		selected = p_selected;
		queue_draw();
	}
	int get_selected() const { return selected; }

	ThemeColorList(Theme *p_theme);
	~ThemeColorList();
};

class SettingsDialog : public Gtk::MessageDialog {

	//dear GTK, why all this for a simple combo?
	class ModelColumns : public Gtk::TreeModelColumnRecord {
	public:
		ModelColumns() {
			add(name);
			add(index);
		}
		Gtk::TreeModelColumn<Glib::ustring> name;
		Gtk::TreeModelColumn<int> index;
	};

	ModelColumns model_columns;

	Glib::RefPtr<Gtk::ListStore> midi_input_driver_list_store;
	Vector midi_input_driver_rows;
	Glib::RefPtr<Gtk::ListStore> driver_list_store;
	Vector driver_rows;
	Glib::RefPtr<Gtk::ListStore> frequency_list_store;
	Vector frequency_rows;
	Glib::RefPtr<Gtk::ListStore> buffer_list_store;
	Vector buffer_rows;
	Glib::RefPtr<Gtk::ListStore> step_list_store;
	Vector step_rows;

	Gtk::Notebook notebook;
	Gtk::VBox main_vbox;
	Gtk::Frame sound_settings_frame;
	Gtk::Grid sound_settings_grid;
	Gtk::ComboBox midi_input_driver_combo;
	Gtk::Label midi_input_driver_label;
	Gtk::ComboBox driver_combo;
	Gtk::Label driver_label;
	Gtk::ComboBox frequency_combo;
	Gtk::Label frequency_label;
	Gtk::ComboBox buffer_combo;
	Gtk::Label buffer_label;
	Gtk::ComboBox step_combo;
	Gtk::Label step_label;
	Gtk::Frame plugin_path_frame;
	Gtk::VBox plugin_path_vbox;

	void _midi_input_driver_changed();
	void _driver_changed();
	void _driver_freq_changed();
	void _driver_buffer_changed();
	void _driver_step_changed();

	class PluginModelColumns : public Gtk::TreeModelColumnRecord {
	public:
		PluginModelColumns() {
			add(label);
			add(text);
			add(index);
		}
		Gtk::TreeModelColumn<Glib::ustring> label;
		Gtk::TreeModelColumn<Glib::ustring> text;
		Gtk::TreeModelColumn<int> index;
	};

	PluginModelColumns plugin_model_columns;
	Glib::RefPtr<Gtk::ListStore> plugin_list_store;
	Glib::RefPtr<Gtk::TreeSelection> plugin_tree_selection;
	Gtk::TreeViewColumn plugin_column;
	Gtk::CellRendererText plugin_column_text;
	Gtk::ScrolledWindow plugin_scroll;
	Gtk::TreeView plugin_tree;
	Gtk::HBox plugin_hbox;
	Gtk::Button plugin_browse_path;
	Gtk::Button scan_plugins;

	void _browse_plugin_path();
	void _scan_plugins();
	void _plugin_path_edited(const Glib::ustring &path, const Glib::ustring &text);

	////////////

	Gtk::VBox theme_vbox;
	Gtk::Frame theme_font_frame;
	Gtk::Grid theme_font_grid;
	Gtk::Label theme_font_label;
	Gtk::FontButton theme_font_button;
	Gtk::Frame theme_colors_frame;
	Gtk::Grid theme_colors_grid;
	Gtk::ScrolledWindow
	theme_color_list_scroll;
	ThemeColorList theme_color_list;
	Gtk::ColorButton theme_color_change;
	Gtk::CheckButton theme_force_dark;
	Gtk::Label theme_color_label;
	Gtk::Frame theme_settings_frame;
	Gtk::Grid theme_settings_grid;

	void _color_selected(int p_index);
	void _choose_color();
	void _font_chosen();
	void _on_dark_theme_chosen();

	Theme *theme;

	//////// shortcut editor /////////

	class ShortcutModelColumns : public Gtk::TreeModelColumnRecord {
	public:
		ShortcutModelColumns() {
			add(label);
			add(text);
			add(index);
		}
		Gtk::TreeModelColumn<Glib::ustring> label;
		Gtk::TreeModelColumn<Glib::ustring> text;
		Gtk::TreeModelColumn<int> index;
	};

	Gtk::VBox shortcut_vbox;
	Gtk::Frame shortcut_frame;
	Gtk::Grid shortcut_grid;
	ShortcutModelColumns shortcut_model_columns;
	Glib::RefPtr<Gtk::ListStore> shortcut_list_store;
	Glib::RefPtr<Gtk::TreeSelection> shortcut_tree_selection;
	Gtk::ScrolledWindow shortcut_scroll;
	Gtk::TreeView shortcut_tree;
	Gtk::Button shortcut_assign;
	Vector shortcut_rows;
	Gtk::Button shortcut_assign_button;
	Gtk::Button shortcut_clear_button;
	Gtk::Button shortcut_reset_button;

	Gtk::MessageDialog key_remap_dialog;
	int key_remap_key;
	int key_remap_mod;
	bool _signal_remap_key(GdkEventKey *p_key);

	void _shortcut_assign();
	void _shortcut_clear();
	void _shortcut_restore();

	KeyBindings *key_bindings;

	void _save_settings();

	class ScanColumns : public Gtk::TreeModelColumnRecord {
	public:
		ScanColumns() {
			add(name);
		}
		Gtk::TreeModelColumn<Glib::ustring> name;
	};

	Gtk::ScrolledWindow scan_scroll;
	static void scan_callback(const String &p_text, void *p_userdata);
	ScanColumns scan_model_columns;
	Glib::RefPtr<Gtk::ListStore> scan_list_store;
	Glib::RefPtr<Gtk::TreeSelection> scan_tree_selection;
	Gtk::TreeView scan_tree;
	void _save_plugins();
	bool _scan_plugin_key(GdkEvent *p_key);
	static void _scan_callback(const String &, void *p_ud);

	AudioEffectFactory *fx_factory;

public:
	enum {
		MAX_DEFAULT_COMMANDS = 100
	};

private:
	struct DefaultCommand {
		String name;
		char command;
	};

	static DefaultCommand default_commands[MAX_DEFAULT_COMMANDS];

	class CommandEditorModelColumns : public Gtk::TreeModelColumnRecord {
	public:
		//GTK is beyond bizarre at this point
		class CommandModelColumns : public Gtk::TreeModelColumnRecord {
		public:
			CommandModelColumns() {
				add(name);
				add(index);
			}
			Gtk::TreeModelColumn<Glib::ustring> name;
			Gtk::TreeModelColumn<int> index;
		};
		CommandModelColumns command_model_columns;

		CommandEditorModelColumns() {
			add(name);
			add(command);
			add(index);
		}
		Gtk::TreeModelColumn<Glib::ustring> name;
		Gtk::TreeModelColumn<Glib::ustring> command;
		Gtk::TreeModelColumn<int> index;
	};

	CommandEditorModelColumns command_editor_columns;
	Glib::RefPtr<Gtk::ListStore> command_list_store;
	Glib::RefPtr<Gtk::TreeSelection> command_tree_selection;
	Gtk::CellRendererText cell_render_text;
	Gtk::CellRendererCombo cell_render_command;
	Gtk::TreeViewColumn command_column1;
	Gtk::TreeViewColumn command_column2;
	Gtk::TreeView command_tree;
	Gtk::ScrolledWindow command_tree_scroll;
	Glib::RefPtr<Gtk::ListStore> command_commands_list_store;
	//Glib::RefPtr tree_selection;
	Gtk::Frame command_frame;

	void _update_command_list();
	void _command_name_changed(const Glib::ustring &path, const Glib::ustring &text);
	void _command_value_changed(const Glib::ustring &path, const Glib::ustring &value);

	static SettingsDialog *singleton;

public:
	sigc::signal0<void> update_colors;
	sigc::signal0<void> update_song_step_buffer;
	sigc::signal0<void> update_mix_rate;

	static void set_default_command(int p_index, const String &p_name, char p_command);
	static String get_default_command_name(int p_index);
	static char get_default_command_command(int p_index);
	static void add_default_command(const String &p_name, char p_command);

	void initialize_bindings();

	SettingsDialog(Theme *p_theme, KeyBindings *p_key_bindings, AudioEffectFactory *p_fx_factory);

	static String get_settings_path();
};

#endif // SETTINGS_DIALOG_H

// ---- zytrax-master/gui/stb_image.h ----

/* stb_image - v2.22 - public domain image loader - http://nothings.org/stb
   no warranty implied; use at your own risk

   Do this:
      #define STB_IMAGE_IMPLEMENTATION
   before you include this file in *one* C or C++ file to create the
   implementation.

   // i.e. it should look like this:
   #include ...
   #include ...
   #include ...
   #define STB_IMAGE_IMPLEMENTATION
   #include "stb_image.h"

   You can #define STBI_ASSERT(x) before the #include to avoid using assert.h.
   And #define STBI_MALLOC, STBI_REALLOC, and STBI_FREE to avoid using malloc,realloc,free

   QUICK NOTES:
      Primarily of interest to game developers and other people who can
          avoid problematic images and only need the trivial interface

      JPEG baseline & progressive (12 bpc/arithmetic not supported, same as stock IJG lib)
      PNG 1/2/4/8/16-bit-per-channel

      TGA (not sure what subset, if a subset)
      BMP non-1bpp, non-RLE
      PSD (composited view only, no extra channels, 8/16 bit-per-channel)

      GIF (*comp always reports as 4-channel)
      HDR (radiance rgbE format)
      PIC (Softimage PIC)
      PNM (PPM and PGM binary only)

      Animated GIF still needs a proper API, but here's one way to do it:
          http://gist.github.com/urraka/685d9a6340b26b830d49

      - decode from memory or through FILE (define STBI_NO_STDIO to remove code)
      - decode from arbitrary I/O callbacks
      - SIMD acceleration on x86/x64 (SSE2) and ARM (NEON)

   Full documentation under "DOCUMENTATION" below.

   LICENSE

     See end of file for license information.
   RECENT REVISION HISTORY:

      2.22  (2019-03-04) gif fixes, fix warnings
      2.21  (2019-02-25) fix typo in comment
      2.20  (2019-02-07) support utf8 filenames in Windows; fix warnings and platform ifdefs
      2.19  (2018-02-11) fix warning
      2.18  (2018-01-30) fix warnings
      2.17  (2018-01-29) bugfix, 1-bit BMP, 16-bitness query, fix warnings
      2.16  (2017-07-23) all functions have 16-bit variants; optimizations; bugfixes
      2.15  (2017-03-18) fix png-1,2,4; all Imagenet JPGs; no runtime SSE detection on GCC
      2.14  (2017-03-03) remove deprecated STBI_JPEG_OLD; fixes for Imagenet JPGs
      2.13  (2016-12-04) experimental 16-bit API, only for PNG so far; fixes
      2.12  (2016-04-02) fix typo in 2.11 PSD fix that caused crashes
      2.11  (2016-04-02) 16-bit PNGS; enable SSE2 in non-gcc x64
                         RGB-format JPEG; remove white matting in PSD;
                         allocate large structures on the stack;
                         correct channel count for PNG & BMP
      2.10  (2016-01-22) avoid warning introduced in 2.09
      2.09  (2016-01-16) 16-bit TGA; comments in PNM files; STBI_REALLOC_SIZED

   See end of file for full revision history.
 ============================    Contributors    =========================

 Image formats                          Extensions, features
    Sean Barrett (jpeg, png, bmp)          Jetro Lauha (stbi_info)
    Nicolas Schulz (hdr, psd)              Martin "SpartanJ" Golini (stbi_info)
    Jonathan Dummer (tga)                  James "moose2000" Brown (iPhone PNG)
    Jean-Marc Lienher (gif)                Ben "Disch" Wenger (io callbacks)
    Tom Seddon (pic)                       Omar Cornut (1/2/4-bit PNG)
    Thatcher Ulrich (psd)                  Nicolas Guillemot (vertical flip)
    Ken Miller (pgm, ppm)                  Richard Mitton (16-bit PSD)
    github:urraka (animated gif)           Junggon Kim (PNM comments)
    Christopher Forseth (animated gif)     Daniel Gibson (16-bit TGA)
                                           socks-the-fox (16-bit PNG)
                                           Jeremy Sawicki (handle all ImageNet JPGs)
 Optimizations & bugfixes                  Mikhail Morozov (1-bit BMP)
    Fabian "ryg" Giesen                    Anael Seghezzi (is-16-bit query)
    Arseny Kapoulkine
    John-Mark Allen
    Carmelo J Fdez-Aguera

 Bug & warning fixes
    Marc LeBlanc            David Woo          Guillaume George   Martins Mozeiko
    Christpher Lloyd        Jerry Jansson      Joseph Thomson     Phil Jordan
    Dave Moore              Roy Eltham         Hayaki Saito       Nathan Reed
    Won Chun                Luke Graham        Johan Duparc       Nick Verigakis
    the Horde3D community   Thomas Ruf         Ronny Chevalier    github:rlyeh
    Janez Zemva             John Bartholomew   Michal Cichon      github:romigrou
    Jonathan Blow           Ken Hamada         Tero Hanninen      github:svdijk
    Laurent Gomila          Cort Stratton      Sergio Gonzalez    github:snagar
    Aruelien Pocheville     Thibault Reuille   Cass Everitt       github:Zelex
    Ryamond Barbiero        Paul Du Bois       Engin Manap        github:grim210
    Aldo Culquicondor       Philipp Wiesemann  Dale Weiler        github:sammyhw
    Oriol Ferrer Mesia      Josh Tobin         Matthew Gregan     github:phprus
    Julian Raschke          Gregory Mullen     Baldur Karlsson    github:poppolopoppo
    Christian Floisand      Kevin Schmidt      JR Smith           github:darealshinji
    Blazej Dariusz Roszkowski                  github:Michaelangel007
*/

#ifndef STBI_INCLUDE_STB_IMAGE_H
#define STBI_INCLUDE_STB_IMAGE_H

// DOCUMENTATION
//
// Limitations:
//    - no 12-bit-per-channel JPEG
//    - no JPEGs with arithmetic coding
//    - GIF always returns *comp=4
//
// Basic usage (see HDR discussion below for HDR usage):
//    int x,y,n;
//    unsigned char *data = stbi_load(filename, &x, &y, &n, 0);
//    // ... process data if not NULL ...
//    // ... x = width, y = height, n = # 8-bit components per pixel ...
//    // ... replace '0' with '1'..'4' to force that many components per pixel
//    // ... but 'n' will always be the number that it would have been if you said 0
//    stbi_image_free(data)
//
// Standard parameters:
//    int *x                 -- outputs image width in pixels
//    int *y                 -- outputs image height in pixels
//    int *channels_in_file  -- outputs # of image components in image file
//    int desired_channels   -- if non-zero, # of image components requested in result
//
// The return value from an image loader is an 'unsigned char *' which points
// to the pixel data, or NULL on an allocation failure or if the image is
// corrupt or invalid. The pixel data consists of *y scanlines of *x pixels,
// with each pixel consisting of N interleaved 8-bit components; the first
// pixel pointed to is top-left-most in the image. There is no padding between
// image scanlines or between pixels, regardless of format. The number of
// components N is 'desired_channels' if desired_channels is non-zero, or
// *channels_in_file otherwise. If desired_channels is non-zero,
// *channels_in_file has the number of components that _would_ have been
// output otherwise. E.g. if you set desired_channels to 4, you will always
// get RGBA output, but you can check *channels_in_file to see if it's trivially
// opaque because e.g. there were only 3 channels in the source image.
//
// An output image with N components has the following components interleaved
// in this order in each pixel:
//
//     N=#comp     components
//       1           grey
//       2           grey, alpha
//       3           red, green, blue
//       4           red, green, blue, alpha
//
// If image loading fails for any reason, the return value will be NULL,
// and *x, *y, *channels_in_file will be unchanged.
// The function
// stbi_failure_reason() can be queried for an extremely brief, end-user
// unfriendly explanation of why the load failed. Define STBI_NO_FAILURE_STRINGS
// to avoid compiling these strings at all, and STBI_FAILURE_USERMSG to get slightly
// more user-friendly ones.
//
// Paletted PNG, BMP, GIF, and PIC images are automatically depalettized.
//
// ===========================================================================
//
// UNICODE:
//
//   If compiling for Windows and you wish to use Unicode filenames, compile
//   with
//       #define STBI_WINDOWS_UTF8
//   and pass utf8-encoded filenames. Call stbi_convert_wchar_to_utf8 to convert
//   Windows wchar_t filenames to utf8.
//
// ===========================================================================
//
// Philosophy
//
// stb libraries are designed with the following priorities:
//
//    1. easy to use
//    2. easy to maintain
//    3. good performance
//
// Sometimes I let "good performance" creep up in priority over "easy to maintain",
// and for best performance I may provide less-easy-to-use APIs that give higher
// performance, in addition to the easy-to-use ones. Nevertheless, it's important
// to keep in mind that from the standpoint of you, a client of this library,
// all you care about is #1 and #3, and stb libraries DO NOT emphasize #3 above all.
//
// Some secondary priorities arise directly from the first two, some of which
// provide more explicit reasons why performance can't be emphasized.
//
//    - Portable ("ease of use")
//    - Small source code footprint ("easy to maintain")
//    - No dependencies ("ease of use")
//
// ===========================================================================
//
// I/O callbacks
//
// I/O callbacks allow you to read from arbitrary sources, like packaged
// files or some other source. Data read from callbacks are processed
// through a small internal buffer (currently 128 bytes) to try to reduce
// overhead.
//
// The three functions you must define are "read" (reads some bytes of data),
// "skip" (skips some bytes of data), "eof" (reports if the stream is at the end).
//
// ===========================================================================
//
// SIMD support
//
// The JPEG decoder will try to automatically use SIMD kernels on x86 when
// supported by the compiler. For ARM Neon support, you must explicitly
// request it.
//
// (The old do-it-yourself SIMD API is no longer supported in the current
// code.)
//
// On x86, SSE2 will automatically be used when available based on a run-time
// test; if not, the generic C versions are used as a fall-back. On ARM targets,
// the typical path is to have separate builds for NEON and non-NEON devices
// (at least this is true for iOS and Android). Therefore, the NEON support is
// toggled by a build flag: define STBI_NEON to get NEON loops.
//
// If for some reason you do not want to use any of SIMD code, or if
// you have issues compiling it, you can disable it entirely by
// defining STBI_NO_SIMD.
//
// ===========================================================================
//
// HDR image support   (disable by defining STBI_NO_HDR)
//
// stb_image supports loading HDR images in general, and currently the Radiance
// .HDR file format specifically. You can still load any file through the existing
// interface; if you attempt to load an HDR file, it will be automatically remapped
// to LDR, assuming gamma 2.2 and an arbitrary scale factor defaulting to 1;
// both of these constants can be reconfigured through this interface:
//
//     stbi_hdr_to_ldr_gamma(2.2f);
//     stbi_hdr_to_ldr_scale(1.0f);
//
// (note, do not use _inverse_ constants; stbi_image will invert them
// appropriately).
//
// Additionally, there is a new, parallel interface for loading files as
// (linear) floats to preserve the full dynamic range:
//
//    float *data = stbi_loadf(filename, &x, &y, &n, 0);
//
// If you load LDR images through this interface, those images will
// be promoted to floating point values, run through the inverse of
// constants corresponding to the above:
//
//     stbi_ldr_to_hdr_scale(1.0f);
//     stbi_ldr_to_hdr_gamma(2.2f);
//
// Finally, given a filename (or an open file or memory block--see header
// file for details) containing image data, you can query for the "most
// appropriate" interface to use (that is, whether the image is HDR or
// not), using:
//
//     stbi_is_hdr(char *filename);
//
// ===========================================================================
//
// iPhone PNG support:
//
// By default we convert iphone-formatted PNGs back to RGB, even though
// they are internally encoded differently. You can disable this conversion
// by calling stbi_convert_iphone_png_to_rgb(0), in which case
// you will always just get the native iphone "format" through (which
// is BGR stored in RGB).
//
// Call stbi_set_unpremultiply_on_load(1) as well to force a divide per
// pixel to remove any premultiplied alpha *only* if the image file explicitly
// says there's premultiplied data (currently only happens in iPhone images,
// and only if iPhone convert-to-rgb processing is on).
//
// ===========================================================================
//
// ADDITIONAL CONFIGURATION
//
//  - You can suppress implementation of any of the decoders to reduce
//    your code footprint by #defining one or more of the following
//    symbols before creating the implementation.
//
//        STBI_NO_JPEG
//        STBI_NO_PNG
//        STBI_NO_BMP
//        STBI_NO_PSD
//        STBI_NO_TGA
//        STBI_NO_GIF
//        STBI_NO_HDR
//        STBI_NO_PIC
//        STBI_NO_PNM   (.ppm and .pgm)
//
//  - You can request *only* certain decoders and suppress all other ones
//    (this will be more forward-compatible, as addition of new decoders
//    doesn't require you to disable them explicitly):
//
//        STBI_ONLY_JPEG
//        STBI_ONLY_PNG
//        STBI_ONLY_BMP
//        STBI_ONLY_PSD
//        STBI_ONLY_TGA
//        STBI_ONLY_GIF
//        STBI_ONLY_HDR
//        STBI_ONLY_PIC
//        STBI_ONLY_PNM   (.ppm and .pgm)
//
//   - If you use STBI_NO_PNG (or _ONLY_ without PNG), and you still
//     want the zlib decoder to be available, #define STBI_SUPPORT_ZLIB
//

#ifndef STBI_NO_STDIO
#include <stdio.h>
#endif // STBI_NO_STDIO

#define STBI_VERSION 1

enum {
	STBI_default = 0, // only used for desired_channels

	STBI_grey = 1,
	STBI_grey_alpha = 2,
	STBI_rgb = 3,
	STBI_rgb_alpha = 4
};

#include <stdlib.h>
typedef unsigned char stbi_uc;
typedef unsigned short stbi_us;

#ifdef __cplusplus
extern "C" {
#endif

#ifndef STBIDEF
#ifdef STB_IMAGE_STATIC
#define STBIDEF static
#else
#define STBIDEF extern
#endif
#endif

//////////////////////////////////////////////////////////////////////////////
//
// PRIMARY API - works on images of any type
//

//
// load image by filename, open file, or memory buffer
//

typedef struct {
	int (*read)(void *user, char *data, int size); // fill 'data' with 'size' bytes.
	// return number of bytes actually read
	void (*skip)(void *user, int n); // skip the next 'n' bytes, or 'unget' the last -n bytes if negative
	int (*eof)(void *user); // returns nonzero if we are at end of file/data
} stbi_io_callbacks;

////////////////////////////////////
//
// 8-bits-per-channel interface
//

STBIDEF stbi_uc *stbi_load_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *channels_in_file, int desired_channels);
STBIDEF stbi_uc *stbi_load_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *channels_in_file, int desired_channels);

#ifndef STBI_NO_STDIO
STBIDEF stbi_uc *stbi_load(char const *filename, int *x, int *y, int *channels_in_file, int desired_channels);
STBIDEF stbi_uc *stbi_load_from_file(FILE *f, int *x, int *y, int *channels_in_file, int desired_channels);
// for stbi_load_from_file, file pointer is left pointing immediately after image
#endif

#ifndef STBI_NO_GIF
STBIDEF stbi_uc *stbi_load_gif_from_memory(stbi_uc const *buffer, int len, int **delays, int *x, int *y, int *z, int *comp, int req_comp);
#endif

#ifdef STBI_WINDOWS_UTF8
STBIDEF int stbi_convert_wchar_to_utf8(char *buffer, size_t bufferlen, const wchar_t *input);
#endif

////////////////////////////////////
//
// 16-bits-per-channel interface
//

STBIDEF stbi_us *stbi_load_16_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *channels_in_file, int desired_channels);
STBIDEF stbi_us *stbi_load_16_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *channels_in_file, int desired_channels);

#ifndef STBI_NO_STDIO
STBIDEF stbi_us *stbi_load_16(char const *filename, int *x, int *y, int *channels_in_file, int desired_channels);
STBIDEF stbi_us *stbi_load_from_file_16(FILE *f, int *x, int *y, int *channels_in_file, int desired_channels);
#endif

////////////////////////////////////
//
// float-per-channel interface
//
#ifndef STBI_NO_LINEAR
STBIDEF float *stbi_loadf_from_memory(stbi_uc const *buffer, int len, int
		*x, int *y, int *channels_in_file, int desired_channels);
STBIDEF float *stbi_loadf_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *channels_in_file, int desired_channels);

#ifndef STBI_NO_STDIO
STBIDEF float *stbi_loadf(char const *filename, int *x, int *y, int *channels_in_file, int desired_channels);
STBIDEF float *stbi_loadf_from_file(FILE *f, int *x, int *y, int *channels_in_file, int desired_channels);
#endif
#endif

#ifndef STBI_NO_HDR
STBIDEF void stbi_hdr_to_ldr_gamma(float gamma);
STBIDEF void stbi_hdr_to_ldr_scale(float scale);
#endif // STBI_NO_HDR

#ifndef STBI_NO_LINEAR
STBIDEF void stbi_ldr_to_hdr_gamma(float gamma);
STBIDEF void stbi_ldr_to_hdr_scale(float scale);
#endif // STBI_NO_LINEAR

// stbi_is_hdr is always defined, but always returns false if STBI_NO_HDR
STBIDEF int stbi_is_hdr_from_callbacks(stbi_io_callbacks const *clbk, void *user);
STBIDEF int stbi_is_hdr_from_memory(stbi_uc const *buffer, int len);
#ifndef STBI_NO_STDIO
STBIDEF int stbi_is_hdr(char const *filename);
STBIDEF int stbi_is_hdr_from_file(FILE *f);
#endif // STBI_NO_STDIO

// get a VERY brief reason for failure
// NOT THREADSAFE
STBIDEF const char *stbi_failure_reason(void);

// free the loaded image -- this is just free()
STBIDEF void stbi_image_free(void *retval_from_stbi_load);

// get image dimensions & components without fully decoding
STBIDEF int stbi_info_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp);
STBIDEF int stbi_info_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp);
STBIDEF int stbi_is_16_bit_from_memory(stbi_uc const *buffer, int len);
STBIDEF int stbi_is_16_bit_from_callbacks(stbi_io_callbacks const *clbk, void *user);

#ifndef STBI_NO_STDIO
STBIDEF int stbi_info(char const *filename, int *x, int *y, int *comp);
STBIDEF int stbi_info_from_file(FILE *f, int *x, int *y, int *comp);
STBIDEF int stbi_is_16_bit(char const *filename);
STBIDEF int stbi_is_16_bit_from_file(FILE *f);
#endif

// for image formats that explicitly notate that they have premultiplied alpha,
// we just return the colors as stored in the file. set this flag to force
// unpremultiplication. results are undefined if the unpremultiply overflow.
STBIDEF void stbi_set_unpremultiply_on_load(int flag_true_if_should_unpremultiply);

// indicate whether we should process iphone images back to canonical format,
// or just pass them through "as-is"
STBIDEF void stbi_convert_iphone_png_to_rgb(int flag_true_if_should_convert);

// flip the image vertically, so the first pixel in the output array is the bottom left
STBIDEF void stbi_set_flip_vertically_on_load(int flag_true_if_should_flip);

// ZLIB client - used by PNG, available for other purposes

STBIDEF char *stbi_zlib_decode_malloc_guesssize(const char *buffer, int len, int initial_size, int *outlen);
STBIDEF char *stbi_zlib_decode_malloc_guesssize_headerflag(const char *buffer, int len, int initial_size, int *outlen, int parse_header);
STBIDEF char *stbi_zlib_decode_malloc(const char *buffer, int len, int *outlen);
STBIDEF int stbi_zlib_decode_buffer(char *obuffer, int olen, const char *ibuffer, int ilen);

STBIDEF char *stbi_zlib_decode_noheader_malloc(const char *buffer, int len, int *outlen);
STBIDEF int stbi_zlib_decode_noheader_buffer(char *obuffer, int olen, const char *ibuffer, int ilen);

#ifdef __cplusplus
}
#endif

//
//
////   end header file   /////////////////////////////////////////////////////
#endif // STBI_INCLUDE_STB_IMAGE_H

#ifdef STB_IMAGE_IMPLEMENTATION

#if defined(STBI_ONLY_JPEG) || defined(STBI_ONLY_PNG) || defined(STBI_ONLY_BMP) || defined(STBI_ONLY_TGA) || defined(STBI_ONLY_GIF) || defined(STBI_ONLY_PSD) || defined(STBI_ONLY_HDR) || defined(STBI_ONLY_PIC) || defined(STBI_ONLY_PNM) || defined(STBI_ONLY_ZLIB)
#ifndef STBI_ONLY_JPEG
#define STBI_NO_JPEG
#endif
#ifndef STBI_ONLY_PNG
#define STBI_NO_PNG
#endif
#ifndef STBI_ONLY_BMP
#define STBI_NO_BMP
#endif
#ifndef STBI_ONLY_PSD
#define STBI_NO_PSD
#endif
#ifndef STBI_ONLY_TGA
#define STBI_NO_TGA
#endif
#ifndef STBI_ONLY_GIF
#define STBI_NO_GIF
#endif
#ifndef STBI_ONLY_HDR
#define STBI_NO_HDR
#endif
#ifndef STBI_ONLY_PIC
#define STBI_NO_PIC
#endif
#ifndef STBI_ONLY_PNM
#define STBI_NO_PNM
#endif
#endif

#if defined(STBI_NO_PNG) && !defined(STBI_SUPPORT_ZLIB) && !defined(STBI_NO_ZLIB)
#define STBI_NO_ZLIB
#endif

#include <limits.h>
#include <stdarg.h>
#include <stddef.h> // ptrdiff_t on osx
#include <stdlib.h>
#include <string.h>

#if !defined(STBI_NO_LINEAR) || !defined(STBI_NO_HDR)
#include <math.h> // ldexp, pow
#endif

#ifndef STBI_NO_STDIO
#include <stdio.h>
#endif

#ifndef STBI_ASSERT
#include <assert.h>
#define STBI_ASSERT(x) assert(x)
#endif

#ifdef __cplusplus
#define STBI_EXTERN extern "C"
#else
#define STBI_EXTERN extern
#endif

#ifndef _MSC_VER
#ifdef __cplusplus
#define stbi_inline inline
#else
#define stbi_inline
#endif
#else
#define stbi_inline __forceinline
#endif

#ifdef _MSC_VER
typedef unsigned short stbi__uint16;
typedef signed short stbi__int16;
typedef unsigned int stbi__uint32;
typedef signed int stbi__int32;
#else
#include <stdint.h>
typedef uint16_t stbi__uint16;
typedef int16_t stbi__int16;
typedef uint32_t stbi__uint32;
typedef int32_t stbi__int32;
#endif

// should produce compiler error if size is wrong
typedef unsigned char validate_uint32[sizeof(stbi__uint32) == 4 ? 1 : -1];

#ifdef _MSC_VER
#define STBI_NOTUSED(v) (void)(v)
#else
#define STBI_NOTUSED(v) (void)sizeof(v)
#endif

#ifdef _MSC_VER
#define STBI_HAS_LROTL
#endif

#ifdef STBI_HAS_LROTL
#define stbi_lrot(x, y) _lrotl(x, y)
#else
#define stbi_lrot(x, y) (((x) << (y)) | ((x) >> (32 - (y))))
#endif

#if defined(STBI_MALLOC) && defined(STBI_FREE) && (defined(STBI_REALLOC) || defined(STBI_REALLOC_SIZED))
// ok
#elif !defined(STBI_MALLOC) && !defined(STBI_FREE) && !defined(STBI_REALLOC) && !defined(STBI_REALLOC_SIZED)
// ok
#else
#error "Must define all or none of STBI_MALLOC, STBI_FREE, and STBI_REALLOC (or STBI_REALLOC_SIZED)."
#endif

#ifndef STBI_MALLOC
#define STBI_MALLOC(sz) malloc(sz)
#define STBI_REALLOC(p, newsz) realloc(p, newsz)
#define STBI_FREE(p) free(p)
#endif

#ifndef STBI_REALLOC_SIZED
#define STBI_REALLOC_SIZED(p, oldsz, newsz) STBI_REALLOC(p, newsz)
#endif

// x86/x64 detection
#if defined(__x86_64__) || defined(_M_X64)
#define STBI__X64_TARGET
#elif defined(__i386) || defined(_M_IX86)
#define STBI__X86_TARGET
#endif

#if defined(__GNUC__) && defined(STBI__X86_TARGET) && !defined(__SSE2__) && !defined(STBI_NO_SIMD)
// gcc doesn't support sse2 intrinsics unless you compile with -msse2,
// which in turn means it gets to use SSE2 everywhere. This is unfortunate,
// but previous attempts to provide the SSE2 functions with runtime
// detection caused numerous issues. The way architecture extensions are
// exposed in GCC/Clang is, sadly, not really suited for one-file libs.
// New behavior: if compiled with -msse2, we use SSE2 without any
// detection; if not, we don't use it at all.
#define STBI_NO_SIMD
#endif

#if defined(__MINGW32__) && defined(STBI__X86_TARGET) && !defined(STBI_MINGW_ENABLE_SSE2) && !defined(STBI_NO_SIMD)
// Note that __MINGW32__ doesn't actually mean 32-bit, so we have to avoid STBI__X64_TARGET
//
// 32-bit MinGW wants ESP to be 16-byte aligned, but this is not in the
// Windows ABI and VC++ as well as Windows DLLs don't maintain that invariant.
// As a result, enabling SSE2 on 32-bit MinGW is dangerous when not
// simultaneously enabling "-mstackrealign".
//
// See https://github.com/nothings/stb/issues/81 for more information.
//
// So default to no SSE2 on 32-bit MinGW. If you've read this far and added
// -mstackrealign to your build settings, feel free to #define STBI_MINGW_ENABLE_SSE2.
#define STBI_NO_SIMD
#endif

#if !defined(STBI_NO_SIMD) && (defined(STBI__X86_TARGET) || defined(STBI__X64_TARGET))
#define STBI_SSE2
#include <emmintrin.h>

#ifdef _MSC_VER

#if _MSC_VER >= 1400 // not VC6
#include <intrin.h> // __cpuid
static int stbi__cpuid3(void) {
	int info[4];
	__cpuid(info, 1);
	return info[3];
}
#else
static int stbi__cpuid3(void) {
	int res;
	__asm {
		mov eax,1
		cpuid
		mov res,edx
	}
	return res;
}
#endif

#define STBI_SIMD_ALIGN(type, name) __declspec(align(16)) type name

#if !defined(STBI_NO_JPEG) && defined(STBI_SSE2)
static int stbi__sse2_available(void) {
	int info3 = stbi__cpuid3();
	return ((info3 >> 26) & 1) != 0;
}
#endif

#else // assume GCC-style if not VC++

#define STBI_SIMD_ALIGN(type, name) type name __attribute__((aligned(16)))

#if !defined(STBI_NO_JPEG) && defined(STBI_SSE2)
static int stbi__sse2_available(void) {
	// If we're even attempting to compile this on GCC/Clang, that means
	// -msse2 is on, which means the compiler is allowed to use SSE2
	// instructions at will, and so are we.
	return 1;
}
#endif

#endif
#endif

// ARM NEON
#if defined(STBI_NO_SIMD) && defined(STBI_NEON)
#undef STBI_NEON
#endif

#ifdef STBI_NEON
#include <arm_neon.h>
// assume GCC or Clang on ARM targets
#define STBI_SIMD_ALIGN(type, name) type name __attribute__((aligned(16)))
#endif

#ifndef STBI_SIMD_ALIGN
#define STBI_SIMD_ALIGN(type, name) type name
#endif

///////////////////////////////////////////////
//
// stbi__context struct and start_xxx functions

// stbi__context structure is our basic context used by all images, so it
// contains all the IO context, plus some basic image information
typedef struct {
	stbi__uint32 img_x, img_y;
	int img_n, img_out_n;

	stbi_io_callbacks io;
	void *io_user_data;

	int read_from_callbacks;
	int buflen;
	stbi_uc buffer_start[128];

	stbi_uc *img_buffer, *img_buffer_end;
	stbi_uc *img_buffer_original, *img_buffer_original_end;
} stbi__context;

static void stbi__refill_buffer(stbi__context *s);

// initialize a memory-decode context
static void stbi__start_mem(stbi__context *s,
		stbi_uc const *buffer, int len) {
	s->io.read = NULL;
	s->read_from_callbacks = 0;
	s->img_buffer = s->img_buffer_original = (stbi_uc *)buffer;
	s->img_buffer_end = s->img_buffer_original_end = (stbi_uc *)buffer + len;
}

// initialize a callback-based context
static void stbi__start_callbacks(stbi__context *s, stbi_io_callbacks *c, void *user) {
	s->io = *c;
	s->io_user_data = user;
	s->buflen = sizeof(s->buffer_start);
	s->read_from_callbacks = 1;
	s->img_buffer_original = s->buffer_start;
	stbi__refill_buffer(s);
	s->img_buffer_original_end = s->img_buffer_end;
}

#ifndef STBI_NO_STDIO

static int stbi__stdio_read(void *user, char *data, int size) {
	return (int)fread(data, 1, size, (FILE *)user);
}

static void stbi__stdio_skip(void *user, int n) {
	fseek((FILE *)user, n, SEEK_CUR);
}

static int stbi__stdio_eof(void *user) {
	return feof((FILE *)user);
}

static stbi_io_callbacks stbi__stdio_callbacks = {
	stbi__stdio_read,
	stbi__stdio_skip,
	stbi__stdio_eof,
};

static void stbi__start_file(stbi__context *s, FILE *f) {
	stbi__start_callbacks(s, &stbi__stdio_callbacks, (void *)f);
}

//static void stop_file(stbi__context *s) { }

#endif // !STBI_NO_STDIO

static void stbi__rewind(stbi__context *s) {
	// conceptually rewind SHOULD rewind to the beginning of the stream,
	// but we just rewind to the beginning of the initial buffer, because
	// we only use it after doing 'test', which only ever looks at at most 92 bytes
	s->img_buffer = s->img_buffer_original;
	s->img_buffer_end = s->img_buffer_original_end;
}

enum {
	STBI_ORDER_RGB,
	STBI_ORDER_BGR
};

typedef struct {
	int bits_per_channel;
	int num_channels;
	int channel_order;
} stbi__result_info;

#ifndef STBI_NO_JPEG
static int stbi__jpeg_test(stbi__context *s);
static void *stbi__jpeg_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);
static int stbi__jpeg_info(stbi__context *s, int *x, int *y, int *comp);
#endif

#ifndef STBI_NO_PNG
static int stbi__png_test(stbi__context *s);
static void
*stbi__png_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);
static int stbi__png_info(stbi__context *s, int *x, int *y, int *comp);
static int stbi__png_is16(stbi__context *s);
#endif

#ifndef STBI_NO_BMP
static int stbi__bmp_test(stbi__context *s);
static void *stbi__bmp_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);
static int stbi__bmp_info(stbi__context *s, int *x, int *y, int *comp);
#endif

#ifndef STBI_NO_TGA
static int stbi__tga_test(stbi__context *s);
static void *stbi__tga_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);
static int stbi__tga_info(stbi__context *s, int *x, int *y, int *comp);
#endif

#ifndef STBI_NO_PSD
static int stbi__psd_test(stbi__context *s);
static void *stbi__psd_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri, int bpc);
static int stbi__psd_info(stbi__context *s, int *x, int *y, int *comp);
static int stbi__psd_is16(stbi__context *s);
#endif

#ifndef STBI_NO_HDR
static int stbi__hdr_test(stbi__context *s);
static float *stbi__hdr_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);
static int stbi__hdr_info(stbi__context *s, int *x, int *y, int *comp);
#endif

#ifndef STBI_NO_PIC
static int stbi__pic_test(stbi__context *s);
static void *stbi__pic_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);
static int stbi__pic_info(stbi__context *s, int *x, int *y, int *comp);
#endif

#ifndef STBI_NO_GIF
static int stbi__gif_test(stbi__context *s);
static void *stbi__gif_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);
static void *stbi__load_gif_main(stbi__context *s, int **delays, int *x, int *y, int *z, int *comp, int req_comp);
static int stbi__gif_info(stbi__context *s, int *x, int *y, int *comp);
#endif

#ifndef STBI_NO_PNM
static int stbi__pnm_test(stbi__context *s);
static void
*stbi__pnm_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri);
static int stbi__pnm_info(stbi__context *s, int *x, int *y, int *comp);
#endif

// this is not threadsafe
static const char *stbi__g_failure_reason;

STBIDEF const char *stbi_failure_reason(void) {
	return stbi__g_failure_reason;
}

static int stbi__err(const char *str) {
	stbi__g_failure_reason = str;
	return 0;
}

static void *stbi__malloc(size_t size) {
	return STBI_MALLOC(size);
}

// stb_image uses ints pervasively, including for offset calculations.
// therefore the largest decoded image size we can support with the
// current code, even on 64-bit targets, is INT_MAX. this is not a
// significant limitation for the intended use case.
//
// we do, however, need to make sure our size calculations don't
// overflow. hence a few helper functions for size calculations that
// multiply integers together, making sure that they're non-negative
// and no overflow occurs.

// return 1 if the sum is valid, 0 on overflow.
// negative terms are considered invalid.
static int stbi__addsizes_valid(int a, int b) {
	if (b < 0) return 0;
	// now 0 <= b <= INT_MAX, hence also
	// 0 <= INT_MAX - b <= INT_MAX.
	// And "a + b <= INT_MAX" (which might overflow) is the
	// same as a <= INT_MAX - b (no overflow)
	return a <= INT_MAX - b;
}

// returns 1 if the product is valid, 0 on overflow.
// negative factors are considered invalid.
static int stbi__mul2sizes_valid(int a, int b) {
	if (a < 0 || b < 0) return 0;
	if (b == 0) return 1; // mul-by-0 is always safe
	// portable way to check for no overflows in a*b
	return a <= INT_MAX / b;
}

// returns 1 if "a*b + add" has no negative terms/factors and doesn't overflow
static int stbi__mad2sizes_valid(int a, int b, int add) {
	return stbi__mul2sizes_valid(a, b) && stbi__addsizes_valid(a * b, add);
}

// returns 1 if "a*b*c + add" has no negative terms/factors and doesn't overflow
static int stbi__mad3sizes_valid(int a, int b, int c, int add) {
	return stbi__mul2sizes_valid(a, b) && stbi__mul2sizes_valid(a * b, c) &&
		   stbi__addsizes_valid(a * b * c, add);
}

// returns 1 if "a*b*c*d + add" has no negative terms/factors and doesn't overflow
#if !defined(STBI_NO_LINEAR) || !defined(STBI_NO_HDR)
static int stbi__mad4sizes_valid(int a, int b, int c, int d, int add) {
	return stbi__mul2sizes_valid(a, b) && stbi__mul2sizes_valid(a * b, c) &&
		   stbi__mul2sizes_valid(a * b * c, d) && stbi__addsizes_valid(a * b * c * d, add);
}
#endif

// mallocs with size overflow checking
static void *stbi__malloc_mad2(int a, int b, int add) {
	if (!stbi__mad2sizes_valid(a, b, add)) return NULL;
	return stbi__malloc(a * b + add);
}

static void *stbi__malloc_mad3(int a, int b, int c, int add) {
	if (!stbi__mad3sizes_valid(a, b, c, add)) return NULL;
	return stbi__malloc(a * b * c + add);
}

#if !defined(STBI_NO_LINEAR) || !defined(STBI_NO_HDR)
static void *stbi__malloc_mad4(int a, int b, int c, int d, int add) {
	if (!stbi__mad4sizes_valid(a, b, c, d, add)) return NULL;
	return stbi__malloc(a * b * c * d + add);
}
#endif

// stbi__err - error
// stbi__errpf - error returning pointer to float
// stbi__errpuc - error returning pointer to unsigned char

#ifdef STBI_NO_FAILURE_STRINGS
#define stbi__err(x, y) 0
#elif defined(STBI_FAILURE_USERMSG)
#define stbi__err(x, y) stbi__err(y)
#else
#define stbi__err(x, y) stbi__err(x)
#endif

#define stbi__errpf(x, y) ((float *)(size_t)(stbi__err(x, y) ? NULL : NULL))
#define stbi__errpuc(x, y) ((unsigned char *)(size_t)(stbi__err(x, y) ? NULL : NULL))

STBIDEF void stbi_image_free(void *retval_from_stbi_load) {
	STBI_FREE(retval_from_stbi_load);
}

#ifndef STBI_NO_LINEAR
static float *stbi__ldr_to_hdr(stbi_uc *data, int x, int y, int comp);
#endif

#ifndef STBI_NO_HDR
static stbi_uc *stbi__hdr_to_ldr(float *data, int x, int y, int comp);
#endif

static int stbi__vertically_flip_on_load = 0;

STBIDEF void stbi_set_flip_vertically_on_load(int flag_true_if_should_flip) {
	stbi__vertically_flip_on_load = flag_true_if_should_flip;
}

static void *stbi__load_main(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri, int bpc) {
	memset(ri, 0, sizeof(*ri)); // make sure it's initialized if we add new fields
	ri->bits_per_channel = 8; // default is 8 so most paths don't have to be changed
	ri->channel_order = STBI_ORDER_RGB; // all current input & output are this, but this is here so we can add BGR order
	ri->num_channels = 0;

#ifndef STBI_NO_JPEG
	if (stbi__jpeg_test(s)) return stbi__jpeg_load(s, x, y, comp, req_comp, ri);
#endif
#ifndef STBI_NO_PNG
	if (stbi__png_test(s)) return stbi__png_load(s, x, y, comp, req_comp, ri);
#endif
#ifndef STBI_NO_BMP
	if (stbi__bmp_test(s)) return stbi__bmp_load(s, x, y, comp, req_comp, ri);
#endif
#ifndef STBI_NO_GIF
	if (stbi__gif_test(s)) return stbi__gif_load(s, x, y, comp, req_comp, ri);
#endif
#ifndef STBI_NO_PSD
	if (stbi__psd_test(s)) return stbi__psd_load(s, x, y, comp, req_comp, ri, bpc);
#endif
#ifndef STBI_NO_PIC
	if (stbi__pic_test(s)) return stbi__pic_load(s, x, y, comp, req_comp, ri);
#endif
#ifndef STBI_NO_PNM
	if (stbi__pnm_test(s)) return stbi__pnm_load(s, x, y, comp, req_comp, ri);
#endif

#ifndef STBI_NO_HDR
	if (stbi__hdr_test(s)) {
		float *hdr = stbi__hdr_load(s, x, y, comp, req_comp, ri);
		return stbi__hdr_to_ldr(hdr, *x, *y, req_comp ? req_comp : *comp);
	}
#endif

#ifndef STBI_NO_TGA
	// test tga last because it's a crappy test!
	if (stbi__tga_test(s))
		return stbi__tga_load(s, x, y, comp, req_comp, ri);
#endif

	return stbi__errpuc("unknown image type", "Image not of any known type, or corrupt");
}

static stbi_uc *stbi__convert_16_to_8(stbi__uint16 *orig, int w, int h, int channels) {
	int i;
	int img_len = w * h * channels;
	stbi_uc *reduced;

	reduced = (stbi_uc *)stbi__malloc(img_len);
	if (reduced == NULL) return stbi__errpuc("outofmem", "Out of memory");

	for (i = 0; i < img_len; ++i)
		reduced[i] = (stbi_uc)((orig[i] >> 8) & 0xFF); // top half of each byte is sufficient approx of 16->8 bit scaling

	STBI_FREE(orig);
	return reduced;
}

static stbi__uint16 *stbi__convert_8_to_16(stbi_uc *orig, int w, int h, int channels) {
	int i;
	int img_len = w * h * channels;
	stbi__uint16 *enlarged;

	enlarged = (stbi__uint16 *)stbi__malloc(img_len * 2);
	if (enlarged == NULL) return (stbi__uint16 *)stbi__errpuc("outofmem", "Out of memory");

	for (i = 0; i < img_len; ++i)
		enlarged[i] = (stbi__uint16)((orig[i] << 8) + orig[i]); // replicate to high and low byte, maps 0->0, 255->0xffff

	STBI_FREE(orig);
	return enlarged;
}

static void stbi__vertical_flip(void *image, int w, int h, int bytes_per_pixel) {
	int row;
	size_t bytes_per_row = (size_t)w * bytes_per_pixel;
	stbi_uc temp[2048];
	stbi_uc *bytes = (stbi_uc *)image;

	for (row = 0; row < (h >> 1); row++) {
		stbi_uc *row0 = bytes + row * bytes_per_row;
		stbi_uc *row1 = bytes + (h - row - 1) * bytes_per_row;
		// swap row0 with row1
		size_t bytes_left = bytes_per_row;
		while (bytes_left) {
			size_t bytes_copy = (bytes_left < sizeof(temp)) ?
					bytes_left : sizeof(temp);
			memcpy(temp, row0, bytes_copy);
			memcpy(row0, row1, bytes_copy);
			memcpy(row1, temp, bytes_copy);
			row0 += bytes_copy;
			row1 += bytes_copy;
			bytes_left -= bytes_copy;
		}
	}
}

#ifndef STBI_NO_GIF
static void stbi__vertical_flip_slices(void *image, int w, int h, int z, int bytes_per_pixel) {
	int slice;
	int slice_size = w * h * bytes_per_pixel;

	stbi_uc *bytes = (stbi_uc *)image;
	for (slice = 0; slice < z; ++slice) {
		stbi__vertical_flip(bytes, w, h, bytes_per_pixel);
		bytes += slice_size;
	}
}
#endif

static unsigned char *stbi__load_and_postprocess_8bit(stbi__context *s, int *x, int *y, int *comp, int req_comp) {
	stbi__result_info ri;
	void *result = stbi__load_main(s, x, y, comp, req_comp, &ri, 8);

	if (result == NULL) return NULL;

	if (ri.bits_per_channel != 8) {
		STBI_ASSERT(ri.bits_per_channel == 16);
		result = stbi__convert_16_to_8((stbi__uint16 *)result, *x, *y, req_comp == 0 ? *comp : req_comp);
		ri.bits_per_channel = 8;
	}

	// @TODO: move stbi__convert_format to here

	if (stbi__vertically_flip_on_load) {
		int channels = req_comp ? req_comp : *comp;
		stbi__vertical_flip(result, *x, *y, channels * sizeof(stbi_uc));
	}

	return (unsigned char *)result;
}

static stbi__uint16 *stbi__load_and_postprocess_16bit(stbi__context *s, int *x, int *y, int *comp, int req_comp) {
	stbi__result_info ri;
	void *result = stbi__load_main(s, x, y, comp, req_comp, &ri, 16);

	if (result == NULL) return NULL;

	if (ri.bits_per_channel != 16) {
		STBI_ASSERT(ri.bits_per_channel == 8);
		result = stbi__convert_8_to_16((stbi_uc *)result, *x, *y, req_comp == 0 ? *comp : req_comp);
		ri.bits_per_channel = 16;
	}

	// @TODO: move stbi__convert_format16 to here
	// @TODO: special case RGB-to-Y (and RGBA-to-YA) for 8-bit-to-16-bit case to keep more precision

	if (stbi__vertically_flip_on_load) {
		int channels = req_comp ?
				req_comp : *comp;
		stbi__vertical_flip(result, *x, *y, channels * sizeof(stbi__uint16));
	}

	return (stbi__uint16 *)result;
}

#if !defined(STBI_NO_HDR) && !defined(STBI_NO_LINEAR)
static void stbi__float_postprocess(float *result, int *x, int *y, int *comp, int req_comp) {
	if (stbi__vertically_flip_on_load && result != NULL) {
		int channels = req_comp ? req_comp : *comp;
		stbi__vertical_flip(result, *x, *y, channels * sizeof(float));
	}
}
#endif

#ifndef STBI_NO_STDIO

#if defined(_MSC_VER) && defined(STBI_WINDOWS_UTF8)
STBI_EXTERN __declspec(dllimport) int __stdcall MultiByteToWideChar(unsigned int cp, unsigned long flags, const char *str, int cbmb, wchar_t *widestr, int cchwide);
STBI_EXTERN __declspec(dllimport) int __stdcall WideCharToMultiByte(unsigned int cp, unsigned long flags, const wchar_t *widestr, int cchwide, char *str, int cbmb, const char *defchar, int *used_default);
#endif

#if defined(_MSC_VER) && defined(STBI_WINDOWS_UTF8)
STBIDEF int stbi_convert_wchar_to_utf8(char *buffer, size_t bufferlen, const wchar_t *input) {
	return WideCharToMultiByte(65001 /* UTF8 */, 0, input, -1, buffer, (int)bufferlen, NULL, NULL);
}
#endif

static FILE *stbi__fopen(char const *filename, char const *mode) {
	FILE *f;
#if defined(_MSC_VER) && defined(STBI_WINDOWS_UTF8)
	wchar_t wMode[64];
	wchar_t wFilename[1024];
	if (0 == MultiByteToWideChar(65001 /* UTF8 */, 0, filename, -1, wFilename, sizeof(wFilename)))
		return 0;

	if (0 == MultiByteToWideChar(65001 /* UTF8 */, 0, mode, -1, wMode, sizeof(wMode)))
		return 0;

#if _MSC_VER >= 1400
	if (0 != _wfopen_s(&f, wFilename, wMode))
		f = 0;
#else
	f = _wfopen(wFilename, wMode);
#endif

#elif defined(_MSC_VER) && _MSC_VER >= 1400
	if (0 != fopen_s(&f, filename, mode))
		f = 0;
#else
	f = fopen(filename, mode);
#endif
	return f;
}

STBIDEF stbi_uc *stbi_load(char const *filename, int *x, int *y, int *comp, int req_comp) {
	FILE *f = stbi__fopen(filename, "rb");
	unsigned char *result;
	if (!f) return stbi__errpuc("can't fopen", "Unable to open file");
	result
			= stbi_load_from_file(f, x, y, comp, req_comp);
	fclose(f);
	return result;
}

STBIDEF stbi_uc *stbi_load_from_file(FILE *f, int *x, int *y, int *comp, int req_comp) {
	unsigned char *result;
	stbi__context s;
	stbi__start_file(&s, f);
	result = stbi__load_and_postprocess_8bit(&s, x, y, comp, req_comp);
	if (result) {
		// need to 'unget' all the characters in the IO buffer
		fseek(f, -(int)(s.img_buffer_end - s.img_buffer), SEEK_CUR);
	}
	return result;
}

STBIDEF stbi__uint16 *stbi_load_from_file_16(FILE *f, int *x, int *y, int *comp, int req_comp) {
	stbi__uint16 *result;
	stbi__context s;
	stbi__start_file(&s, f);
	result = stbi__load_and_postprocess_16bit(&s, x, y, comp, req_comp);
	if (result) {
		// need to 'unget' all the characters in the IO buffer
		fseek(f, -(int)(s.img_buffer_end - s.img_buffer), SEEK_CUR);
	}
	return result;
}

STBIDEF stbi_us *stbi_load_16(char const *filename, int *x, int *y, int *comp, int req_comp) {
	FILE *f = stbi__fopen(filename, "rb");
	stbi__uint16 *result;
	if (!f) return (stbi_us *)stbi__errpuc("can't fopen", "Unable to open file");
	result = stbi_load_from_file_16(f, x, y, comp, req_comp);
	fclose(f);
	return result;
}

#endif //!STBI_NO_STDIO

STBIDEF stbi_us *stbi_load_16_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *channels_in_file, int desired_channels) {
	stbi__context s;
	stbi__start_mem(&s, buffer, len);
	return stbi__load_and_postprocess_16bit(&s, x, y, channels_in_file, desired_channels);
}

STBIDEF stbi_us *stbi_load_16_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *channels_in_file, int desired_channels) {
	stbi__context s;
	stbi__start_callbacks(&s, (stbi_io_callbacks *)clbk, user);
	return stbi__load_and_postprocess_16bit(&s, x, y, channels_in_file, desired_channels);
}

STBIDEF stbi_uc *stbi_load_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp, int req_comp) {
	stbi__context s;
	stbi__start_mem(&s, buffer, len);
	return stbi__load_and_postprocess_8bit(&s, x, y, comp, req_comp);
}
STBIDEF stbi_uc *stbi_load_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp, int req_comp) {
	stbi__context s;
	stbi__start_callbacks(&s, (stbi_io_callbacks *)clbk, user);
	return stbi__load_and_postprocess_8bit(&s, x, y, comp, req_comp);
}

#ifndef STBI_NO_GIF
STBIDEF stbi_uc *stbi_load_gif_from_memory(stbi_uc const *buffer, int len, int **delays, int *x, int *y, int *z, int *comp, int req_comp) {
	unsigned char *result;
	stbi__context s;
	stbi__start_mem(&s, buffer, len);

	result = (unsigned char *)stbi__load_gif_main(&s, delays, x, y, z, comp, req_comp);
	if (stbi__vertically_flip_on_load) {
		stbi__vertical_flip_slices(result, *x, *y, *z, *comp);
	}

	return result;
}
#endif

#ifndef STBI_NO_LINEAR
static float *stbi__loadf_main(stbi__context *s, int *x, int *y, int *comp, int req_comp) {
	unsigned char *data;
#ifndef STBI_NO_HDR
	if (stbi__hdr_test(s)) {
		stbi__result_info ri;
		float *hdr_data = stbi__hdr_load(s, x, y, comp, req_comp, &ri);
		if (hdr_data)
			stbi__float_postprocess(hdr_data, x, y, comp, req_comp);
		return hdr_data;
	}
#endif
	data = stbi__load_and_postprocess_8bit(s, x, y, comp, req_comp);
	if (data)
		return stbi__ldr_to_hdr(data, *x, *y, req_comp ?
				req_comp : *comp);
	return stbi__errpf("unknown image type", "Image not of any known type, or corrupt");
}

STBIDEF float *stbi_loadf_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp, int req_comp) {
	stbi__context s;
	stbi__start_mem(&s, buffer, len);
	return stbi__loadf_main(&s, x, y, comp, req_comp);
}

STBIDEF float *stbi_loadf_from_callbacks(stbi_io_callbacks const *clbk, void *user, int *x, int *y, int *comp, int req_comp) {
	stbi__context s;
	stbi__start_callbacks(&s, (stbi_io_callbacks *)clbk, user);
	return stbi__loadf_main(&s, x, y, comp, req_comp);
}

#ifndef STBI_NO_STDIO
STBIDEF float *stbi_loadf(char const *filename, int *x, int *y, int *comp, int req_comp) {
	float *result;
	FILE *f = stbi__fopen(filename, "rb");
	if (!f) return stbi__errpf("can't fopen", "Unable to open file");
	result = stbi_loadf_from_file(f, x, y, comp, req_comp);
	fclose(f);
	return result;
}

STBIDEF float *stbi_loadf_from_file(FILE *f, int *x, int *y, int *comp, int req_comp) {
	stbi__context s;
	stbi__start_file(&s, f);
	return stbi__loadf_main(&s, x, y, comp, req_comp);
}
#endif // !STBI_NO_STDIO

#endif // !STBI_NO_LINEAR

// these is-hdr-or-not tests are defined independent of whether STBI_NO_LINEAR is
// defined, for API simplicity; if STBI_NO_LINEAR is defined, they always
// report false!
STBIDEF int stbi_is_hdr_from_memory(stbi_uc const *buffer, int len) {
#ifndef STBI_NO_HDR
	stbi__context s;
	stbi__start_mem(&s, buffer, len);
	return stbi__hdr_test(&s);
#else
	STBI_NOTUSED(buffer);
	STBI_NOTUSED(len);
	return 0;
#endif
}

#ifndef STBI_NO_STDIO
STBIDEF int stbi_is_hdr(char const *filename) {
	FILE *f = stbi__fopen(filename, "rb");
	int result = 0;
	if (f) {
		result = stbi_is_hdr_from_file(f);
		fclose(f);
	}
	return result;
}

STBIDEF int stbi_is_hdr_from_file(FILE *f) {
#ifndef STBI_NO_HDR
	long pos = ftell(f);
	int res;
	stbi__context s;
	stbi__start_file(&s, f);
	res = stbi__hdr_test(&s);
	fseek(f, pos, SEEK_SET);
	return res;
#else
	STBI_NOTUSED(f);
	return 0;
#endif
}
#endif // !STBI_NO_STDIO

STBIDEF int stbi_is_hdr_from_callbacks(stbi_io_callbacks const *clbk, void *user) {
#ifndef STBI_NO_HDR
	stbi__context s;
	stbi__start_callbacks(&s, (stbi_io_callbacks *)clbk, user);
	return stbi__hdr_test(&s);
#else
	STBI_NOTUSED(clbk);
	STBI_NOTUSED(user);
	return 0;
#endif
}

#ifndef STBI_NO_LINEAR
static float stbi__l2h_gamma = 2.2f, stbi__l2h_scale = 1.0f;

STBIDEF void stbi_ldr_to_hdr_gamma(float gamma) {
	stbi__l2h_gamma = gamma;
}
STBIDEF void stbi_ldr_to_hdr_scale(float scale) {
	stbi__l2h_scale = scale;
}
#endif

static float stbi__h2l_gamma_i = 1.0f / 2.2f, stbi__h2l_scale_i = 1.0f;

STBIDEF void stbi_hdr_to_ldr_gamma(float gamma) {
	stbi__h2l_gamma_i = 1 / gamma;
}
STBIDEF void stbi_hdr_to_ldr_scale(float scale) {
	stbi__h2l_scale_i = 1 / scale;
}

//////////////////////////////////////////////////////////////////////////////
//
// Common code used by all image loaders
//

enum {
	STBI__SCAN_load = 0,
	STBI__SCAN_type,
	STBI__SCAN_header
};

static void stbi__refill_buffer(stbi__context *s) {
	int n = (s->io.read)(s->io_user_data, (char *)s->buffer_start, s->buflen);
	if (n == 0) {
		// at end of file, treat same as if from memory, but need to handle case
		// where s->img_buffer isn't pointing to safe memory, e.g. 0-byte file
		s->read_from_callbacks = 0;
		s->img_buffer = s->buffer_start;
		s->img_buffer_end = s->buffer_start + 1;
		*s->img_buffer = 0;
	} else {
		s->img_buffer = s->buffer_start;
		s->img_buffer_end = s->buffer_start + n;
	}
}

stbi_inline static stbi_uc stbi__get8(stbi__context *s) {
	if (s->img_buffer < s->img_buffer_end)
		return *s->img_buffer++;
	if (s->read_from_callbacks) {
		stbi__refill_buffer(s);
		return *s->img_buffer++;
	}
	return 0;
}

stbi_inline static int stbi__at_eof(stbi__context *s) {
	if (s->io.read) {
		if (!(s->io.eof)(s->io_user_data)) return 0;
		// if feof() is true, check if buffer = end
		// special case: we've only got the special 0 character at the end
		if (s->read_from_callbacks == 0) return 1;
	}

	return s->img_buffer >= s->img_buffer_end;
}

static void stbi__skip(stbi__context *s, int n) {
	if (n < 0) {
		s->img_buffer = s->img_buffer_end;
		return;
	}
	if (s->io.read) {
		int blen = (int)(s->img_buffer_end - s->img_buffer);
		if (blen < n) {
			s->img_buffer = s->img_buffer_end;
			(s->io.skip)(s->io_user_data, n - blen);
			return;
		}
	}
	s->img_buffer += n;
}

static int stbi__getn(stbi__context *s, stbi_uc *buffer, int n) {
	if (s->io.read) {
		int blen = (int)(s->img_buffer_end - s->img_buffer);
		if (blen < n) {
			int res, count;

			memcpy(buffer, s->img_buffer, blen);

			count = (s->io.read)(s->io_user_data, (char *)buffer + blen, n - blen);
			res = (count == (n - blen));
			s->img_buffer = s->img_buffer_end;
			return res;
		}
	}

	if (s->img_buffer + n <= s->img_buffer_end) {
		memcpy(buffer, s->img_buffer, n);
		s->img_buffer += n;
		return 1;
	} else
		return 0;
}

static int stbi__get16be(stbi__context *s) {
	int z = stbi__get8(s);
	return (z << 8) + stbi__get8(s);
}

static stbi__uint32 stbi__get32be(stbi__context *s) {
	stbi__uint32 z = stbi__get16be(s);
	return (z << 16) + stbi__get16be(s);
}

#if defined(STBI_NO_BMP) && defined(STBI_NO_TGA) && defined(STBI_NO_GIF)
// nothing
#else
static int stbi__get16le(stbi__context *s) {
	int z = stbi__get8(s);
	return z + (stbi__get8(s) << 8);
}
#endif

#ifndef STBI_NO_BMP
static stbi__uint32 stbi__get32le(stbi__context *s) {
	stbi__uint32 z = stbi__get16le(s);
	return z + (stbi__get16le(s) << 16);
}
#endif

#define STBI__BYTECAST(x) ((stbi_uc)((x)&255)) // truncate int to byte without warnings

//////////////////////////////////////////////////////////////////////////////
//
// generic converter from built-in img_n to req_comp
// individual types do this automatically as much as possible (e.g. jpeg
// does all cases internally since it needs to colorspace convert anyway,
// and it never has alpha, so very few cases ). png can automatically
// interleave an alpha=255 channel, but falls back to this for other cases
//
// assume data buffer is malloced, so malloc a new one and free that one
// only failure mode is malloc failing

static stbi_uc stbi__compute_y(int r, int g, int b) {
	return (stbi_uc)(((r * 77) + (g * 150) + (29 * b)) >> 8);
}

static unsigned char *stbi__convert_format(unsigned char *data, int img_n, int req_comp, unsigned int x, unsigned int y) {
	int i, j;
	unsigned char *good;

	if (req_comp == img_n) return data;
	STBI_ASSERT(req_comp >= 1 && req_comp <= 4);

	good = (unsigned char *)stbi__malloc_mad3(req_comp, x, y, 0);
	if (good == NULL) {
		STBI_FREE(data);
		return stbi__errpuc("outofmem", "Out of memory");
	}

	for (j = 0; j < (int)y; ++j) {
		unsigned char *src = data + j * x * img_n;
		unsigned char *dest = good + j * x * req_comp;

#define STBI__COMBO(a, b) ((a)*8 + (b))
#define STBI__CASE(a, b) \
	case STBI__COMBO(a, b): \
		for (i = x - 1; i >= 0; --i, src += a, dest += b)
		// convert source image with img_n components to one with req_comp components;
		// avoid switch per pixel, so use switch per scanline and massive macros
		switch (STBI__COMBO(img_n, req_comp)) {
			STBI__CASE(1, 2) {
				dest[0] = src[0];
				dest[1] = 255;
			}
			break;
			STBI__CASE(1, 3) { dest[0] = dest[1] = dest[2] = src[0]; }
			break;
			STBI__CASE(1, 4) {
				dest[0] = dest[1] = dest[2] = src[0];
				dest[3] = 255;
			}
			break;
			STBI__CASE(2, 1) { dest[0] = src[0]; }
			break;
			STBI__CASE(2, 3) {
				dest[0] = dest[1] = dest[2] = src[0];
			}
			break;
			STBI__CASE(2, 4) {
				dest[0] = dest[1] = dest[2] = src[0];
				dest[3] = src[1];
			}
			break;
			STBI__CASE(3, 4) {
				dest[0] = src[0];
				dest[1] = src[1];
				dest[2] = src[2];
				dest[3] = 255;
			}
			break;
			STBI__CASE(3, 1) { dest[0] = stbi__compute_y(src[0], src[1], src[2]); }
			break;
			STBI__CASE(3, 2) {
				dest[0] = stbi__compute_y(src[0], src[1], src[2]);
				dest[1] = 255;
			}
			break;
			STBI__CASE(4, 1) { dest[0] = stbi__compute_y(src[0], src[1], src[2]); }
			break;
			STBI__CASE(4, 2) {
				dest[0] = stbi__compute_y(src[0], src[1], src[2]);
				dest[1] = src[3];
			}
			break;
			STBI__CASE(4, 3) {
				dest[0] = src[0];
				dest[1] = src[1];
				dest[2] = src[2];
			}
			break;
			default: STBI_ASSERT(0);
		}
#undef STBI__CASE
	}

	STBI_FREE(data);
	return good;
}

static stbi__uint16 stbi__compute_y_16(int r, int g, int b) {
	return (stbi__uint16)(((r * 77) + (g * 150) + (29 * b)) >> 8);
}

static stbi__uint16 *stbi__convert_format16(stbi__uint16 *data, int img_n, int req_comp, unsigned int x, unsigned int y) {
	int i, j;
	stbi__uint16 *good;

	if (req_comp == img_n) return data;
	STBI_ASSERT(req_comp >= 1 && req_comp <= 4);

	good = (stbi__uint16 *)stbi__malloc(req_comp * x * y * 2);
	if (good == NULL) {
		STBI_FREE(data);
		return (stbi__uint16 *)stbi__errpuc("outofmem", "Out of memory");
	}

	for (j = 0; j < (int)y; ++j) {
		stbi__uint16 *src = data + j * x * img_n;
		stbi__uint16 *dest = good + j * x * req_comp;

#define STBI__COMBO(a, b) ((a)*8 + (b))
#define STBI__CASE(a, b) \
	case STBI__COMBO(a, b): \
		for (i = x - 1; i >= 0; --i, src += a, dest += b)
		// convert source image with img_n components to one with req_comp components;
		// avoid switch per pixel, so use switch per scanline and massive macros
		switch (STBI__COMBO(img_n, req_comp)) {
			STBI__CASE(1, 2) {
				dest[0] = src[0];
				dest[1] = 0xffff;
			}
			break;
			STBI__CASE(1, 3) { dest[0] = dest[1] = dest[2] = src[0]; }
			break;
			STBI__CASE(1, 4) {
				dest[0] = dest[1] = dest[2] = src[0];
				dest[3] = 0xffff;
			}
			break;
			STBI__CASE(2, 1) { dest[0] = src[0]; }
			break;
			STBI__CASE(2, 3) {
				dest[0] = dest[1] = dest[2] = src[0];
			}
			break;
			STBI__CASE(2, 4) {
				dest[0] = dest[1] = dest[2] = src[0];
				dest[3] = src[1];
			}
			break;
			STBI__CASE(3, 4) {
				dest[0] = src[0];
				dest[1] = src[1];
				dest[2] = src[2];
				dest[3] = 0xffff;
			}
			break;
			STBI__CASE(3, 1) { dest[0] = stbi__compute_y_16(src[0], src[1], src[2]); }
			break;
			STBI__CASE(3, 2) {
				dest[0] = stbi__compute_y_16(src[0], src[1], src[2]);
				dest[1] = 0xffff;
			}
			break;
			STBI__CASE(4, 1) { dest[0] = stbi__compute_y_16(src[0], src[1], src[2]); }
			break;
			STBI__CASE(4, 2) {
				dest[0] = stbi__compute_y_16(src[0], src[1], src[2]);
				dest[1] = src[3];
			}
			break;
			STBI__CASE(4, 3) {
				dest[0] = src[0];
				dest[1] = src[1];
				dest[2] = src[2];
			}
			break;
			default: STBI_ASSERT(0);
		}
#undef STBI__CASE
	}

	STBI_FREE(data);
	return good;
}

#ifndef STBI_NO_LINEAR
static float *stbi__ldr_to_hdr(stbi_uc *data, int x, int y, int comp) {
	int i, k, n;
	float *output;
	if (!data) return NULL;
	output = (float *)stbi__malloc_mad4(x, y, comp, sizeof(float), 0);
	if (output == NULL) {
		STBI_FREE(data);
		return stbi__errpf("outofmem", "Out of memory");
	}
	// compute number of non-alpha components
	if (comp & 1)
		n = comp;
	else
		n = comp - 1;
	for (i = 0; i < x * y; ++i) {
		for (k = 0; k < n; ++k) {
			output[i * comp + k] = (float)(pow(data[i * comp + k] / 255.0f, stbi__l2h_gamma) * stbi__l2h_scale);
		}
	}
	if (n < comp) {
		for (i = 0; i < x * y; ++i) {
			output[i * comp + n] = data[i * comp + n] / 255.0f;
		}
	}
	STBI_FREE(data);
	return output;
}
#endif

#ifndef STBI_NO_HDR
#define stbi__float2int(x) ((int)(x))
static stbi_uc *stbi__hdr_to_ldr(float *data, int x, int y, int comp) {
	int i, k, n;
	stbi_uc *output;
	if (!data) return NULL;
	output = (stbi_uc *)stbi__malloc_mad3(x, y, comp, 0);
	if (output == NULL) {
		STBI_FREE(data);
		return stbi__errpuc("outofmem", "Out of memory");
	}
	// compute number of non-alpha components
	if (comp & 1)
		n = comp;
	else
		n = comp - 1;
	for (i = 0; i < x * y; ++i) {
		for (k = 0; k < n; ++k) {
			float z = (float)pow(data[i * comp + k] * stbi__h2l_scale_i, stbi__h2l_gamma_i)
* 255 + 0.5f; if (z < 0) z = 0; if (z > 255) z = 255; output[i * comp + k] = (stbi_uc)stbi__float2int(z); } if (k < comp) { float z = data[i * comp + k] * 255 + 0.5f; if (z < 0) z = 0; if (z > 255) z = 255; output[i * comp + k] = (stbi_uc)stbi__float2int(z); } } STBI_FREE(data); return output; } #endif ////////////////////////////////////////////////////////////////////////////// // // "baseline" JPEG/JFIF decoder // // simple implementation // - doesn't support delayed output of y-dimension // - simple interface (only one output format: 8-bit interleaved RGB) // - doesn't try to recover corrupt jpegs // - doesn't allow partial loading, loading multiple at once // - still fast on x86 (copying globals into locals doesn't help x86) // - allocates lots of intermediate memory (full size of all components) // - non-interleaved case requires this anyway // - allows good upsampling (see next) // high-quality // - upsampled channels are bilinearly interpolated, even across blocks // - quality integer IDCT derived from IJG's 'slow' // performance // - fast huffman; reasonable integer IDCT // - some SIMD kernels for common paths on targets with SSE2/NEON // - uses a lot of intermediate memory, could cache poorly #ifndef STBI_NO_JPEG // huffman decoding acceleration #define FAST_BITS 9 // larger handles more cases; smaller stomps less cache typedef struct { stbi_uc fast[1 << FAST_BITS]; // weirdly, repacking this into AoS is a 10% speed loss, instead of a win stbi__uint16 code[256]; stbi_uc values[256]; stbi_uc size[257]; unsigned int maxcode[18]; int delta[17]; // old 'firstsymbol' - old 'firstcode' } stbi__huffman; typedef struct { stbi__context *s; stbi__huffman huff_dc[4]; stbi__huffman huff_ac[4]; stbi__uint16 dequant[4][64]; stbi__int16 fast_ac[4][1 << FAST_BITS]; // sizes for components, interleaved MCUs int img_h_max, img_v_max; int img_mcu_x, img_mcu_y; int img_mcu_w, img_mcu_h; // definition of jpeg image component struct { int id; int h, v; int tq; int hd, ha; int 
dc_pred; int x, y, w2, h2; stbi_uc *data; void *raw_data, *raw_coeff; stbi_uc *linebuf; short *coeff; // progressive only int coeff_w, coeff_h; // number of 8x8 coefficient blocks } img_comp[4]; stbi__uint32 code_buffer; // jpeg entropy-coded buffer int code_bits; // number of valid bits unsigned char marker; // marker seen while filling entropy buffer int nomore; // flag if we saw a marker so must stop int progressive; int spec_start; int spec_end; int succ_high; int succ_low; int eob_run; int jfif; int app14_color_transform; // Adobe APP14 tag int rgb; int scan_n, order[4]; int restart_interval, todo; // kernels void (*idct_block_kernel)(stbi_uc *out, int out_stride, short data[64]); void (*YCbCr_to_RGB_kernel)(stbi_uc *out, const stbi_uc *y, const stbi_uc *pcb, const stbi_uc *pcr, int count, int step); stbi_uc *(*resample_row_hv_2_kernel)(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs); } stbi__jpeg; static int stbi__build_huffman(stbi__huffman *h, int *count) { int i, j, k = 0; unsigned int code; // build size list for each symbol (from JPEG spec) for (i = 0; i < 16; ++i) for (j = 0; j < count[i]; ++j) h->size[k++] = (stbi_uc)(i + 1); h->size[k] = 0; // compute actual symbols (from jpeg spec) code = 0; k = 0; for (j = 1; j <= 16; ++j) { // compute delta to add to code to compute symbol id h->delta[j] = k - code; if (h->size[k] == j) { while (h->size[k] == j) h->code[k++] = (stbi__uint16)(code++); if (code - 1 >= (1u << j)) return stbi__err("bad code lengths", "Corrupt JPEG"); } // compute largest code + 1 for this size, preshifted as needed later h->maxcode[j] = code << (16 - j); code <<= 1; } h->maxcode[j] = 0xffffffff; // build non-spec acceleration table; 255 is flag for not-accelerated memset(h->fast, 255, 1 << FAST_BITS); for (i = 0; i < k; ++i) { int s = h->size[i]; if (s <= FAST_BITS) { int c = h->code[i] << (FAST_BITS - s); int m = 1 << (FAST_BITS - s); for (j = 0; j < m; ++j) { h->fast[c + j] = (stbi_uc)i; } } } return 1; } // build a 
table that decodes both magnitude and value of small ACs in // one go. static void stbi__build_fast_ac(stbi__int16 *fast_ac, stbi__huffman *h) { int i; for (i = 0; i < (1 << FAST_BITS); ++i) { stbi_uc fast = h->fast[i]; fast_ac[i] = 0; if (fast < 255) { int rs = h->values[fast]; int run = (rs >> 4) & 15; int magbits = rs & 15; int len = h->size[fast]; if (magbits && len + magbits <= FAST_BITS) { // magnitude code followed by receive_extend code int k = ((i << len) & ((1 << FAST_BITS) - 1)) >> (FAST_BITS - magbits); int m = 1 << (magbits - 1); if (k < m) k += (~0U << magbits) + 1; // if the result is small enough, we can fit it in fast_ac table if (k >= -128 && k <= 127) fast_ac[i] = (stbi__int16)((k * 256) + (run * 16) + (len + magbits)); } } } } static void stbi__grow_buffer_unsafe(stbi__jpeg *j) { do { unsigned int b = j->nomore ? 0 : stbi__get8(j->s); if (b == 0xff) { int c = stbi__get8(j->s); while (c == 0xff) c = stbi__get8(j->s); // consume fill bytes if (c != 0) { j->marker = (unsigned char)c; j->nomore = 1; return; } } j->code_buffer |= b << (24 - j->code_bits); j->code_bits += 8; } while (j->code_bits <= 24); } // (1 << n) - 1 static const stbi__uint32 stbi__bmask[17] = { 0, 1, 3, 7, 15, 31, 63, 127, 255, 511, 1023, 2047, 4095, 8191, 16383, 32767, 65535 }; // decode a jpeg huffman value from the bitstream stbi_inline static int stbi__jpeg_huff_decode(stbi__jpeg *j, stbi__huffman *h) { unsigned int temp; int c, k; if (j->code_bits < 16) stbi__grow_buffer_unsafe(j); // look at the top FAST_BITS and determine what symbol ID it is, // if the code is <= FAST_BITS c = (j->code_buffer >> (32 - FAST_BITS)) & ((1 << FAST_BITS) - 1); k = h->fast[c]; if (k < 255) { int s = h->size[k]; if (s > j->code_bits) return -1; j->code_buffer <<= s; j->code_bits -= s; return h->values[k]; } // naive test is to shift the code_buffer down so k bits are // valid, then test against maxcode. 
To speed this up, we've // preshifted maxcode left so that it has (16-k) 0s at the // end; in other words, regardless of the number of bits, it // wants to be compared against something shifted to have 16; // that way we don't need to shift inside the loop. temp = j->code_buffer >> 16; for (k = FAST_BITS + 1;; ++k) if (temp < h->maxcode[k]) break; if (k == 17) { // error! code not found j->code_bits -= 16; return -1; } if (k > j->code_bits) return -1; // convert the huffman code to the symbol id c = ((j->code_buffer >> (32 - k)) & stbi__bmask[k]) + h->delta[k]; STBI_ASSERT((((j->code_buffer) >> (32 - h->size[c])) & stbi__bmask[h->size[c]]) == h->code[c]); // convert the id to a symbol j->code_bits -= k; j->code_buffer <<= k; return h->values[c]; } // bias[n] = (-1<<n) + 1 static const int stbi__jbias[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 }; // combined JPEG 'receive' and JPEG 'extend', since baseline // always extends everything it receives. stbi_inline static int stbi__extend_receive(stbi__jpeg *j, int n) { unsigned int k; int sgn; if (j->code_bits < n) stbi__grow_buffer_unsafe(j); sgn = (stbi__int32)j->code_buffer >> 31; // sign bit is always in MSB k = stbi_lrot(j->code_buffer, n); STBI_ASSERT(n >= 0 && n < (int)(sizeof(stbi__bmask) / sizeof(*stbi__bmask))); j->code_buffer = k & ~stbi__bmask[n]; k &= stbi__bmask[n]; j->code_bits -= n; return k + (stbi__jbias[n] & ~sgn); } // get some unsigned bits stbi_inline static int stbi__jpeg_get_bits(stbi__jpeg *j, int n) { unsigned int k; if (j->code_bits < n) stbi__grow_buffer_unsafe(j); k = stbi_lrot(j->code_buffer, n); j->code_buffer = k & ~stbi__bmask[n]; k &= stbi__bmask[n]; j->code_bits -= n; return k; } stbi_inline static int stbi__jpeg_get_bit(stbi__jpeg *j) { unsigned int k; if (j->code_bits < 1) stbi__grow_buffer_unsafe(j); k = j->code_buffer; j->code_buffer <<= 1; --j->code_bits; return k & 0x80000000; } // given a value that's at position X in the zigzag stream, // where does it appear in the 8x8 matrix coded as row-major?
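The receive/extend arithmetic used around `stbi__extend_receive` / `stbi__jbias` can be sketched in isolation. This is an illustrative standalone fragment, not part of stb_image; `jpeg_extend_branch` and `jpeg_extend_bias` are hypothetical names. The bias form mirrors the `stbi__jbias` table: bias[n] = (-1<<n) + 1, added only when the sign bit of the n-bit value is clear.

```c
#include <assert.h>

/* Hypothetical standalone sketch of the JPEG "extend" step: an n-bit raw
 * magnitude k decodes to a negative value when its top bit is clear.
 * jpeg_extend_branch is the textbook branchy form; jpeg_extend_bias mirrors
 * the branchless bias-table trick, with bias[n] = (-1<<n) + 1 written here
 * as 1 - (1<<n) to stay within defined shift behavior. */
static int jpeg_extend_branch(int k, int n) {
	return (k < (1 << (n - 1))) ? k - ((1 << n) - 1) : k;
}

static int jpeg_extend_bias(int k, int n) {
	int bias = 1 - (1 << n); /* same value as stbi__jbias[n] */
	int sgn = (k >> (n - 1)) & 1; /* 1 when the top bit is set */
	return k + (bias & (sgn - 1)); /* add bias only when sgn == 0 */
}
```

For n = 3, the eight raw codes 0..7 decode to -7..-4 and 4..7, matching the JPEG magnitude categories.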
static const stbi_uc stbi__jpeg_dezigzag[64 + 15] = { 0, 1, 8, 16, 9, 2, 3, 10, 17, 24, 32, 25, 18, 11, 4, 5, 12, 19, 26, 33, 40, 48, 41, 34, 27, 20, 13, 6, 7, 14, 21, 28, 35, 42, 49, 56, 57, 50, 43, 36, 29, 22, 15, 23, 30, 37, 44, 51, 58, 59, 52, 45, 38, 31, 39, 46, 53, 60, 61, 54, 47, 55, 62, 63, // let corrupt input sample past end 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63 }; // decode one 64-entry block-- static int stbi__jpeg_decode_block(stbi__jpeg *j, short data[64], stbi__huffman *hdc, stbi__huffman *hac, stbi__int16 *fac, int b, stbi__uint16 *dequant) { int diff, dc, k; int t; if (j->code_bits < 16) stbi__grow_buffer_unsafe(j); t = stbi__jpeg_huff_decode(j, hdc); if (t < 0) return stbi__err("bad huffman code", "Corrupt JPEG"); // 0 all the ac values now so we can do it 32-bits at a time memset(data, 0, 64 * sizeof(data[0])); diff = t ? stbi__extend_receive(j, t) : 0; dc = j->img_comp[b].dc_pred + diff; j->img_comp[b].dc_pred = dc; data[0] = (short)(dc * dequant[0]); // decode AC components, see JPEG spec k = 1; do { unsigned int zig; int c, r, s; if (j->code_bits < 16) stbi__grow_buffer_unsafe(j); c = (j->code_buffer >> (32 - FAST_BITS)) & ((1 << FAST_BITS) - 1); r = fac[c]; if (r) { // fast-AC path k += (r >> 4) & 15; // run s = r & 15; // combined length j->code_buffer <<= s; j->code_bits -= s; // decode into unzigzag'd location zig = stbi__jpeg_dezigzag[k++]; data[zig] = (short)((r >> 8) * dequant[zig]); } else { int rs = stbi__jpeg_huff_decode(j, hac); if (rs < 0) return stbi__err("bad huffman code", "Corrupt JPEG"); s = rs & 15; r = rs >> 4; if (s == 0) { if (rs != 0xf0) break; // end block k += 16; } else { k += r; // decode into unzigzag'd location zig = stbi__jpeg_dezigzag[k++]; data[zig] = (short)(stbi__extend_receive(j, s) * dequant[zig]); } } } while (k < 64); return 1; } static int stbi__jpeg_decode_block_prog_dc(stbi__jpeg *j, short data[64], stbi__huffman *hdc, int b) { int diff, dc; int t; if (j->spec_end != 0) return 
stbi__err("can't merge dc and ac", "Corrupt JPEG"); if (j->code_bits < 16) stbi__grow_buffer_unsafe(j); if (j->succ_high == 0) { // first scan for DC coefficient, must be first memset(data, 0, 64 * sizeof(data[0])); // 0 all the ac values now t = stbi__jpeg_huff_decode(j, hdc); diff = t ? stbi__extend_receive(j, t) : 0; dc = j->img_comp[b].dc_pred + diff; j->img_comp[b].dc_pred = dc; data[0] = (short)(dc << j->succ_low); } else { // refinement scan for DC coefficient if (stbi__jpeg_get_bit(j)) data[0] += (short)(1 << j->succ_low); } return 1; } // @OPTIMIZE: store non-zigzagged during the decode passes, // and only de-zigzag when dequantizing static int stbi__jpeg_decode_block_prog_ac(stbi__jpeg *j, short data[64], stbi__huffman *hac, stbi__int16 *fac) { int k; if (j->spec_start == 0) return stbi__err("can't merge dc and ac", "Corrupt JPEG"); if (j->succ_high == 0) { int shift = j->succ_low; if (j->eob_run) { --j->eob_run; return 1; } k = j->spec_start; do { unsigned int zig; int c, r, s; if (j->code_bits < 16) stbi__grow_buffer_unsafe(j); c = (j->code_buffer >> (32 - FAST_BITS)) & ((1 << FAST_BITS) - 1); r = fac[c]; if (r) { // fast-AC path k += (r >> 4) & 15; // run s = r & 15; // combined length j->code_buffer <<= s; j->code_bits -= s; zig = stbi__jpeg_dezigzag[k++]; data[zig] = (short)((r >> 8) << shift); } else { int rs = stbi__jpeg_huff_decode(j, hac); if (rs < 0) return stbi__err("bad huffman code", "Corrupt JPEG"); s = rs & 15; r = rs >> 4; if (s == 0) { if (r < 15) { j->eob_run = (1 << r); if (r) j->eob_run += stbi__jpeg_get_bits(j, r); --j->eob_run; break; } k += 16; } else { k += r; zig = stbi__jpeg_dezigzag[k++]; data[zig] = (short)(stbi__extend_receive(j, s) << shift); } } } while (k <= j->spec_end); } else { // refinement scan for these AC coefficients short bit = (short)(1 << j->succ_low); if (j->eob_run) { --j->eob_run; for (k = j->spec_start; k <= j->spec_end; ++k) { short *p = &data[stbi__jpeg_dezigzag[k]]; if (*p != 0) if (stbi__jpeg_get_bit(j)) 
if ((*p & bit) == 0) { if (*p > 0) *p += bit; else *p -= bit; } } } else { k = j->spec_start; do { int r, s; int rs = stbi__jpeg_huff_decode(j, hac); // @OPTIMIZE see if we can use the fast path here, advance-by-r is so slow, eh if (rs < 0) return stbi__err("bad huffman code", "Corrupt JPEG"); s = rs & 15; r = rs >> 4; if (s == 0) { if (r < 15) { j->eob_run = (1 << r) - 1; if (r) j->eob_run += stbi__jpeg_get_bits(j, r); r = 64; // force end of block } else { // r=15 s=0 should write 16 0s, so we just do // a run of 15 0s and then write s (which is 0), // so we don't have to do anything special here } } else { if (s != 1) return stbi__err("bad huffman code", "Corrupt JPEG"); // sign bit if (stbi__jpeg_get_bit(j)) s = bit; else s = -bit; } // advance by r while (k <= j->spec_end) { short *p = &data[stbi__jpeg_dezigzag[k++]]; if (*p != 0) { if (stbi__jpeg_get_bit(j)) if ((*p & bit) == 0) { if (*p > 0) *p += bit; else *p -= bit; } } else { if (r == 0) { *p = (short)s; break; } --r; } } } while (k <= j->spec_end); } } return 1; } // take a -128..127 value and stbi__clamp it and convert to 0..255 stbi_inline static stbi_uc stbi__clamp(int x) { // trick to use a single test to catch both cases if ((unsigned int)x > 255) { if (x < 0) return 0; if (x > 255) return 255; } return (stbi_uc)x; } #define stbi__f2f(x) ((int)(((x)*4096 + 0.5))) #define stbi__fsh(x) ((x)*4096) // derived from jidctint -- DCT_ISLOW #define STBI__IDCT_1D(s0, s1, s2, s3, s4, s5, s6, s7) \ int t0, t1, t2, t3, p1, p2, p3, p4, p5, x0, x1, x2, x3; \ p2 = s2; \ p3 = s6; \ p1 = (p2 + p3) * stbi__f2f(0.5411961f); \ t2 = p1 + p3 * stbi__f2f(-1.847759065f); \ t3 = p1 + p2 * stbi__f2f(0.765366865f); \ p2 = s0; \ p3 = s4; \ t0 = stbi__fsh(p2 + p3); \ t1 = stbi__fsh(p2 - p3); \ x0 = t0 + t3; \ x3 = t0 - t3; \ x1 = t1 + t2; \ x2 = t1 - t2; \ t0 = s7; \ t1 = s5; \ t2 = s3; \ t3 = s1; \ p3 = t0 + t2; \ p4 = t1 + t3; \ p1 = t0 + t3; \ p2 = t1 + t2; \ p5 = (p3 + p4) * stbi__f2f(1.175875602f); \ t0 = t0 * 
stbi__f2f(0.298631336f); \ t1 = t1 * stbi__f2f(2.053119869f); \ t2 = t2 * stbi__f2f(3.072711026f); \ t3 = t3 * stbi__f2f(1.501321110f); \ p1 = p5 + p1 * stbi__f2f(-0.899976223f); \ p2 = p5 + p2 * stbi__f2f(-2.562915447f); \ p3 = p3 * stbi__f2f(-1.961570560f); \ p4 = p4 * stbi__f2f(-0.390180644f); \ t3 += p1 + p4; \ t2 += p2 + p3; \ t1 += p2 + p4; \ t0 += p1 + p3; static void stbi__idct_block(stbi_uc *out, int out_stride, short data[64]) { int i, val[64], *v = val; stbi_uc *o; short *d = data; // columns for (i = 0; i < 8; ++i, ++d, ++v) { // if all zeroes, shortcut -- this avoids dequantizing 0s and IDCTing if (d[8] == 0 && d[16] == 0 && d[24] == 0 && d[32] == 0 && d[40] == 0 && d[48] == 0 && d[56] == 0) { // no shortcut 0 seconds // (1|2|3|4|5|6|7)==0 0 seconds // all separate -0.047 seconds // 1 && 2|3 && 4|5 && 6|7: -0.047 seconds int dcterm = d[0] * 4; v[0] = v[8] = v[16] = v[24] = v[32] = v[40] = v[48] = v[56] = dcterm; } else { STBI__IDCT_1D(d[0], d[8], d[16], d[24], d[32], d[40], d[48], d[56]) // constants scaled things up by 1<<12; let's bring them back // down, but keep 2 extra bits of precision x0 += 512; x1 += 512; x2 += 512; x3 += 512; v[0] = (x0 + t3) >> 10; v[56] = (x0 - t3) >> 10; v[8] = (x1 + t2) >> 10; v[48] = (x1 - t2) >> 10; v[16] = (x2 + t1) >> 10; v[40] = (x2 - t1) >> 10; v[24] = (x3 + t0) >> 10; v[32] = (x3 - t0) >> 10; } } for (i = 0, v = val, o = out; i < 8; ++i, v += 8, o += out_stride) { // no fast case since the first 1D IDCT spread components out STBI__IDCT_1D(v[0], v[1], v[2], v[3], v[4], v[5], v[6], v[7]) // constants scaled things up by 1<<12, plus we had 1<<2 from first // loop, plus horizontal and vertical each scale by sqrt(8) so together // we've got an extra 1<<3, so 1<<17 total we need to remove. // so we want to round that, which means adding 0.5 * 1<<17, // aka 65536. 
Also, we'll end up with -128 to 127 that we want // to encode as 0..255 by adding 128, so we'll add that before the shift x0 += 65536 + (128 << 17); x1 += 65536 + (128 << 17); x2 += 65536 + (128 << 17); x3 += 65536 + (128 << 17); // tried computing the shifts into temps, or'ing the temps to see // if any were out of range, but that was slower o[0] = stbi__clamp((x0 + t3) >> 17); o[7] = stbi__clamp((x0 - t3) >> 17); o[1] = stbi__clamp((x1 + t2) >> 17); o[6] = stbi__clamp((x1 - t2) >> 17); o[2] = stbi__clamp((x2 + t1) >> 17); o[5] = stbi__clamp((x2 - t1) >> 17); o[3] = stbi__clamp((x3 + t0) >> 17); o[4] = stbi__clamp((x3 - t0) >> 17); } } #ifdef STBI_SSE2 // sse2 integer IDCT. not the fastest possible implementation but it // produces bit-identical results to the generic C version so it's // fully "transparent". static void stbi__idct_simd(stbi_uc *out, int out_stride, short data[64]) { // This is constructed to match our regular (generic) integer IDCT exactly. __m128i row0, row1, row2, row3, row4, row5, row6, row7; __m128i tmp; // dot product constant: even elems=x, odd elems=y #define dct_const(x, y) _mm_setr_epi16((x), (y), (x), (y), (x), (y), (x), (y)) // out(0) = c0[even]*x + c0[odd]*y (c0, x, y 16-bit, out 32-bit) // out(1) = c1[even]*x + c1[odd]*y #define dct_rot(out0, out1, x, y, c0, c1) \ __m128i c0##lo = _mm_unpacklo_epi16((x), (y)); \ __m128i c0##hi = _mm_unpackhi_epi16((x), (y)); \ __m128i out0##_l = _mm_madd_epi16(c0##lo, c0); \ __m128i out0##_h = _mm_madd_epi16(c0##hi, c0); \ __m128i out1##_l = _mm_madd_epi16(c0##lo, c1); \ __m128i out1##_h = _mm_madd_epi16(c0##hi, c1) // out = in << 12 (in 16-bit, out 32-bit) #define dct_widen(out, in) \ __m128i out##_l = _mm_srai_epi32(_mm_unpacklo_epi16(_mm_setzero_si128(), (in)), 4); \ __m128i out##_h = _mm_srai_epi32(_mm_unpackhi_epi16(_mm_setzero_si128(), (in)), 4) // wide add #define dct_wadd(out, a, b) \ __m128i out##_l = _mm_add_epi32(a##_l, b##_l); \ __m128i out##_h = _mm_add_epi32(a##_h, b##_h) // wide sub 
#define dct_wsub(out, a, b) \ __m128i out##_l = _mm_sub_epi32(a##_l, b##_l); \ __m128i out##_h = _mm_sub_epi32(a##_h, b##_h) // butterfly a/b, add bias, then shift by "s" and pack #define dct_bfly32o(out0, out1, a, b, bias, s) \ { \ __m128i abiased_l = _mm_add_epi32(a##_l, bias); \ __m128i abiased_h = _mm_add_epi32(a##_h, bias); \ dct_wadd(sum, abiased, b); \ dct_wsub(dif, abiased, b); \ out0 = _mm_packs_epi32(_mm_srai_epi32(sum_l, s), _mm_srai_epi32(sum_h, s)); \ out1 = _mm_packs_epi32(_mm_srai_epi32(dif_l, s), _mm_srai_epi32(dif_h, s)); \ } // 8-bit interleave step (for transposes) #define dct_interleave8(a, b) \ tmp = a; \ a = _mm_unpacklo_epi8(a, b); \ b = _mm_unpackhi_epi8(tmp, b) // 16-bit interleave step (for transposes) #define dct_interleave16(a, b) \ tmp = a; \ a = _mm_unpacklo_epi16(a, b); \ b = _mm_unpackhi_epi16(tmp, b) #define dct_pass(bias, shift) \ { \ /* even part */ \ dct_rot(t2e, t3e, row2, row6, rot0_0, rot0_1); \ __m128i sum04 = _mm_add_epi16(row0, row4); \ __m128i dif04 = _mm_sub_epi16(row0, row4); \ dct_widen(t0e, sum04); \ dct_widen(t1e, dif04); \ dct_wadd(x0, t0e, t3e); \ dct_wsub(x3, t0e, t3e); \ dct_wadd(x1, t1e, t2e); \ dct_wsub(x2, t1e, t2e); \ /* odd part */ \ dct_rot(y0o, y2o, row7, row3, rot2_0, rot2_1); \ dct_rot(y1o, y3o, row5, row1, rot3_0, rot3_1); \ __m128i sum17 = _mm_add_epi16(row1, row7); \ __m128i sum35 = _mm_add_epi16(row3, row5); \ dct_rot(y4o, y5o, sum17, sum35, rot1_0, rot1_1); \ dct_wadd(x4, y0o, y4o); \ dct_wadd(x5, y1o, y5o); \ dct_wadd(x6, y2o, y5o); \ dct_wadd(x7, y3o, y4o); \ dct_bfly32o(row0, row7, x0, x7, bias, shift); \ dct_bfly32o(row1, row6, x1, x6, bias, shift); \ dct_bfly32o(row2, row5, x2, x5, bias, shift); \ dct_bfly32o(row3, row4, x3, x4, bias, shift); \ } __m128i rot0_0 = dct_const(stbi__f2f(0.5411961f), stbi__f2f(0.5411961f) + stbi__f2f(-1.847759065f)); __m128i rot0_1 = dct_const(stbi__f2f(0.5411961f) + stbi__f2f(0.765366865f), stbi__f2f(0.5411961f)); __m128i rot1_0 = dct_const(stbi__f2f(1.175875602f) + 
stbi__f2f(-0.899976223f), stbi__f2f(1.175875602f)); __m128i rot1_1 = dct_const(stbi__f2f(1.175875602f), stbi__f2f(1.175875602f) + stbi__f2f(-2.562915447f)); __m128i rot2_0 = dct_const(stbi__f2f(-1.961570560f) + stbi__f2f(0.298631336f), stbi__f2f(-1.961570560f)); __m128i rot2_1 = dct_const(stbi__f2f(-1.961570560f), stbi__f2f(-1.961570560f) + stbi__f2f(3.072711026f)); __m128i rot3_0 = dct_const(stbi__f2f(-0.390180644f) + stbi__f2f(2.053119869f), stbi__f2f(-0.390180644f)); __m128i rot3_1 = dct_const(stbi__f2f(-0.390180644f), stbi__f2f(-0.390180644f) + stbi__f2f(1.501321110f)); // rounding biases in column/row passes, see stbi__idct_block for explanation. __m128i bias_0 = _mm_set1_epi32(512); __m128i bias_1 = _mm_set1_epi32(65536 + (128 << 17)); // load row0 = _mm_load_si128((const __m128i *)(data + 0 * 8)); row1 = _mm_load_si128((const __m128i *)(data + 1 * 8)); row2 = _mm_load_si128((const __m128i *)(data + 2 * 8)); row3 = _mm_load_si128((const __m128i *)(data + 3 * 8)); row4 = _mm_load_si128((const __m128i *)(data + 4 * 8)); row5 = _mm_load_si128((const __m128i *)(data + 5 * 8)); row6 = _mm_load_si128((const __m128i *)(data + 6 * 8)); row7 = _mm_load_si128((const __m128i *)(data + 7 * 8)); // column pass dct_pass(bias_0, 10); { // 16bit 8x8 transpose pass 1 dct_interleave16(row0, row4); dct_interleave16(row1, row5); dct_interleave16(row2, row6); dct_interleave16(row3, row7); // transpose pass 2 dct_interleave16(row0, row2); dct_interleave16(row1, row3); dct_interleave16(row4, row6); dct_interleave16(row5, row7); // transpose pass 3 dct_interleave16(row0, row1); dct_interleave16(row2, row3); dct_interleave16(row4, row5); dct_interleave16(row6, row7); } // row pass dct_pass(bias_1, 17); { // pack __m128i p0 = _mm_packus_epi16(row0, row1); // a0a1a2a3...a7b0b1b2b3...b7 __m128i p1 = _mm_packus_epi16(row2, row3); __m128i p2 = _mm_packus_epi16(row4, row5); __m128i p3 = _mm_packus_epi16(row6, row7); // 8bit 8x8 transpose pass 1 dct_interleave8(p0, p2); // a0e0a1e1... 
dct_interleave8(p1, p3); // c0g0c1g1... // transpose pass 2 dct_interleave8(p0, p1); // a0c0e0g0... dct_interleave8(p2, p3); // b0d0f0h0... // transpose pass 3 dct_interleave8(p0, p2); // a0b0c0d0... dct_interleave8(p1, p3); // a4b4c4d4... // store _mm_storel_epi64((__m128i *)out, p0); out += out_stride; _mm_storel_epi64((__m128i *)out, _mm_shuffle_epi32(p0, 0x4e)); out += out_stride; _mm_storel_epi64((__m128i *)out, p2); out += out_stride; _mm_storel_epi64((__m128i *)out, _mm_shuffle_epi32(p2, 0x4e)); out += out_stride; _mm_storel_epi64((__m128i *)out, p1); out += out_stride; _mm_storel_epi64((__m128i *)out, _mm_shuffle_epi32(p1, 0x4e)); out += out_stride; _mm_storel_epi64((__m128i *)out, p3); out += out_stride; _mm_storel_epi64((__m128i *)out, _mm_shuffle_epi32(p3, 0x4e)); } #undef dct_const #undef dct_rot #undef dct_widen #undef dct_wadd #undef dct_wsub #undef dct_bfly32o #undef dct_interleave8 #undef dct_interleave16 #undef dct_pass } #endif // STBI_SSE2 #ifdef STBI_NEON // NEON integer IDCT. should produce bit-identical // results to the generic C version. 
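The 8x8 byte transpose in the SSE2 kernel above is built entirely from unpacklo/unpackhi interleaves: three passes, after which rows emerge in the order p0, p2, p1, p3. A minimal scalar model of that pass structure (illustration only; `interleave8` and `transpose8x8` are hypothetical helpers, not stb_image functions):

```c
#include <string.h>

/* Scalar model of _mm_unpacklo_epi8 / _mm_unpackhi_epi8 on one register
 * pair: a becomes the byte interleave of the low halves of a and b, and b
 * the interleave of the high halves of the ORIGINAL a and b. */
static void interleave8(unsigned char a[16], unsigned char b[16]) {
	unsigned char lo[16], hi[16];
	for (int i = 0; i < 8; ++i) {
		lo[2 * i] = a[i];
		lo[2 * i + 1] = b[i];
		hi[2 * i] = a[8 + i];
		hi[2 * i + 1] = b[8 + i];
	}
	memcpy(a, lo, 16);
	memcpy(b, hi, 16);
}

/* Transpose a row-major 8x8 byte matrix with the same three interleave
 * passes as the SSE2 kernel (each "register" holds two rows). */
static void transpose8x8(const unsigned char in[64], unsigned char out[64]) {
	unsigned char p[4][16];
	memcpy(p[0], in, 16);      /* rows 0,1 */
	memcpy(p[1], in + 16, 16); /* rows 2,3 */
	memcpy(p[2], in + 32, 16); /* rows 4,5 */
	memcpy(p[3], in + 48, 16); /* rows 6,7 */
	interleave8(p[0], p[2]); interleave8(p[1], p[3]); /* pass 1: a0e0a1e1... */
	interleave8(p[0], p[1]); interleave8(p[2], p[3]); /* pass 2: a0c0e0g0... */
	interleave8(p[0], p[2]); interleave8(p[1], p[3]); /* pass 3: a0b0c0d0... */
	/* output rows land as p0={0,1}, p2={2,3}, p1={4,5}, p3={6,7} */
	memcpy(out, p[0], 16);
	memcpy(out + 16, p[2], 16);
	memcpy(out + 32, p[1], 16);
	memcpy(out + 48, p[3], 16);
}
```

The final copy order is why the SIMD version stores p0, p2, p1, p3 (with a 64-bit-half shuffle between the two rows of each register) rather than p0..p3 in sequence.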
static void stbi__idct_simd(stbi_uc *out, int out_stride, short data[64]) { int16x8_t row0, row1, row2, row3, row4, row5, row6, row7; int16x4_t rot0_0 = vdup_n_s16(stbi__f2f(0.5411961f)); int16x4_t rot0_1 = vdup_n_s16(stbi__f2f(-1.847759065f)); int16x4_t rot0_2 = vdup_n_s16(stbi__f2f(0.765366865f)); int16x4_t rot1_0 = vdup_n_s16(stbi__f2f(1.175875602f)); int16x4_t rot1_1 = vdup_n_s16(stbi__f2f(-0.899976223f)); int16x4_t rot1_2 = vdup_n_s16(stbi__f2f(-2.562915447f)); int16x4_t rot2_0 = vdup_n_s16(stbi__f2f(-1.961570560f)); int16x4_t rot2_1 = vdup_n_s16(stbi__f2f(-0.390180644f)); int16x4_t rot3_0 = vdup_n_s16(stbi__f2f(0.298631336f)); int16x4_t rot3_1 = vdup_n_s16(stbi__f2f(2.053119869f)); int16x4_t rot3_2 = vdup_n_s16(stbi__f2f(3.072711026f)); int16x4_t rot3_3 = vdup_n_s16(stbi__f2f(1.501321110f)); #define dct_long_mul(out, inq, coeff) \ int32x4_t out##_l = vmull_s16(vget_low_s16(inq), coeff); \ int32x4_t out##_h = vmull_s16(vget_high_s16(inq), coeff) #define dct_long_mac(out, acc, inq, coeff) \ int32x4_t out##_l = vmlal_s16(acc##_l, vget_low_s16(inq), coeff); \ int32x4_t out##_h = vmlal_s16(acc##_h, vget_high_s16(inq), coeff) #define dct_widen(out, inq) \ int32x4_t out##_l = vshll_n_s16(vget_low_s16(inq), 12); \ int32x4_t out##_h = vshll_n_s16(vget_high_s16(inq), 12) // wide add #define dct_wadd(out, a, b) \ int32x4_t out##_l = vaddq_s32(a##_l, b##_l); \ int32x4_t out##_h = vaddq_s32(a##_h, b##_h) // wide sub #define dct_wsub(out, a, b) \ int32x4_t out##_l = vsubq_s32(a##_l, b##_l); \ int32x4_t out##_h = vsubq_s32(a##_h, b##_h) // butterfly a/b, then shift using "shiftop" by "s" and pack #define dct_bfly32o(out0, out1, a, b, shiftop, s) \ { \ dct_wadd(sum, a, b); \ dct_wsub(dif, a, b); \ out0 = vcombine_s16(shiftop(sum_l, s), shiftop(sum_h, s)); \ out1 = vcombine_s16(shiftop(dif_l, s), shiftop(dif_h, s)); \ } #define dct_pass(shiftop, shift) \ { \ /* even part */ \ int16x8_t sum26 = vaddq_s16(row2, row6); \ dct_long_mul(p1e, sum26, rot0_0); \ dct_long_mac(t2e, p1e, 
row6, rot0_1); \ dct_long_mac(t3e, p1e, row2, rot0_2); \ int16x8_t sum04 = vaddq_s16(row0, row4); \ int16x8_t dif04 = vsubq_s16(row0, row4); \ dct_widen(t0e, sum04); \ dct_widen(t1e, dif04); \ dct_wadd(x0, t0e, t3e); \ dct_wsub(x3, t0e, t3e); \ dct_wadd(x1, t1e, t2e); \ dct_wsub(x2, t1e, t2e); \ /* odd part */ \ int16x8_t sum15 = vaddq_s16(row1, row5); \ int16x8_t sum17 = vaddq_s16(row1, row7); \ int16x8_t sum35 = vaddq_s16(row3, row5); \ int16x8_t sum37 = vaddq_s16(row3, row7); \ int16x8_t sumodd = vaddq_s16(sum17, sum35); \ dct_long_mul(p5o, sumodd, rot1_0); \ dct_long_mac(p1o, p5o, sum17, rot1_1); \ dct_long_mac(p2o, p5o, sum35, rot1_2); \ dct_long_mul(p3o, sum37, rot2_0); \ dct_long_mul(p4o, sum15, rot2_1); \ dct_wadd(sump13o, p1o, p3o); \ dct_wadd(sump24o, p2o, p4o); \ dct_wadd(sump23o, p2o, p3o); \ dct_wadd(sump14o, p1o, p4o); \ dct_long_mac(x4, sump13o, row7, rot3_0); \ dct_long_mac(x5, sump24o, row5, rot3_1); \ dct_long_mac(x6, sump23o, row3, rot3_2); \ dct_long_mac(x7, sump14o, row1, rot3_3); \ dct_bfly32o(row0, row7, x0, x7, shiftop, shift); \ dct_bfly32o(row1, row6, x1, x6, shiftop, shift); \ dct_bfly32o(row2, row5, x2, x5, shiftop, shift); \ dct_bfly32o(row3, row4, x3, x4, shiftop, shift); \ } // load row0 = vld1q_s16(data + 0 * 8); row1 = vld1q_s16(data + 1 * 8); row2 = vld1q_s16(data + 2 * 8); row3 = vld1q_s16(data + 3 * 8); row4 = vld1q_s16(data + 4 * 8); row5 = vld1q_s16(data + 5 * 8); row6 = vld1q_s16(data + 6 * 8); row7 = vld1q_s16(data + 7 * 8); // add DC bias row0 = vaddq_s16(row0, vsetq_lane_s16(1024, vdupq_n_s16(0), 0)); // column pass dct_pass(vrshrn_n_s32, 10); // 16bit 8x8 transpose { // these three map to a single VTRN.16, VTRN.32, and VSWP, respectively. // whether compilers actually get this is another story, sadly. 
#define dct_trn16(x, y) \ { \ int16x8x2_t t = vtrnq_s16(x, y); \ x = t.val[0]; \ y = t.val[1]; \ } #define dct_trn32(x, y) \ { \ int32x4x2_t t = vtrnq_s32(vreinterpretq_s32_s16(x), vreinterpretq_s32_s16(y)); \ x = vreinterpretq_s16_s32(t.val[0]); \ y = vreinterpretq_s16_s32(t.val[1]); \ } #define dct_trn64(x, y) \ { \ int16x8_t x0 = x; \ int16x8_t y0 = y; \ x = vcombine_s16(vget_low_s16(x0), vget_low_s16(y0)); \ y = vcombine_s16(vget_high_s16(x0), vget_high_s16(y0)); \ } // pass 1 dct_trn16(row0, row1); // a0b0a2b2a4b4a6b6 dct_trn16(row2, row3); dct_trn16(row4, row5); dct_trn16(row6, row7); // pass 2 dct_trn32(row0, row2); // a0b0c0d0a4b4c4d4 dct_trn32(row1, row3); dct_trn32(row4, row6); dct_trn32(row5, row7); // pass 3 dct_trn64(row0, row4); // a0b0c0d0e0f0g0h0 dct_trn64(row1, row5); dct_trn64(row2, row6); dct_trn64(row3, row7); #undef dct_trn16 #undef dct_trn32 #undef dct_trn64 } // row pass // vrshrn_n_s32 only supports shifts up to 16, we need // 17. so do a non-rounding shift of 16 first then follow // up with a rounding shift by 1. dct_pass(vshrn_n_s32, 16); { // pack and round uint8x8_t p0 = vqrshrun_n_s16(row0, 1); uint8x8_t p1 = vqrshrun_n_s16(row1, 1); uint8x8_t p2 = vqrshrun_n_s16(row2, 1); uint8x8_t p3 = vqrshrun_n_s16(row3, 1); uint8x8_t p4 = vqrshrun_n_s16(row4, 1); uint8x8_t p5 = vqrshrun_n_s16(row5, 1); uint8x8_t p6 = vqrshrun_n_s16(row6, 1); uint8x8_t p7 = vqrshrun_n_s16(row7, 1); // again, these can translate into one instruction, but often don't. 
#define dct_trn8_8(x, y) \ { \ uint8x8x2_t t = vtrn_u8(x, y); \ x = t.val[0]; \ y = t.val[1]; \ } #define dct_trn8_16(x, y) \ { \ uint16x4x2_t t = vtrn_u16(vreinterpret_u16_u8(x), vreinterpret_u16_u8(y)); \ x = vreinterpret_u8_u16(t.val[0]); \ y = vreinterpret_u8_u16(t.val[1]); \ } #define dct_trn8_32(x, y) \ { \ uint32x2x2_t t = vtrn_u32(vreinterpret_u32_u8(x), vreinterpret_u32_u8(y)); \ x = vreinterpret_u8_u32(t.val[0]); \ y = vreinterpret_u8_u32(t.val[1]); \ } // sadly can't use interleaved stores here since we only write // 8 bytes to each scan line! // 8x8 8-bit transpose pass 1 dct_trn8_8(p0, p1); dct_trn8_8(p2, p3); dct_trn8_8(p4, p5); dct_trn8_8(p6, p7); // pass 2 dct_trn8_16(p0, p2); dct_trn8_16(p1, p3); dct_trn8_16(p4, p6); dct_trn8_16(p5, p7); // pass 3 dct_trn8_32(p0, p4); dct_trn8_32(p1, p5); dct_trn8_32(p2, p6); dct_trn8_32(p3, p7); // store vst1_u8(out, p0); out += out_stride; vst1_u8(out, p1); out += out_stride; vst1_u8(out, p2); out += out_stride; vst1_u8(out, p3); out += out_stride; vst1_u8(out, p4); out += out_stride; vst1_u8(out, p5); out += out_stride; vst1_u8(out, p6); out += out_stride; vst1_u8(out, p7); #undef dct_trn8_8 #undef dct_trn8_16 #undef dct_trn8_32 } #undef dct_long_mul #undef dct_long_mac #undef dct_widen #undef dct_wadd #undef dct_wsub #undef dct_bfly32o #undef dct_pass } #endif // STBI_NEON #define STBI__MARKER_none 0xff // if there's a pending marker from the entropy stream, return that // otherwise, fetch from the stream and get a marker. 
if there's no // marker, return 0xff, which is never a valid marker value static stbi_uc stbi__get_marker(stbi__jpeg *j) { stbi_uc x; if (j->marker != STBI__MARKER_none) { x = j->marker; j->marker = STBI__MARKER_none; return x; } x = stbi__get8(j->s); if (x != 0xff) return STBI__MARKER_none; while (x == 0xff) x = stbi__get8(j->s); // consume repeated 0xff fill bytes return x; } // in each scan, we'll have scan_n components, and the order // of the components is specified by order[] #define STBI__RESTART(x) ((x) >= 0xd0 && (x) <= 0xd7) // after a restart interval, stbi__jpeg_reset the entropy decoder and // the dc prediction static void stbi__jpeg_reset(stbi__jpeg *j) { j->code_bits = 0; j->code_buffer = 0; j->nomore = 0; j->img_comp[0].dc_pred = j->img_comp[1].dc_pred = j->img_comp[2].dc_pred = j->img_comp[3].dc_pred = 0; j->marker = STBI__MARKER_none; j->todo = j->restart_interval ? j->restart_interval : 0x7fffffff; j->eob_run = 0; // no more than 1<<31 MCUs if no restart_interal? that's plenty safe, // since we don't even allow 1<<30 pixels } static int stbi__parse_entropy_coded_data(stbi__jpeg *z) { stbi__jpeg_reset(z); if (!z->progressive) { if (z->scan_n == 1) { int i, j; STBI_SIMD_ALIGN(short, data[64]); int n = z->order[0]; // non-interleaved data, we just need to process one block at a time, // in trivial scanline order // number of blocks to do just depends on how many actual "pixels" this // component has, independent of interleaved MCU blocking and such int w = (z->img_comp[n].x + 7) >> 3; int h = (z->img_comp[n].y + 7) >> 3; for (j = 0; j < h; ++j) { for (i = 0; i < w; ++i) { int ha = z->img_comp[n].ha; if (!stbi__jpeg_decode_block(z, data, z->huff_dc + z->img_comp[n].hd, z->huff_ac + ha, z->fast_ac[ha], n, z->dequant[z->img_comp[n].tq])) return 0; z->idct_block_kernel(z->img_comp[n].data + z->img_comp[n].w2 * j * 8 + i * 8, z->img_comp[n].w2, data); // every data block is an MCU, so countdown the restart interval if (--z->todo <= 0) { if (z->code_bits 
< 24) stbi__grow_buffer_unsafe(z);
					// if it's NOT a restart, then just bail, so we get corrupt data
					// rather than no data
					if (!STBI__RESTART(z->marker)) return 1;
					stbi__jpeg_reset(z);
				}
			}
		}
		return 1;
	} else { // interleaved
		int i, j, k, x, y;
		STBI_SIMD_ALIGN(short, data[64]);
		for (j = 0; j < z->img_mcu_y; ++j) {
			for (i = 0; i < z->img_mcu_x; ++i) {
				// scan an interleaved mcu... process scan_n components in order
				for (k = 0; k < z->scan_n; ++k) {
					int n = z->order[k];
					// scan out an mcu's worth of this component; that's just determined
					// by the basic H and V specified for the component
					for (y = 0; y < z->img_comp[n].v; ++y) {
						for (x = 0; x < z->img_comp[n].h; ++x) {
							int x2 = (i * z->img_comp[n].h + x) * 8;
							int y2 = (j * z->img_comp[n].v + y) * 8;
							int ha = z->img_comp[n].ha;
							if (!stbi__jpeg_decode_block(z, data, z->huff_dc + z->img_comp[n].hd, z->huff_ac + ha, z->fast_ac[ha], n, z->dequant[z->img_comp[n].tq])) return 0;
							z->idct_block_kernel(z->img_comp[n].data + z->img_comp[n].w2 * y2 + x2, z->img_comp[n].w2, data);
						}
					}
				}
				// after all interleaved components, that's an interleaved MCU,
				// so now count down the restart interval
				if (--z->todo <= 0) {
					if (z->code_bits < 24) stbi__grow_buffer_unsafe(z);
					if (!STBI__RESTART(z->marker)) return 1;
					stbi__jpeg_reset(z);
				}
			}
		}
		return 1;
	}
} else {
	if (z->scan_n == 1) {
		int i, j;
		int n = z->order[0];
		// non-interleaved data, we just need to process one block at a time,
		// in trivial scanline order
		// number of blocks to do just depends on how many actual "pixels" this
		// component has, independent of interleaved MCU blocking and such
		int w = (z->img_comp[n].x + 7) >> 3;
		int h = (z->img_comp[n].y + 7) >> 3;
		for (j = 0; j < h; ++j) {
			for (i = 0; i < w; ++i) {
				short *data = z->img_comp[n].coeff + 64 * (i + j * z->img_comp[n].coeff_w);
				if (z->spec_start == 0) {
					if (!stbi__jpeg_decode_block_prog_dc(z, data, &z->huff_dc[z->img_comp[n].hd], n)) return 0;
				} else {
					int ha = z->img_comp[n].ha;
					if (!stbi__jpeg_decode_block_prog_ac(z, data, &z->huff_ac[ha], z->fast_ac[ha])) return 0;
				}
				// every data block is an MCU, so countdown the restart interval
				if (--z->todo <= 0) {
					if (z->code_bits < 24) stbi__grow_buffer_unsafe(z);
					if (!STBI__RESTART(z->marker)) return 1;
					stbi__jpeg_reset(z);
				}
			}
		}
		return 1;
	} else { // interleaved
		int i, j, k, x, y;
		for (j = 0; j < z->img_mcu_y; ++j) {
			for (i = 0; i < z->img_mcu_x; ++i) {
				// scan an interleaved mcu... process scan_n components in order
				for (k = 0; k < z->scan_n; ++k) {
					int n = z->order[k];
					// scan out an mcu's worth of this component; that's just determined
					// by the basic H and V specified for the component
					for (y = 0; y < z->img_comp[n].v; ++y) {
						for (x = 0; x < z->img_comp[n].h; ++x) {
							int x2 = (i * z->img_comp[n].h + x);
							int y2 = (j * z->img_comp[n].v + y);
							short *data = z->img_comp[n].coeff + 64 * (x2 + y2 * z->img_comp[n].coeff_w);
							if (!stbi__jpeg_decode_block_prog_dc(z, data, &z->huff_dc[z->img_comp[n].hd], n)) return 0;
						}
					}
				}
				// after all interleaved components, that's an interleaved MCU,
				// so now count down the restart interval
				if (--z->todo <= 0) {
					if (z->code_bits < 24) stbi__grow_buffer_unsafe(z);
					if (!STBI__RESTART(z->marker)) return 1;
					stbi__jpeg_reset(z);
				}
			}
		}
		return 1;
	}
}
}

static void stbi__jpeg_dequantize(short *data, stbi__uint16 *dequant) {
	int i;
	for (i = 0; i < 64; ++i)
		data[i] *= dequant[i];
}

static void stbi__jpeg_finish(stbi__jpeg *z) {
	if (z->progressive) {
		// dequantize and idct the data
		int i, j, n;
		for (n = 0; n < z->s->img_n; ++n) {
			int w = (z->img_comp[n].x + 7) >> 3;
			int h = (z->img_comp[n].y + 7) >> 3;
			for (j = 0; j < h; ++j) {
				for (i = 0; i < w; ++i) {
					short *data = z->img_comp[n].coeff + 64 * (i + j * z->img_comp[n].coeff_w);
					stbi__jpeg_dequantize(data, z->dequant[z->img_comp[n].tq]);
					z->idct_block_kernel(z->img_comp[n].data + z->img_comp[n].w2 * j * 8 + i * 8, z->img_comp[n].w2, data);
				}
			}
		}
	}
}

static int stbi__process_marker(stbi__jpeg *z, int m) {
	int L;
	switch (m) {
		case STBI__MARKER_none: // no marker found
			return stbi__err("expected marker", "Corrupt JPEG");

		case 0xDD: // DRI - specify restart interval
			if (stbi__get16be(z->s) != 4) return stbi__err("bad DRI len", "Corrupt JPEG");
			z->restart_interval = stbi__get16be(z->s);
			return 1;

		case 0xDB: // DQT - define quantization table
			L = stbi__get16be(z->s) - 2;
			while (L > 0) {
				int q = stbi__get8(z->s);
				int p = q >> 4, sixteen = (p != 0);
				int t = q & 15, i;
				if (p != 0 && p != 1) return stbi__err("bad DQT type", "Corrupt JPEG");
				if (t > 3) return stbi__err("bad DQT table", "Corrupt JPEG");

				for (i = 0; i < 64; ++i)
					z->dequant[t][stbi__jpeg_dezigzag[i]] = (stbi__uint16)(sixteen ? stbi__get16be(z->s) : stbi__get8(z->s));
				L -= (sixteen ? 129 : 65);
			}
			return L == 0;

		case 0xC4: // DHT - define huffman table
			L = stbi__get16be(z->s) - 2;
			while (L > 0) {
				stbi_uc *v;
				int sizes[16], i, n = 0;
				int q = stbi__get8(z->s);
				int tc = q >> 4;
				int th = q & 15;
				if (tc > 1 || th > 3) return stbi__err("bad DHT header", "Corrupt JPEG");
				for (i = 0; i < 16; ++i) {
					sizes[i] = stbi__get8(z->s);
					n += sizes[i];
				}
				L -= 17;
				if (tc == 0) {
					if (!stbi__build_huffman(z->huff_dc + th, sizes)) return 0;
					v = z->huff_dc[th].values;
				} else {
					if (!stbi__build_huffman(z->huff_ac + th, sizes)) return 0;
					v = z->huff_ac[th].values;
				}
				for (i = 0; i < n; ++i)
					v[i] = stbi__get8(z->s);
				if (tc != 0)
					stbi__build_fast_ac(z->fast_ac[th], z->huff_ac + th);
				L -= n;
			}
			return L == 0;
	}

	// check for comment block or APP blocks
	if ((m >= 0xE0 && m <= 0xEF) || m == 0xFE) {
		L = stbi__get16be(z->s);
		if (L < 2) {
			if (m == 0xFE)
				return stbi__err("bad COM len", "Corrupt JPEG");
			else
				return stbi__err("bad APP len", "Corrupt JPEG");
		}
		L -= 2;

		if (m == 0xE0 && L >= 5) { // JFIF APP0 segment
			static const unsigned char tag[5] = { 'J', 'F', 'I', 'F', '\0' };
			int ok = 1;
			int i;
			for (i = 0; i < 5; ++i)
				if (stbi__get8(z->s) != tag[i]) ok = 0;
			L -= 5;
			if (ok) z->jfif = 1;
		} else if (m == 0xEE && L >= 12) { // Adobe APP14 segment
			static const unsigned char tag[6] = { 'A', 'd', 'o', 'b', 'e', '\0' };
			int ok = 1;
			int i;
			for (i = 0; i < 6; ++i)
				if (stbi__get8(z->s) != tag[i]) ok = 0;
			L -= 6;
			if (ok) {
				stbi__get8(z->s); // version
				stbi__get16be(z->s); // flags0
				stbi__get16be(z->s); // flags1
				z->app14_color_transform = stbi__get8(z->s); // color transform
				L -= 6;
			}
		}

		stbi__skip(z->s, L);
		return 1;
	}

	return stbi__err("unknown marker", "Corrupt JPEG");
}

// after we see SOS
static int stbi__process_scan_header(stbi__jpeg *z) {
	int i;
	int Ls = stbi__get16be(z->s);
	z->scan_n = stbi__get8(z->s);
	if (z->scan_n < 1 || z->scan_n > 4 || z->scan_n > (int)z->s->img_n) return stbi__err("bad SOS component count", "Corrupt JPEG");
	if (Ls != 6 + 2 * z->scan_n) return stbi__err("bad SOS len", "Corrupt JPEG");
	for (i = 0; i < z->scan_n; ++i) {
		int id = stbi__get8(z->s), which;
		int q = stbi__get8(z->s);
		for (which = 0; which < z->s->img_n; ++which)
			if (z->img_comp[which].id == id)
				break;
		if (which == z->s->img_n) return 0; // no match
		z->img_comp[which].hd = q >> 4;
		if (z->img_comp[which].hd > 3) return stbi__err("bad DC huff", "Corrupt JPEG");
		z->img_comp[which].ha = q & 15;
		if (z->img_comp[which].ha > 3) return stbi__err("bad AC huff", "Corrupt JPEG");
		z->order[i] = which;
	}

	{
		int aa;
		z->spec_start = stbi__get8(z->s);
		z->spec_end = stbi__get8(z->s); // should be 63, but might be 0
		aa = stbi__get8(z->s);
		z->succ_high = (aa >> 4);
		z->succ_low = (aa & 15);
		if (z->progressive) {
			if (z->spec_start > 63 || z->spec_end > 63 || z->spec_start > z->spec_end || z->succ_high > 13 || z->succ_low > 13)
				return stbi__err("bad SOS", "Corrupt JPEG");
		} else {
			if (z->spec_start != 0) return stbi__err("bad SOS", "Corrupt JPEG");
			if (z->succ_high != 0 || z->succ_low != 0) return stbi__err("bad SOS", "Corrupt JPEG");
			z->spec_end = 63;
		}
	}

	return 1;
}

static int stbi__free_jpeg_components(stbi__jpeg *z, int ncomp, int why) {
	int i;
	for (i = 0; i < ncomp; ++i) {
		if (z->img_comp[i].raw_data) {
			STBI_FREE(z->img_comp[i].raw_data);
			z->img_comp[i].raw_data = NULL;
			z->img_comp[i].data = NULL;
		}
		if (z->img_comp[i].raw_coeff) {
			STBI_FREE(z->img_comp[i].raw_coeff);
			z->img_comp[i].raw_coeff = 0;
			z->img_comp[i].coeff = 0;
		}
		if (z->img_comp[i].linebuf) {
			STBI_FREE(z->img_comp[i].linebuf);
			z->img_comp[i].linebuf = NULL;
		}
	}
	return why;
}

static int stbi__process_frame_header(stbi__jpeg *z, int scan) {
	stbi__context *s = z->s;
	int Lf, p, i, q, h_max = 1, v_max = 1, c;
	Lf = stbi__get16be(s);
	if (Lf < 11) return stbi__err("bad SOF len", "Corrupt JPEG"); // JPEG
	p = stbi__get8(s);
	if (p != 8) return stbi__err("only 8-bit", "JPEG format not supported: 8-bit only"); // JPEG baseline
	s->img_y = stbi__get16be(s);
	if (s->img_y == 0) return stbi__err("no header height", "JPEG format not supported: delayed height"); // Legal, but we don't handle it--but neither does IJG
	s->img_x = stbi__get16be(s);
	if (s->img_x == 0) return stbi__err("0 width", "Corrupt JPEG"); // JPEG requires
	c = stbi__get8(s);
	if (c != 3 && c != 1 && c != 4) return stbi__err("bad component count", "Corrupt JPEG");
	s->img_n = c;
	for (i = 0; i < c; ++i) {
		z->img_comp[i].data = NULL;
		z->img_comp[i].linebuf = NULL;
	}

	if (Lf != 8 + 3 * s->img_n) return stbi__err("bad SOF len", "Corrupt JPEG");

	z->rgb = 0;
	for (i = 0; i < s->img_n; ++i) {
		static const unsigned char rgb[3] = { 'R', 'G', 'B' };
		z->img_comp[i].id = stbi__get8(s);
		if (s->img_n == 3 && z->img_comp[i].id == rgb[i])
			++z->rgb;
		q = stbi__get8(s);
		z->img_comp[i].h = (q >> 4);
		if (!z->img_comp[i].h || z->img_comp[i].h > 4) return stbi__err("bad H", "Corrupt JPEG");
		z->img_comp[i].v = q & 15;
		if (!z->img_comp[i].v || z->img_comp[i].v > 4) return stbi__err("bad V", "Corrupt JPEG");
		z->img_comp[i].tq = stbi__get8(s);
		if (z->img_comp[i].tq > 3) return stbi__err("bad TQ", "Corrupt JPEG");
	}

	if (scan != STBI__SCAN_load) return 1;

	if (!stbi__mad3sizes_valid(s->img_x, s->img_y, s->img_n, 0)) return stbi__err("too large", "Image too large to decode");

	for (i = 0; i < s->img_n; ++i) {
		if (z->img_comp[i].h > h_max) h_max = z->img_comp[i].h;
		if (z->img_comp[i].v > v_max) v_max = z->img_comp[i].v;
	}

	// compute interleaved mcu info
	z->img_h_max = h_max;
	z->img_v_max = v_max;
	z->img_mcu_w = h_max * 8;
	z->img_mcu_h = v_max * 8;
	// these sizes can't be more than 17 bits
	z->img_mcu_x = (s->img_x + z->img_mcu_w - 1) / z->img_mcu_w;
	z->img_mcu_y = (s->img_y + z->img_mcu_h - 1) / z->img_mcu_h;

	for (i = 0; i < s->img_n; ++i) {
		// number of effective pixels (e.g. for non-interleaved MCU)
		z->img_comp[i].x = (s->img_x * z->img_comp[i].h + h_max - 1) / h_max;
		z->img_comp[i].y = (s->img_y * z->img_comp[i].v + v_max - 1) / v_max;
		// to simplify generation, we'll allocate enough memory to decode
		// the bogus oversized data from using interleaved MCUs and their
		// big blocks (e.g. a 16x16 iMCU on an image of width 33); we won't
		// discard the extra data until colorspace conversion
		//
		// img_mcu_x, img_mcu_y: <=17 bits; comp[i].h and .v are <=4 (checked earlier)
		// so these muls can't overflow with 32-bit ints (which we require)
		z->img_comp[i].w2 = z->img_mcu_x * z->img_comp[i].h * 8;
		z->img_comp[i].h2 = z->img_mcu_y * z->img_comp[i].v * 8;
		z->img_comp[i].coeff = 0;
		z->img_comp[i].raw_coeff = 0;
		z->img_comp[i].linebuf = NULL;
		z->img_comp[i].raw_data = stbi__malloc_mad2(z->img_comp[i].w2, z->img_comp[i].h2, 15);
		if (z->img_comp[i].raw_data == NULL) return stbi__free_jpeg_components(z, i + 1, stbi__err("outofmem", "Out of memory"));
		// align blocks for idct using mmx/sse
		z->img_comp[i].data = (stbi_uc *)(((size_t)z->img_comp[i].raw_data + 15) & ~15);
		if (z->progressive) {
			// w2, h2 are multiples of 8 (see above)
			z->img_comp[i].coeff_w = z->img_comp[i].w2 / 8;
			z->img_comp[i].coeff_h = z->img_comp[i].h2 / 8;
			z->img_comp[i].raw_coeff = stbi__malloc_mad3(z->img_comp[i].w2, z->img_comp[i].h2, sizeof(short), 15);
			if (z->img_comp[i].raw_coeff == NULL) return stbi__free_jpeg_components(z, i + 1, stbi__err("outofmem", "Out of memory"));
			z->img_comp[i].coeff = (short *)(((size_t)z->img_comp[i].raw_coeff + 15) & ~15);
		}
	}

	return 1;
}

// use comparisons since in some cases we handle more than one case (e.g. SOF)
#define stbi__DNL(x) ((x) == 0xdc)
#define stbi__SOI(x) ((x) == 0xd8)
#define stbi__EOI(x) ((x) == 0xd9)
#define stbi__SOF(x) ((x) == 0xc0 || (x) == 0xc1 || (x) == 0xc2)
#define stbi__SOS(x) ((x) == 0xda)

#define stbi__SOF_progressive(x) ((x) == 0xc2)

static int stbi__decode_jpeg_header(stbi__jpeg *z, int scan) {
	int m;
	z->jfif = 0;
	z->app14_color_transform = -1; // valid values are 0,1,2
	z->marker = STBI__MARKER_none; // initialize cached marker to empty
	m = stbi__get_marker(z);
	if (!stbi__SOI(m)) return stbi__err("no SOI", "Corrupt JPEG");
	if (scan == STBI__SCAN_type) return 1;
	m = stbi__get_marker(z);
	while (!stbi__SOF(m)) {
		if (!stbi__process_marker(z, m)) return 0;
		m = stbi__get_marker(z);
		while (m == STBI__MARKER_none) {
			// some files have extra padding after their blocks, so ok, we'll scan
			if (stbi__at_eof(z->s)) return stbi__err("no SOF", "Corrupt JPEG");
			m = stbi__get_marker(z);
		}
	}
	z->progressive = stbi__SOF_progressive(m);
	if (!stbi__process_frame_header(z, scan)) return 0;
	return 1;
}

// decode image to YCbCr format
static int stbi__decode_jpeg_image(stbi__jpeg *j) {
	int m;
	for (m = 0; m < 4; m++) {
		j->img_comp[m].raw_data = NULL;
		j->img_comp[m].raw_coeff = NULL;
	}
	j->restart_interval = 0;
	if (!stbi__decode_jpeg_header(j, STBI__SCAN_load)) return 0;
	m = stbi__get_marker(j);
	while (!stbi__EOI(m)) {
		if (stbi__SOS(m)) {
			if (!stbi__process_scan_header(j)) return 0;
			if (!stbi__parse_entropy_coded_data(j)) return 0;
			if (j->marker == STBI__MARKER_none) {
				// handle 0s at the end of image data from IP Kamera 9060
				while (!stbi__at_eof(j->s)) {
					int x = stbi__get8(j->s);
					if (x == 255) {
						j->marker = stbi__get8(j->s);
						break;
					}
				}
				// if we reach eof without hitting a marker, stbi__get_marker() below will fail and we'll eventually return 0
			}
		} else if (stbi__DNL(m)) {
			int Ld = stbi__get16be(j->s);
			stbi__uint32 NL = stbi__get16be(j->s);
			if (Ld != 4) return stbi__err("bad DNL len", "Corrupt JPEG");
			if (NL != j->s->img_y) return stbi__err("bad DNL height", "Corrupt JPEG");
		} else {
			if (!stbi__process_marker(j, m)) return 0;
		}
		m = stbi__get_marker(j);
	}
	if (j->progressive)
		stbi__jpeg_finish(j);
	return 1;
}

// static jfif-centered resampling (across block boundaries)

typedef stbi_uc *(*resample_row_func)(stbi_uc *out, stbi_uc *in0, stbi_uc *in1, int w, int hs);

#define stbi__div4(x) ((stbi_uc)((x) >> 2))

static stbi_uc *resample_row_1(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs) {
	STBI_NOTUSED(out);
	STBI_NOTUSED(in_far);
	STBI_NOTUSED(w);
	STBI_NOTUSED(hs);
	return in_near;
}

static stbi_uc *stbi__resample_row_v_2(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs) {
	// need to generate two samples vertically for every one in input
	int i;
	STBI_NOTUSED(hs);
	for (i = 0; i < w; ++i)
		out[i] = stbi__div4(3 * in_near[i] + in_far[i] + 2);
	return out;
}

static stbi_uc *stbi__resample_row_h_2(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs) {
	// need to generate two samples horizontally for every one in input
	int i;
	stbi_uc *input = in_near;

	if (w == 1) {
		// if only one sample, can't do any interpolation
		out[0] = out[1] = input[0];
		return out;
	}

	out[0] = input[0];
	out[1] = stbi__div4(input[0] * 3 + input[1] + 2);
	for (i = 1; i < w - 1; ++i) {
		int n = 3 * input[i] + 2;
		out[i * 2 + 0] = stbi__div4(n + input[i - 1]);
		out[i * 2 + 1] = stbi__div4(n + input[i + 1]);
	}
	out[i * 2 + 0] = stbi__div4(input[w - 2] * 3 + input[w - 1] + 2);
	out[i * 2 + 1] = input[w - 1];

	STBI_NOTUSED(in_far);
	STBI_NOTUSED(hs);

	return out;
}

#define stbi__div16(x) ((stbi_uc)((x) >> 4))

static stbi_uc *stbi__resample_row_hv_2(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs) {
	// need to generate 2x2 samples for every one in input
	int i, t0, t1;
	if (w == 1) {
		out[0] = out[1] = stbi__div4(3 * in_near[0] + in_far[0] + 2);
		return out;
	}

	t1 = 3 * in_near[0] + in_far[0];
	out[0] = stbi__div4(t1 + 2);
	for (i = 1; i < w; ++i) {
		t0 = t1;
		t1 = 3 * in_near[i] + in_far[i];
		out[i * 2 - 1] = stbi__div16(3 * t0 + t1 + 8);
		out[i * 2] = stbi__div16(3 * t1 + t0 + 8);
	}
	out[w * 2 - 1] = stbi__div4(t1 + 2);

	STBI_NOTUSED(hs);

	return out;
}

#if defined(STBI_SSE2) || defined(STBI_NEON)
static stbi_uc *stbi__resample_row_hv_2_simd(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs) {
	// need to generate 2x2 samples for every one in input
	int i = 0, t0, t1;

	if (w == 1) {
		out[0] = out[1] = stbi__div4(3 * in_near[0] + in_far[0] + 2);
		return out;
	}

	t1 = 3 * in_near[0] + in_far[0];
	// process groups of 8 pixels for as long as we can.
	// note we can't handle the last pixel in a row in this loop
	// because we need to handle the filter boundary conditions.
	for (; i < ((w - 1) & ~7); i += 8) {
#if defined(STBI_SSE2)
		// load and perform the vertical filtering pass
		// this uses 3*x + y = 4*x + (y - x)
		__m128i zero = _mm_setzero_si128();
		__m128i farb = _mm_loadl_epi64((__m128i *)(in_far + i));
		__m128i nearb = _mm_loadl_epi64((__m128i *)(in_near + i));
		__m128i farw = _mm_unpacklo_epi8(farb, zero);
		__m128i nearw = _mm_unpacklo_epi8(nearb, zero);
		__m128i diff = _mm_sub_epi16(farw, nearw);
		__m128i nears = _mm_slli_epi16(nearw, 2);
		__m128i curr = _mm_add_epi16(nears, diff); // current row

		// horizontal filter works the same based on shifted vers of current
		// row. "prev" is current row shifted right by 1 pixel; we need to
		// insert the previous pixel value (from t1).
		// "next" is current row shifted left by 1 pixel, with first pixel
		// of next block of 8 pixels added in.
		__m128i prv0 = _mm_slli_si128(curr, 2);
		__m128i nxt0 = _mm_srli_si128(curr, 2);
		__m128i prev = _mm_insert_epi16(prv0, t1, 0);
		__m128i next = _mm_insert_epi16(nxt0, 3 * in_near[i + 8] + in_far[i + 8], 7);

		// horizontal filter, polyphase implementation since it's convenient:
		// even pixels = 3*cur + prev = cur*4 + (prev - cur)
		// odd  pixels = 3*cur + next = cur*4 + (next - cur)
		// note the shared term.
		__m128i bias = _mm_set1_epi16(8);
		__m128i curs = _mm_slli_epi16(curr, 2);
		__m128i prvd = _mm_sub_epi16(prev, curr);
		__m128i nxtd = _mm_sub_epi16(next, curr);
		__m128i curb = _mm_add_epi16(curs, bias);
		__m128i even = _mm_add_epi16(prvd, curb);
		__m128i odd = _mm_add_epi16(nxtd, curb);

		// interleave even and odd pixels, then undo scaling.
		__m128i int0 = _mm_unpacklo_epi16(even, odd);
		__m128i int1 = _mm_unpackhi_epi16(even, odd);
		__m128i de0 = _mm_srli_epi16(int0, 4);
		__m128i de1 = _mm_srli_epi16(int1, 4);

		// pack and write output
		__m128i outv = _mm_packus_epi16(de0, de1);
		_mm_storeu_si128((__m128i *)(out + i * 2), outv);
#elif defined(STBI_NEON)
		// load and perform the vertical filtering pass
		// this uses 3*x + y = 4*x + (y - x)
		uint8x8_t farb = vld1_u8(in_far + i);
		uint8x8_t nearb = vld1_u8(in_near + i);
		int16x8_t diff = vreinterpretq_s16_u16(vsubl_u8(farb, nearb));
		int16x8_t nears = vreinterpretq_s16_u16(vshll_n_u8(nearb, 2));
		int16x8_t curr = vaddq_s16(nears, diff); // current row

		// horizontal filter works the same based on shifted vers of current
		// row. "prev" is current row shifted right by 1 pixel; we need to
		// insert the previous pixel value (from t1).
		// "next" is current row shifted left by 1 pixel, with first pixel
		// of next block of 8 pixels added in.
		int16x8_t prv0 = vextq_s16(curr, curr, 7);
		int16x8_t nxt0 = vextq_s16(curr, curr, 1);
		int16x8_t prev = vsetq_lane_s16(t1, prv0, 0);
		int16x8_t next = vsetq_lane_s16(3 * in_near[i + 8] + in_far[i + 8], nxt0, 7);

		// horizontal filter, polyphase implementation since it's convenient:
		// even pixels = 3*cur + prev = cur*4 + (prev - cur)
		// odd  pixels = 3*cur + next = cur*4 + (next - cur)
		// note the shared term.
		int16x8_t curs = vshlq_n_s16(curr, 2);
		int16x8_t prvd = vsubq_s16(prev, curr);
		int16x8_t nxtd = vsubq_s16(next, curr);
		int16x8_t even = vaddq_s16(curs, prvd);
		int16x8_t odd = vaddq_s16(curs, nxtd);

		// undo scaling and round, then store with even/odd phases interleaved
		uint8x8x2_t o;
		o.val[0] = vqrshrun_n_s16(even, 4);
		o.val[1] = vqrshrun_n_s16(odd, 4);
		vst2_u8(out + i * 2, o);
#endif

		// "previous" value for next iter
		t1 = 3 * in_near[i + 7] + in_far[i + 7];
	}

	t0 = t1;
	t1 = 3 * in_near[i] + in_far[i];
	out[i * 2] = stbi__div16(3 * t1 + t0 + 8);

	for (++i; i < w; ++i) {
		t0 = t1;
		t1 = 3 * in_near[i] + in_far[i];
		out[i * 2 - 1] = stbi__div16(3 * t0 + t1 + 8);
		out[i * 2] = stbi__div16(3 * t1 + t0 + 8);
	}
	out[w * 2 - 1] = stbi__div4(t1 + 2);

	STBI_NOTUSED(hs);

	return out;
}
#endif

static stbi_uc *stbi__resample_row_generic(stbi_uc *out, stbi_uc *in_near, stbi_uc *in_far, int w, int hs) {
	// resample with nearest-neighbor
	int i, j;
	STBI_NOTUSED(in_far);
	for (i = 0; i < w; ++i)
		for (j = 0; j < hs; ++j)
			out[i * hs + j] = in_near[i];
	return out;
}

// this is a reduced-precision calculation of YCbCr-to-RGB introduced
// to make sure the code produces the same results in both SIMD and scalar
#define stbi__float2fixed(x) (((int)((x)*4096.0f + 0.5f)) << 8)
static void stbi__YCbCr_to_RGB_row(stbi_uc *out, const stbi_uc *y, const stbi_uc *pcb, const stbi_uc *pcr, int count, int step) {
	int i;
	for (i = 0; i < count; ++i) {
		int y_fixed = (y[i] << 20) + (1 << 19); // rounding
		int r, g, b;
		int cr = pcr[i] - 128;
		int cb = pcb[i] - 128;
		r = y_fixed + cr * stbi__float2fixed(1.40200f);
		g = y_fixed + (cr * -stbi__float2fixed(0.71414f)) + ((cb * -stbi__float2fixed(0.34414f)) & 0xffff0000);
		b = y_fixed + cb * stbi__float2fixed(1.77200f);
		r >>= 20;
		g >>= 20;
		b >>= 20;
		if ((unsigned)r > 255) {
			if (r < 0)
				r = 0;
			else
				r = 255;
		}
		if ((unsigned)g > 255) {
			if (g < 0)
				g = 0;
			else
				g = 255;
		}
		if ((unsigned)b > 255) {
			if (b < 0)
				b = 0;
			else
				b = 255;
		}
		out[0] = (stbi_uc)r;
		out[1] = (stbi_uc)g;
		out[2] = (stbi_uc)b;
		out[3] = 255;
		out += step;
	}
}

#if defined(STBI_SSE2) || defined(STBI_NEON)
static void stbi__YCbCr_to_RGB_simd(stbi_uc *out, stbi_uc const *y, stbi_uc const *pcb, stbi_uc const *pcr, int count, int step) {
	int i = 0;

#ifdef STBI_SSE2
	// step == 3 is pretty ugly on the final interleave, and i'm not convinced
	// it's useful in practice (you wouldn't use it for textures, for example).
	// so just accelerate step == 4 case.
	if (step == 4) {
		// this is a fairly straightforward implementation and not super-optimized.
		__m128i signflip = _mm_set1_epi8(-0x80);
		__m128i cr_const0 = _mm_set1_epi16((short)(1.40200f * 4096.0f + 0.5f));
		__m128i cr_const1 = _mm_set1_epi16(-(short)(0.71414f * 4096.0f + 0.5f));
		__m128i cb_const0 = _mm_set1_epi16(-(short)(0.34414f * 4096.0f + 0.5f));
		__m128i cb_const1 = _mm_set1_epi16((short)(1.77200f * 4096.0f + 0.5f));
		__m128i y_bias = _mm_set1_epi8((char)(unsigned char)128);
		__m128i xw = _mm_set1_epi16(255); // alpha channel

		for (; i + 7 < count; i += 8) {
			// load
			__m128i y_bytes = _mm_loadl_epi64((__m128i *)(y + i));
			__m128i cr_bytes = _mm_loadl_epi64((__m128i *)(pcr + i));
			__m128i cb_bytes = _mm_loadl_epi64((__m128i *)(pcb + i));
			__m128i cr_biased = _mm_xor_si128(cr_bytes, signflip); // -128
			__m128i cb_biased = _mm_xor_si128(cb_bytes, signflip); // -128

			// unpack to short (and left-shift cr, cb by 8)
			__m128i yw = _mm_unpacklo_epi8(y_bias, y_bytes);
			__m128i crw = _mm_unpacklo_epi8(_mm_setzero_si128(), cr_biased);
			__m128i cbw = _mm_unpacklo_epi8(_mm_setzero_si128(), cb_biased);

			// color transform
			__m128i yws = _mm_srli_epi16(yw, 4);
			__m128i cr0 = _mm_mulhi_epi16(cr_const0, crw);
			__m128i cb0 = _mm_mulhi_epi16(cb_const0, cbw);
			__m128i cb1 = _mm_mulhi_epi16(cbw, cb_const1);
			__m128i cr1 = _mm_mulhi_epi16(crw, cr_const1);
			__m128i rws = _mm_add_epi16(cr0, yws);
			__m128i gwt = _mm_add_epi16(cb0, yws);
			__m128i bws = _mm_add_epi16(yws, cb1);
			__m128i gws = _mm_add_epi16(gwt, cr1);

			// descale
			__m128i rw = _mm_srai_epi16(rws, 4);
			__m128i bw = _mm_srai_epi16(bws, 4);
			__m128i gw = _mm_srai_epi16(gws, 4);

			// back to byte, set up for transpose
			__m128i brb = _mm_packus_epi16(rw, bw);
			__m128i gxb = _mm_packus_epi16(gw, xw);

			// transpose to interleave channels
			__m128i t0 = _mm_unpacklo_epi8(brb, gxb);
			__m128i t1 = _mm_unpackhi_epi8(brb, gxb);
			__m128i o0 = _mm_unpacklo_epi16(t0, t1);
			__m128i o1 = _mm_unpackhi_epi16(t0, t1);

			// store
			_mm_storeu_si128((__m128i *)(out + 0), o0);
			_mm_storeu_si128((__m128i *)(out + 16), o1);
			out += 32;
		}
	}
#endif

#ifdef STBI_NEON
	// in this version, step=3 support would be easy to add. but is there demand?
	if (step == 4) {
		// this is a fairly straightforward implementation and not super-optimized.
		uint8x8_t signflip = vdup_n_u8(0x80);
		int16x8_t cr_const0 = vdupq_n_s16((short)(1.40200f * 4096.0f + 0.5f));
		int16x8_t cr_const1 = vdupq_n_s16(-(short)(0.71414f * 4096.0f + 0.5f));
		int16x8_t cb_const0 = vdupq_n_s16(-(short)(0.34414f * 4096.0f + 0.5f));
		int16x8_t cb_const1 = vdupq_n_s16((short)(1.77200f * 4096.0f + 0.5f));

		for (; i + 7 < count; i += 8) {
			// load
			uint8x8_t y_bytes = vld1_u8(y + i);
			uint8x8_t cr_bytes = vld1_u8(pcr + i);
			uint8x8_t cb_bytes = vld1_u8(pcb + i);
			int8x8_t cr_biased = vreinterpret_s8_u8(vsub_u8(cr_bytes, signflip));
			int8x8_t cb_biased = vreinterpret_s8_u8(vsub_u8(cb_bytes, signflip));

			// expand to s16
			int16x8_t yws = vreinterpretq_s16_u16(vshll_n_u8(y_bytes, 4));
			int16x8_t crw = vshll_n_s8(cr_biased, 7);
			int16x8_t cbw = vshll_n_s8(cb_biased, 7);

			// color transform
			int16x8_t cr0 = vqdmulhq_s16(crw, cr_const0);
			int16x8_t cb0 = vqdmulhq_s16(cbw, cb_const0);
			int16x8_t cr1 = vqdmulhq_s16(crw, cr_const1);
			int16x8_t cb1 = vqdmulhq_s16(cbw, cb_const1);
			int16x8_t rws = vaddq_s16(yws, cr0);
			int16x8_t gws = vaddq_s16(vaddq_s16(yws, cb0), cr1);
			int16x8_t bws = vaddq_s16(yws, cb1);

			// undo scaling, round, convert to byte
			uint8x8x4_t o;
			o.val[0] = vqrshrun_n_s16(rws, 4);
			o.val[1] = vqrshrun_n_s16(gws, 4);
			o.val[2] = vqrshrun_n_s16(bws, 4);
			o.val[3] = vdup_n_u8(255);

			// store, interleaving r/g/b/a
			vst4_u8(out, o);
			out += 8 * 4;
		}
	}
#endif

	for (; i < count; ++i) {
		int y_fixed = (y[i] << 20) + (1 << 19); // rounding
		int r, g, b;
		int cr = pcr[i] - 128;
		int cb = pcb[i] - 128;
		r = y_fixed + cr * stbi__float2fixed(1.40200f);
		g = y_fixed + cr * -stbi__float2fixed(0.71414f) + ((cb * -stbi__float2fixed(0.34414f)) & 0xffff0000);
		b = y_fixed + cb * stbi__float2fixed(1.77200f);
		r >>= 20;
		g >>= 20;
		b >>= 20;
		if ((unsigned)r > 255) {
			if (r < 0)
				r = 0;
			else
				r = 255;
		}
		if ((unsigned)g > 255) {
			if (g < 0)
				g = 0;
			else
				g = 255;
		}
		if ((unsigned)b > 255) {
			if (b < 0)
				b = 0;
			else
				b = 255;
		}
		out[0] = (stbi_uc)r;
		out[1] = (stbi_uc)g;
		out[2] = (stbi_uc)b;
		out[3] = 255;
		out += step;
	}
}
#endif

// set up the kernels
static void stbi__setup_jpeg(stbi__jpeg *j) {
	j->idct_block_kernel = stbi__idct_block;
	j->YCbCr_to_RGB_kernel = stbi__YCbCr_to_RGB_row;
	j->resample_row_hv_2_kernel = stbi__resample_row_hv_2;

#ifdef STBI_SSE2
	if (stbi__sse2_available()) {
		j->idct_block_kernel = stbi__idct_simd;
		j->YCbCr_to_RGB_kernel = stbi__YCbCr_to_RGB_simd;
		j->resample_row_hv_2_kernel = stbi__resample_row_hv_2_simd;
	}
#endif

#ifdef STBI_NEON
	j->idct_block_kernel = stbi__idct_simd;
	j->YCbCr_to_RGB_kernel = stbi__YCbCr_to_RGB_simd;
	j->resample_row_hv_2_kernel = stbi__resample_row_hv_2_simd;
#endif
}

// clean up the temporary component buffers
static void stbi__cleanup_jpeg(stbi__jpeg *j) {
	stbi__free_jpeg_components(j, j->s->img_n, 0);
}

typedef struct {
	resample_row_func resample;
	stbi_uc *line0, *line1;
	int hs, vs; // expansion factor in each axis
	int w_lores; // horizontal pixels pre-expansion
	int ystep; // how far through vertical expansion we are
	int ypos; // which pre-expansion row we're on
} stbi__resample;

// fast 0..255 * 0..255 => 0..255 rounded multiplication
static stbi_uc stbi__blinn_8x8(stbi_uc x, stbi_uc y) {
	unsigned int t = x * y + 128;
	return (stbi_uc)((t + (t >> 8)) >> 8);
}

static stbi_uc *load_jpeg_image(stbi__jpeg *z, int *out_x, int *out_y, int *comp, int req_comp) {
	int n, decode_n, is_rgb;
	z->s->img_n = 0; // make stbi__cleanup_jpeg safe

	// validate req_comp
	if (req_comp < 0 || req_comp > 4) return stbi__errpuc("bad req_comp", "Internal error");

	// load a jpeg image from whichever source, but leave in YCbCr format
	if (!stbi__decode_jpeg_image(z)) {
		stbi__cleanup_jpeg(z);
		return NULL;
	}

	// determine actual number of components to generate
	n = req_comp ? req_comp : z->s->img_n >= 3 ? 3 : 1;

	is_rgb = z->s->img_n == 3 && (z->rgb == 3 || (z->app14_color_transform == 0 && !z->jfif));

	if (z->s->img_n == 3 && n < 3 && !is_rgb)
		decode_n = 1;
	else
		decode_n = z->s->img_n;

	// resample and color-convert
	{
		int k;
		unsigned int i, j;
		stbi_uc *output;
		stbi_uc *coutput[4] = { NULL, NULL, NULL, NULL };

		stbi__resample res_comp[4];

		for (k = 0; k < decode_n; ++k) {
			stbi__resample *r = &res_comp[k];

			// allocate line buffer big enough for upsampling off the edges
			// with upsample factor of 4
			z->img_comp[k].linebuf = (stbi_uc *)stbi__malloc(z->s->img_x + 3);
			if (!z->img_comp[k].linebuf) {
				stbi__cleanup_jpeg(z);
				return stbi__errpuc("outofmem", "Out of memory");
			}

			r->hs = z->img_h_max / z->img_comp[k].h;
			r->vs = z->img_v_max / z->img_comp[k].v;
			r->ystep = r->vs >> 1;
			r->w_lores = (z->s->img_x + r->hs - 1) / r->hs;
			r->ypos = 0;
			r->line0 = r->line1 = z->img_comp[k].data;

			if (r->hs == 1 && r->vs == 1)
				r->resample = resample_row_1;
			else if (r->hs == 1 && r->vs == 2)
				r->resample = stbi__resample_row_v_2;
			else if (r->hs == 2 && r->vs == 1)
				r->resample = stbi__resample_row_h_2;
			else if (r->hs == 2 && r->vs == 2)
				r->resample = z->resample_row_hv_2_kernel;
			else
				r->resample = stbi__resample_row_generic;
		}

		// can't error after this so, this is safe
		output = (stbi_uc *)stbi__malloc_mad3(n, z->s->img_x, z->s->img_y, 1);
		if (!output) {
			stbi__cleanup_jpeg(z);
			return stbi__errpuc("outofmem", "Out of memory");
		}

		// now go ahead and resample
		for (j = 0; j < z->s->img_y; ++j) {
			stbi_uc *out = output + n * z->s->img_x * j;
			for (k = 0; k < decode_n; ++k) {
				stbi__resample *r = &res_comp[k];
				int y_bot = r->ystep >= (r->vs >> 1);
				coutput[k] = r->resample(z->img_comp[k].linebuf, y_bot ? r->line1 : r->line0, y_bot ? r->line0 : r->line1, r->w_lores, r->hs);
				if (++r->ystep >= r->vs) {
					r->ystep = 0;
					r->line0 = r->line1;
					if (++r->ypos < z->img_comp[k].y)
						r->line1 += z->img_comp[k].w2;
				}
			}
			if (n >= 3) {
				stbi_uc *y = coutput[0];
				if (z->s->img_n == 3) {
					if (is_rgb) {
						for (i = 0; i < z->s->img_x; ++i) {
							out[0] = y[i];
							out[1] = coutput[1][i];
							out[2] = coutput[2][i];
							out[3] = 255;
							out += n;
						}
					} else {
						z->YCbCr_to_RGB_kernel(out, y, coutput[1], coutput[2], z->s->img_x, n);
					}
				} else if (z->s->img_n == 4) {
					if (z->app14_color_transform == 0) { // CMYK
						for (i = 0; i < z->s->img_x; ++i) {
							stbi_uc m = coutput[3][i];
							out[0] = stbi__blinn_8x8(coutput[0][i], m);
							out[1] = stbi__blinn_8x8(coutput[1][i], m);
							out[2] = stbi__blinn_8x8(coutput[2][i], m);
							out[3] = 255;
							out += n;
						}
					} else if (z->app14_color_transform == 2) { // YCCK
						z->YCbCr_to_RGB_kernel(out, y, coutput[1], coutput[2], z->s->img_x, n);
						for (i = 0; i < z->s->img_x; ++i) {
							stbi_uc m = coutput[3][i];
							out[0] = stbi__blinn_8x8(255 - out[0], m);
							out[1] = stbi__blinn_8x8(255 - out[1], m);
							out[2] = stbi__blinn_8x8(255 - out[2], m);
							out += n;
						}
					} else { // YCbCr + alpha?  Ignore the fourth channel for now
						z->YCbCr_to_RGB_kernel(out, y, coutput[1], coutput[2], z->s->img_x, n);
					}
				} else
					for (i = 0; i < z->s->img_x; ++i) {
						out[0] = out[1] = out[2] = y[i];
						out[3] = 255; // not used if n==3
						out += n;
					}
			} else {
				if (is_rgb) {
					if (n == 1)
						for (i = 0; i < z->s->img_x; ++i)
							*out++ = stbi__compute_y(coutput[0][i], coutput[1][i], coutput[2][i]);
					else {
						for (i = 0; i < z->s->img_x; ++i, out += 2) {
							out[0] = stbi__compute_y(coutput[0][i], coutput[1][i], coutput[2][i]);
							out[1] = 255;
						}
					}
				} else if (z->s->img_n == 4 && z->app14_color_transform == 0) {
					for (i = 0; i < z->s->img_x; ++i) {
						stbi_uc m = coutput[3][i];
						stbi_uc r = stbi__blinn_8x8(coutput[0][i], m);
						stbi_uc g = stbi__blinn_8x8(coutput[1][i], m);
						stbi_uc b = stbi__blinn_8x8(coutput[2][i], m);
						out[0] = stbi__compute_y(r, g, b);
						out[1] = 255;
						out += n;
					}
				} else if (z->s->img_n == 4 && z->app14_color_transform == 2) {
					for (i = 0; i < z->s->img_x; ++i) {
						out[0] = stbi__blinn_8x8(255 - coutput[0][i], coutput[3][i]);
						out[1] = 255;
						out += n;
					}
				} else {
					stbi_uc *y = coutput[0];
					if (n == 1)
						for (i = 0; i < z->s->img_x; ++i)
							out[i] = y[i];
					else
						for (i = 0; i < z->s->img_x; ++i) {
							*out++ = y[i];
							*out++ = 255;
						}
				}
			}
		}
		stbi__cleanup_jpeg(z);
		*out_x = z->s->img_x;
		*out_y = z->s->img_y;
		if (comp) *comp = z->s->img_n >= 3 ? 3 : 1; // report original components, not output
		return output;
	}
}

static void *stbi__jpeg_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri) {
	unsigned char *result;
	stbi__jpeg *j = (stbi__jpeg *)stbi__malloc(sizeof(stbi__jpeg));
	STBI_NOTUSED(ri);
	j->s = s;
	stbi__setup_jpeg(j);
	result = load_jpeg_image(j, x, y, comp, req_comp);
	STBI_FREE(j);
	return result;
}

static int stbi__jpeg_test(stbi__context *s) {
	int r;
	stbi__jpeg *j = (stbi__jpeg *)stbi__malloc(sizeof(stbi__jpeg));
	j->s = s;
	stbi__setup_jpeg(j);
	r = stbi__decode_jpeg_header(j, STBI__SCAN_type);
	stbi__rewind(s);
	STBI_FREE(j);
	return r;
}

static int stbi__jpeg_info_raw(stbi__jpeg *j, int *x, int *y, int *comp) {
	if (!stbi__decode_jpeg_header(j, STBI__SCAN_header)) {
		stbi__rewind(j->s);
		return 0;
	}
	if (x) *x = j->s->img_x;
	if (y) *y = j->s->img_y;
	if (comp) *comp = j->s->img_n >= 3 ? 3 : 1;
	return 1;
}

static int stbi__jpeg_info(stbi__context *s, int *x, int *y, int *comp) {
	int result;
	stbi__jpeg *j = (stbi__jpeg *)(stbi__malloc(sizeof(stbi__jpeg)));
	j->s = s;
	result = stbi__jpeg_info_raw(j, x, y, comp);
	STBI_FREE(j);
	return result;
}
#endif

// public domain zlib decode    v0.2  Sean Barrett 2006-11-18
//    simple implementation
//      - all input must be provided in an upfront buffer
//      - all output is written to a single output buffer (can malloc/realloc)
//    performance
//      - fast huffman

#ifndef STBI_NO_ZLIB

// fast-way is faster to check than jpeg huffman, but slow way is slower
#define STBI__ZFAST_BITS 9 // accelerate all cases in default tables
#define STBI__ZFAST_MASK ((1 << STBI__ZFAST_BITS) - 1)

// zlib-style huffman encoding
// (jpegs packs from left, zlib from right, so can't share code)
typedef struct {
	stbi__uint16 fast[1 << STBI__ZFAST_BITS];
	stbi__uint16 firstcode[16];
	int maxcode[17];
	stbi__uint16 firstsymbol[16];
	stbi_uc size[288];
	stbi__uint16 value[288];
} stbi__zhuffman;

stbi_inline static int stbi__bitreverse16(int n) {
	n = ((n & 0xAAAA) >> 1) | ((n & 0x5555) << 1);
	n = ((n & 0xCCCC) >> 2) | ((n & 0x3333) << 2);
	n = ((n & 0xF0F0) >> 4) | ((n & 0x0F0F) << 4);
	n = ((n & 0xFF00) >> 8) | ((n & 0x00FF) << 8);
	return n;
}

stbi_inline static int stbi__bit_reverse(int v, int bits) {
	STBI_ASSERT(bits <= 16);
	// to bit reverse n bits, reverse 16 and shift
	// e.g. 11 bits, bit reverse and shift away 5
	return stbi__bitreverse16(v) >> (16 - bits);
}

static int stbi__zbuild_huffman(stbi__zhuffman *z, const stbi_uc *sizelist, int num) {
	int i, k = 0;
	int code, next_code[16], sizes[17];

	// DEFLATE spec for generating codes
	memset(sizes, 0, sizeof(sizes));
	memset(z->fast, 0, sizeof(z->fast));
	for (i = 0; i < num; ++i)
		++sizes[sizelist[i]];
	sizes[0] = 0;
	for (i = 1; i < 16; ++i)
		if (sizes[i] > (1 << i))
			return stbi__err("bad sizes", "Corrupt PNG");
	code = 0;
	for (i = 1; i < 16; ++i) {
		next_code[i] = code;
		z->firstcode[i] = (stbi__uint16)code;
		z->firstsymbol[i] = (stbi__uint16)k;
		code = (code + sizes[i]);
		if (sizes[i])
			if (code - 1 >= (1 << i)) return stbi__err("bad codelengths", "Corrupt PNG");
		z->maxcode[i] = code << (16 - i); // preshift for inner loop
		code <<= 1;
		k += sizes[i];
	}
	z->maxcode[16] = 0x10000; // sentinel
	for (i = 0; i < num; ++i) {
		int s = sizelist[i];
		if (s) {
			int c = next_code[s] - z->firstcode[s] + z->firstsymbol[s];
			stbi__uint16 fastv = (stbi__uint16)((s << 9) | i);
			z->size[c] = (stbi_uc)s;
			z->value[c] = (stbi__uint16)i;
			if (s <= STBI__ZFAST_BITS) {
				int j = stbi__bit_reverse(next_code[s], s);
				while (j < (1 << STBI__ZFAST_BITS)) {
					z->fast[j] = fastv;
					j += (1 << s);
				}
			}
			++next_code[s];
		}
	}
	return 1;
}

// zlib-from-memory implementation for PNG reading
//    because PNG allows splitting the zlib stream arbitrarily,
//    and it's annoying structurally to have PNG call ZLIB call PNG,
//    we require PNG read all the IDATs and combine them into a single
//    memory buffer

typedef struct {
	stbi_uc *zbuffer, *zbuffer_end;
	int num_bits;
	stbi__uint32 code_buffer;

	char *zout;
	char *zout_start;
	char *zout_end;
	int z_expandable;

	stbi__zhuffman z_length, z_distance;
} stbi__zbuf;

stbi_inline static stbi_uc stbi__zget8(stbi__zbuf *z) {
	if (z->zbuffer >= z->zbuffer_end) return 0;
	return *z->zbuffer++;
}

static void stbi__fill_bits(stbi__zbuf *z) {
	do {
		STBI_ASSERT(z->code_buffer < (1U << z->num_bits));
		z->code_buffer |= (unsigned int)stbi__zget8(z) << z->num_bits;
		z->num_bits += 8;
	} while (z->num_bits <= 24);
}

stbi_inline static unsigned int stbi__zreceive(stbi__zbuf *z, int n) {
	unsigned int k;
	if (z->num_bits < n) stbi__fill_bits(z);
	k = z->code_buffer & ((1 << n) - 1);
	z->code_buffer >>= n;
	z->num_bits -= n;
	return k;
}

static int stbi__zhuffman_decode_slowpath(stbi__zbuf *a, stbi__zhuffman *z) {
	int b, s, k;
	// not resolved by fast table, so compute it the slow way
	// use jpeg approach, which requires MSbits at top
	k = stbi__bit_reverse(a->code_buffer, 16);
	for (s = STBI__ZFAST_BITS + 1;; ++s)
		if (k < z->maxcode[s])
			break;
	if (s == 16) return -1; // invalid code!
	// code size is s, so:
	b = (k >> (16 - s)) - z->firstcode[s] + z->firstsymbol[s];
	STBI_ASSERT(z->size[b] == s);
	a->code_buffer >>= s;
	a->num_bits -= s;
	return z->value[b];
}

stbi_inline static int stbi__zhuffman_decode(stbi__zbuf *a, stbi__zhuffman *z) {
	int b, s;
	if (a->num_bits < 16) stbi__fill_bits(a);
	b = z->fast[a->code_buffer & STBI__ZFAST_MASK];
	if (b) {
		s = b >> 9;
		a->code_buffer >>= s;
		a->num_bits -= s;
		return b & 511;
	}
	return stbi__zhuffman_decode_slowpath(a, z);
}

static int stbi__zexpand(stbi__zbuf *z, char *zout, int n) // need to make room for n bytes
{
	char *q;
	int cur, limit, old_limit;
	z->zout = zout;
	if (!z->z_expandable) return stbi__err("output buffer limit", "Corrupt PNG");
	cur = (int)(z->zout - z->zout_start);
	limit = old_limit = (int)(z->zout_end - z->zout_start);
	while (cur + n > limit)
		limit *= 2;
	q = (char *)STBI_REALLOC_SIZED(z->zout_start, old_limit, limit);
	STBI_NOTUSED(old_limit);
	if (q == NULL) return stbi__err("outofmem", "Out of memory");
	z->zout_start = q;
	z->zout = q + cur;
	z->zout_end = q +
limit; return 1; } static const int stbi__zlength_base[31] = { 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 23, 27, 31, 35, 43, 51, 59, 67, 83, 99, 115, 131, 163, 195, 227, 258, 0, 0 }; static const int stbi__zlength_extra[31] = { 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 0, 0, 0 }; static const int stbi__zdist_base[32] = { 1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97, 129, 193, 257, 385, 513, 769, 1025, 1537, 2049, 3073, 4097, 6145, 8193, 12289, 16385, 24577, 0, 0 }; static const int stbi__zdist_extra[32] = { 0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13 }; static int stbi__parse_huffman_block(stbi__zbuf *a) { char *zout = a->zout; for (;;) { int z = stbi__zhuffman_decode(a, &a->z_length); if (z < 256) { if (z < 0) return stbi__err("bad huffman code", "Corrupt PNG"); // error in huffman codes if (zout >= a->zout_end) { if (!stbi__zexpand(a, zout, 1)) return 0; zout = a->zout; } *zout++ = (char)z; } else { stbi_uc *p; int len, dist; if (z == 256) { a->zout = zout; return 1; } z -= 257; len = stbi__zlength_base[z]; if (stbi__zlength_extra[z]) len += stbi__zreceive(a, stbi__zlength_extra[z]); z = stbi__zhuffman_decode(a, &a->z_distance); if (z < 0) return stbi__err("bad huffman code", "Corrupt PNG"); dist = stbi__zdist_base[z]; if (stbi__zdist_extra[z]) dist += stbi__zreceive(a, stbi__zdist_extra[z]); if (zout - a->zout_start < dist) return stbi__err("bad dist", "Corrupt PNG"); if (zout + len > a->zout_end) { if (!stbi__zexpand(a, zout, len)) return 0; zout = a->zout; } p = (stbi_uc *)(zout - dist); if (dist == 1) { // run of one byte; common in images. 
stbi_uc v = *p; if (len) { do *zout++ = v; while (--len); } } else { if (len) { do *zout++ = *p++; while (--len); } } } } } static int stbi__compute_huffman_codes(stbi__zbuf *a) { static const stbi_uc length_dezigzag[19] = { 16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15 }; stbi__zhuffman z_codelength; stbi_uc lencodes[286 + 32 + 137]; //padding for maximum single op stbi_uc codelength_sizes[19]; int i, n; int hlit = stbi__zreceive(a, 5) + 257; int hdist = stbi__zreceive(a, 5) + 1; int hclen = stbi__zreceive(a, 4) + 4; int ntot = hlit + hdist; memset(codelength_sizes, 0, sizeof(codelength_sizes)); for (i = 0; i < hclen; ++i) { int s = stbi__zreceive(a, 3); codelength_sizes[length_dezigzag[i]] = (stbi_uc)s; } if (!stbi__zbuild_huffman(&z_codelength, codelength_sizes, 19)) return 0; n = 0; while (n < ntot) { int c = stbi__zhuffman_decode(a, &z_codelength); if (c < 0 || c >= 19) return stbi__err("bad codelengths", "Corrupt PNG"); if (c < 16) lencodes[n++] = (stbi_uc)c; else { stbi_uc fill = 0; if (c == 16) { c = stbi__zreceive(a, 2) + 3; if (n == 0) return stbi__err("bad codelengths", "Corrupt PNG"); fill = lencodes[n - 1]; } else if (c == 17) c = stbi__zreceive(a, 3) + 3; else { STBI_ASSERT(c == 18); c = stbi__zreceive(a, 7) + 11; } if (ntot - n < c) return stbi__err("bad codelengths", "Corrupt PNG"); memset(lencodes + n, fill, c); n += c; } } if (n != ntot) return stbi__err("bad codelengths", "Corrupt PNG"); if (!stbi__zbuild_huffman(&a->z_length, lencodes, hlit)) return 0; if (!stbi__zbuild_huffman(&a->z_distance, lencodes + hlit, hdist)) return 0; return 1; } static int stbi__parse_uncompressed_block(stbi__zbuf *a) { stbi_uc header[4]; int len, nlen, k; if (a->num_bits & 7) stbi__zreceive(a, a->num_bits & 7); // discard // drain the bit-packed data into header k = 0; while (a->num_bits > 0) { header[k++] = (stbi_uc)(a->code_buffer & 255); // suppress MSVC run-time check a->code_buffer >>= 8; a->num_bits -= 8; } STBI_ASSERT(a->num_bits == 0); // 
now fill header the normal way while (k < 4) header[k++] = stbi__zget8(a); len = header[1] * 256 + header[0]; nlen = header[3] * 256 + header[2]; if (nlen != (len ^ 0xffff)) return stbi__err("zlib corrupt", "Corrupt PNG"); if (a->zbuffer + len > a->zbuffer_end) return stbi__err("read past buffer", "Corrupt PNG"); if (a->zout + len > a->zout_end) if (!stbi__zexpand(a, a->zout, len)) return 0; memcpy(a->zout, a->zbuffer, len); a->zbuffer += len; a->zout += len; return 1; } static int stbi__parse_zlib_header(stbi__zbuf *a) { int cmf = stbi__zget8(a); int cm = cmf & 15; /* int cinfo = cmf >> 4; */ int flg = stbi__zget8(a); if ((cmf * 256 + flg) % 31 != 0) return stbi__err("bad zlib header", "Corrupt PNG"); // zlib spec if (flg & 32) return stbi__err("no preset dict", "Corrupt PNG"); // preset dictionary not allowed in png if (cm != 8) return stbi__err("bad compression", "Corrupt PNG"); // DEFLATE required for png // window = 1 << (8 + cinfo)... but who cares, we fully buffer output return 1; } static const stbi_uc stbi__zdefault_length[288] = { 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8 }; static const stbi_uc stbi__zdefault_distance[32] = { 5, 5, 5, 5, 5, 5, 5, 5, 
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5 }; /* Init algorithm: { int i; // use <= to match clearly with spec for (i=0; i <= 143; ++i) stbi__zdefault_length[i] = 8; for ( ; i <= 255; ++i) stbi__zdefault_length[i] = 9; for ( ; i <= 279; ++i) stbi__zdefault_length[i] = 7; for ( ; i <= 287; ++i) stbi__zdefault_length[i] = 8; for (i=0; i <= 31; ++i) stbi__zdefault_distance[i] = 5; } */ static int stbi__parse_zlib(stbi__zbuf *a, int parse_header) { int final, type; if (parse_header) if (!stbi__parse_zlib_header(a)) return 0; a->num_bits = 0; a->code_buffer = 0; do { final = stbi__zreceive(a, 1); type = stbi__zreceive(a, 2); if (type == 0) { if (!stbi__parse_uncompressed_block(a)) return 0; } else if (type == 3) { return 0; } else { if (type == 1) { // use fixed code lengths if (!stbi__zbuild_huffman(&a->z_length, stbi__zdefault_length, 288)) return 0; if (!stbi__zbuild_huffman(&a->z_distance, stbi__zdefault_distance, 32)) return 0; } else { if (!stbi__compute_huffman_codes(a)) return 0; } if (!stbi__parse_huffman_block(a)) return 0; } } while (!final); return 1; } static int stbi__do_zlib(stbi__zbuf *a, char *obuf, int olen, int exp, int parse_header) { a->zout_start = obuf; a->zout = obuf; a->zout_end = obuf + olen; a->z_expandable = exp; return stbi__parse_zlib(a, parse_header); } STBIDEF char *stbi_zlib_decode_malloc_guesssize(const char *buffer, int len, int initial_size, int *outlen) { stbi__zbuf a; char *p = (char *)stbi__malloc(initial_size); if (p == NULL) return NULL; a.zbuffer = (stbi_uc *)buffer; a.zbuffer_end = (stbi_uc *)buffer + len; if (stbi__do_zlib(&a, p, initial_size, 1, 1)) { if (outlen) *outlen = (int)(a.zout - a.zout_start); return a.zout_start; } else { STBI_FREE(a.zout_start); return NULL; } } STBIDEF char *stbi_zlib_decode_malloc(char const *buffer, int len, int *outlen) { return stbi_zlib_decode_malloc_guesssize(buffer, len, 16384, outlen); } STBIDEF char *stbi_zlib_decode_malloc_guesssize_headerflag(const char 
*buffer, int len, int initial_size, int *outlen, int parse_header) { stbi__zbuf a; char *p = (char *)stbi__malloc(initial_size); if (p == NULL) return NULL; a.zbuffer = (stbi_uc *)buffer; a.zbuffer_end = (stbi_uc *)buffer + len; if (stbi__do_zlib(&a, p, initial_size, 1, parse_header)) { if (outlen) *outlen = (int)(a.zout - a.zout_start); return a.zout_start; } else { STBI_FREE(a.zout_start); return NULL; } } STBIDEF int stbi_zlib_decode_buffer(char *obuffer, int olen, char const *ibuffer, int ilen) { stbi__zbuf a; a.zbuffer = (stbi_uc *)ibuffer; a.zbuffer_end = (stbi_uc *)ibuffer + ilen; if (stbi__do_zlib(&a, obuffer, olen, 0, 1)) return (int)(a.zout - a.zout_start); else return -1; } STBIDEF char *stbi_zlib_decode_noheader_malloc(char const *buffer, int len, int *outlen) { stbi__zbuf a; char *p = (char *)stbi__malloc(16384); if (p == NULL) return NULL; a.zbuffer = (stbi_uc *)buffer; a.zbuffer_end = (stbi_uc *)buffer + len; if (stbi__do_zlib(&a, p, 16384, 1, 0)) { if (outlen) *outlen = (int)(a.zout - a.zout_start); return a.zout_start; } else { STBI_FREE(a.zout_start); return NULL; } } STBIDEF int stbi_zlib_decode_noheader_buffer(char *obuffer, int olen, const char *ibuffer, int ilen) { stbi__zbuf a; a.zbuffer = (stbi_uc *)ibuffer; a.zbuffer_end = (stbi_uc *)ibuffer + ilen; if (stbi__do_zlib(&a, obuffer, olen, 0, 0)) return (int)(a.zout - a.zout_start); else return -1; } #endif // public domain "baseline" PNG decoder v0.10 Sean Barrett 2006-11-18 // simple implementation // - only 8-bit samples // - no CRC checking // - allocates lots of intermediate memory // - avoids problem of streaming data between subsystems // - avoids explicit window management // performance // - uses stb_zlib, a PD zlib implementation with fast huffman decoding #ifndef STBI_NO_PNG typedef struct { stbi__uint32 length; stbi__uint32 type; } stbi__pngchunk; static stbi__pngchunk stbi__get_chunk_header(stbi__context *s) { stbi__pngchunk c; c.length = stbi__get32be(s); c.type = 
stbi__get32be(s);
	return c;
}

static int stbi__check_png_header(stbi__context *s) {
	static const stbi_uc png_sig[8] = { 137, 80, 78, 71, 13, 10, 26, 10 };
	int i;
	for (i = 0; i < 8; ++i)
		if (stbi__get8(s) != png_sig[i]) return stbi__err("bad png sig", "Not a PNG");
	return 1;
}

typedef struct {
	stbi__context *s;
	stbi_uc *idata, *expanded, *out;
	int depth;
} stbi__png;

enum {
	STBI__F_none = 0,
	STBI__F_sub = 1,
	STBI__F_up = 2,
	STBI__F_avg = 3,
	STBI__F_paeth = 4,
	// synthetic filters used for first scanline to avoid needing a dummy row of 0s
	STBI__F_avg_first,
	STBI__F_paeth_first
};

static stbi_uc first_row_filter[5] = { STBI__F_none, STBI__F_sub, STBI__F_none, STBI__F_avg_first, STBI__F_paeth_first };

static int stbi__paeth(int a, int b, int c) {
	int p = a + b - c;
	int pa = abs(p - a);
	int pb = abs(p - b);
	int pc = abs(p - c);
	if (pa <= pb && pa <= pc) return a;
	if (pb <= pc) return b;
	return c;
}

static const stbi_uc stbi__depth_scale_table[9] = { 0, 0xff, 0x55, 0, 0x11, 0, 0, 0, 0x01 };

// create the png data from post-deflated data
static int stbi__create_png_image_raw(stbi__png *a, stbi_uc *raw, stbi__uint32 raw_len, int out_n, stbi__uint32 x, stbi__uint32 y, int depth, int color) {
	int bytes = (depth == 16 ?
2 : 1); stbi__context *s = a->s; stbi__uint32 i, j, stride = x * out_n * bytes; stbi__uint32 img_len, img_width_bytes; int k; int img_n = s->img_n; // copy it into a local for later int output_bytes = out_n * bytes; int filter_bytes = img_n * bytes; int width = x; STBI_ASSERT(out_n == s->img_n || out_n == s->img_n + 1); a->out = (stbi_uc *)stbi__malloc_mad3(x, y, output_bytes, 0); // extra bytes to write off the end into if (!a->out) return stbi__err("outofmem", "Out of memory"); if (!stbi__mad3sizes_valid(img_n, x, depth, 7)) return stbi__err("too large", "Corrupt PNG"); img_width_bytes = (((img_n * x * depth) + 7) >> 3); img_len = (img_width_bytes + 1) * y; // we used to check for exact match between raw_len and img_len on non-interlaced PNGs, // but issue #276 reported a PNG in the wild that had extra data at the end (all zeros), // so just check for raw_len < img_len always. if (raw_len < img_len) return stbi__err("not enough pixels", "Corrupt PNG"); for (j = 0; j < y; ++j) { stbi_uc *cur = a->out + stride * j; stbi_uc *prior; int filter = *raw++; if (filter > 4) return stbi__err("invalid filter", "Corrupt PNG"); if (depth < 8) { STBI_ASSERT(img_width_bytes <= x); cur += x * out_n - img_width_bytes; // store output to the rightmost img_len bytes, so we can decode in place filter_bytes = 1; width = img_width_bytes; } prior = cur - stride; // bugfix: need to compute this after 'cur +=' computation above // if first row, use special filter that doesn't sample previous row if (j == 0) filter = first_row_filter[filter]; // handle first byte explicitly for (k = 0; k < filter_bytes; ++k) { switch (filter) { case STBI__F_none: cur[k] = raw[k]; break; case STBI__F_sub: cur[k] = raw[k]; break; case STBI__F_up: cur[k] = STBI__BYTECAST(raw[k] + prior[k]); break; case STBI__F_avg: cur[k] = STBI__BYTECAST(raw[k] + (prior[k] >> 1)); break; case STBI__F_paeth: cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(0, prior[k], 0)); break; case STBI__F_avg_first: cur[k] = raw[k]; break; 
case STBI__F_paeth_first: cur[k] = raw[k]; break; } } if (depth == 8) { if (img_n != out_n) cur[img_n] = 255; // first pixel raw += img_n; cur += out_n; prior += out_n; } else if (depth == 16) { if (img_n != out_n) { cur[filter_bytes] = 255; // first pixel top byte cur[filter_bytes + 1] = 255; // first pixel bottom byte } raw += filter_bytes; cur += output_bytes; prior += output_bytes; } else { raw += 1; cur += 1; prior += 1; } // this is a little gross, so that we don't switch per-pixel or per-component if (depth < 8 || img_n == out_n) { int nk = (width - 1) * filter_bytes; #define STBI__CASE(f) \ case f: \ for (k = 0; k < nk; ++k) switch (filter) { // "none" filter turns into a memcpy here; make that explicit. case STBI__F_none: memcpy(cur, raw, nk); break; STBI__CASE(STBI__F_sub) { cur[k] = STBI__BYTECAST(raw[k] + cur[k - filter_bytes]); } break; STBI__CASE(STBI__F_up) { cur[k] = STBI__BYTECAST(raw[k] + prior[k]); } break; STBI__CASE(STBI__F_avg) { cur[k] = STBI__BYTECAST(raw[k] + ((prior[k] + cur[k - filter_bytes]) >> 1)); } break; STBI__CASE(STBI__F_paeth) { cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(cur[k - filter_bytes], prior[k], prior[k - filter_bytes])); } break; STBI__CASE(STBI__F_avg_first) { cur[k] = STBI__BYTECAST(raw[k] + (cur[k - filter_bytes] >> 1)); } break; STBI__CASE(STBI__F_paeth_first) { cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(cur[k - filter_bytes], 0, 0)); } break; } #undef STBI__CASE raw += nk; } else { STBI_ASSERT(img_n + 1 == out_n); #define STBI__CASE(f) \ case f: \ for (i = x - 1; i >= 1; --i, cur[filter_bytes] = 255, raw += filter_bytes, cur += output_bytes, prior += output_bytes) \ for (k = 0; k < filter_bytes; ++k) switch (filter) { STBI__CASE(STBI__F_none) { cur[k] = raw[k]; } break; STBI__CASE(STBI__F_sub) { cur[k] = STBI__BYTECAST(raw[k] + cur[k - output_bytes]); } break; STBI__CASE(STBI__F_up) { cur[k] = STBI__BYTECAST(raw[k] + prior[k]); } break; STBI__CASE(STBI__F_avg) { cur[k] = STBI__BYTECAST(raw[k] + ((prior[k] + cur[k 
- output_bytes) >> 1)); } break;
				STBI__CASE(STBI__F_paeth) { cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(cur[k - output_bytes], prior[k], prior[k - output_bytes])); } break;
				STBI__CASE(STBI__F_avg_first) { cur[k] = STBI__BYTECAST(raw[k] + (cur[k - output_bytes] >> 1)); } break;
				STBI__CASE(STBI__F_paeth_first) { cur[k] = STBI__BYTECAST(raw[k] + stbi__paeth(cur[k - output_bytes], 0, 0)); } break;
			}
#undef STBI__CASE

			// the loop above sets the high byte of the pixels' alpha, but for
			// 16 bit png files we also need the low byte set. we'll do that here.
			if (depth == 16) {
				cur = a->out + stride * j; // start at the beginning of the row again
				for (i = 0; i < x; ++i, cur += output_bytes) {
					cur[filter_bytes + 1] = 255;
				}
			}
		}
	}

	// we make a separate pass to expand bits to pixels; for performance,
	// this could run two scanlines behind the above code, so it won't
	// interfere with filtering but will still be in the cache.
	if (depth < 8) {
		for (j = 0; j < y; ++j) {
			stbi_uc *cur = a->out + stride * j;
			stbi_uc *in = a->out + stride * j + x * out_n - img_width_bytes;
			// unpack 1/2/4-bit into an 8-bit buffer. allows us to keep the common 8-bit path optimal at minimal cost for 1/2/4-bit
			// png guarantees byte alignment; if width is not a multiple of 8/4/2 we'll decode dummy trailing data that will be skipped in the later loop
			stbi_uc scale = (color == 0) ? stbi__depth_scale_table[depth] : 1; // scale grayscale values to 0..255 range
			// note that the final byte might overshoot and write more data than desired.
			// we can allocate enough data that this never writes out of memory, but it
			// could also overwrite the next scanline. can it overwrite non-empty data
			// on the next scanline? yes, consider 1-pixel-wide scanlines with 1-bit-per-pixel.
// so we need to explicitly clamp the final ones if (depth == 4) { for (k = x * img_n; k >= 2; k -= 2, ++in) { *cur++ = scale * ((*in >> 4)); *cur++ = scale * ((*in) & 0x0f); } if (k > 0) *cur++ = scale * ((*in >> 4)); } else if (depth == 2) { for (k = x * img_n; k >= 4; k -= 4, ++in) { *cur++ = scale * ((*in >> 6)); *cur++ = scale * ((*in >> 4) & 0x03); *cur++ = scale * ((*in >> 2) & 0x03); *cur++ = scale * ((*in) & 0x03); } if (k > 0) *cur++ = scale * ((*in >> 6)); if (k > 1) *cur++ = scale * ((*in >> 4) & 0x03); if (k > 2) *cur++ = scale * ((*in >> 2) & 0x03); } else if (depth == 1) { for (k = x * img_n; k >= 8; k -= 8, ++in) { *cur++ = scale * ((*in >> 7)); *cur++ = scale * ((*in >> 6) & 0x01); *cur++ = scale * ((*in >> 5) & 0x01); *cur++ = scale * ((*in >> 4) & 0x01); *cur++ = scale * ((*in >> 3) & 0x01); *cur++ = scale * ((*in >> 2) & 0x01); *cur++ = scale * ((*in >> 1) & 0x01); *cur++ = scale * ((*in) & 0x01); } if (k > 0) *cur++ = scale * ((*in >> 7)); if (k > 1) *cur++ = scale * ((*in >> 6) & 0x01); if (k > 2) *cur++ = scale * ((*in >> 5) & 0x01); if (k > 3) *cur++ = scale * ((*in >> 4) & 0x01); if (k > 4) *cur++ = scale * ((*in >> 3) & 0x01); if (k > 5) *cur++ = scale * ((*in >> 2) & 0x01); if (k > 6) *cur++ = scale * ((*in >> 1) & 0x01); } if (img_n != out_n) { int q; // insert alpha = 255 cur = a->out + stride * j; if (img_n == 1) { for (q = x - 1; q >= 0; --q) { cur[q * 2 + 1] = 255; cur[q * 2 + 0] = cur[q]; } } else { STBI_ASSERT(img_n == 3); for (q = x - 1; q >= 0; --q) { cur[q * 4 + 3] = 255; cur[q * 4 + 2] = cur[q * 3 + 2]; cur[q * 4 + 1] = cur[q * 3 + 1]; cur[q * 4 + 0] = cur[q * 3 + 0]; } } } } } else if (depth == 16) { // force the image data from big-endian to platform-native. // this is done in a separate pass due to the decoding relying // on the data being untouched, but could probably be done // per-line during decode if care is taken. 
stbi_uc *cur = a->out; stbi__uint16 *cur16 = (stbi__uint16 *)cur; for (i = 0; i < x * y * out_n; ++i, cur16++, cur += 2) { *cur16 = (cur[0] << 8) | cur[1]; } } return 1; } static int stbi__create_png_image(stbi__png *a, stbi_uc *image_data, stbi__uint32 image_data_len, int out_n, int depth, int color, int interlaced) { int bytes = (depth == 16 ? 2 : 1); int out_bytes = out_n * bytes; stbi_uc *final; int p; if (!interlaced) return stbi__create_png_image_raw(a, image_data, image_data_len, out_n, a->s->img_x, a->s->img_y, depth, color); // de-interlacing final = (stbi_uc *)stbi__malloc_mad3(a->s->img_x, a->s->img_y, out_bytes, 0); for (p = 0; p < 7; ++p) { int xorig[] = { 0, 4, 0, 2, 0, 1, 0 }; int yorig[] = { 0, 0, 4, 0, 2, 0, 1 }; int xspc[] = { 8, 8, 4, 4, 2, 2, 1 }; int yspc[] = { 8, 8, 8, 4, 4, 2, 2 }; int i, j, x, y; // pass1_x[4] = 0, pass1_x[5] = 1, pass1_x[12] = 1 x = (a->s->img_x - xorig[p] + xspc[p] - 1) / xspc[p]; y = (a->s->img_y - yorig[p] + yspc[p] - 1) / yspc[p]; if (x && y) { stbi__uint32 img_len = ((((a->s->img_n * x * depth) + 7) >> 3) + 1) * y; if (!stbi__create_png_image_raw(a, image_data, image_data_len, out_n, x, y, depth, color)) { STBI_FREE(final); return 0; } for (j = 0; j < y; ++j) { for (i = 0; i < x; ++i) { int out_y = j * yspc[p] + yorig[p]; int out_x = i * xspc[p] + xorig[p]; memcpy(final + out_y * a->s->img_x * out_bytes + out_x * out_bytes, a->out + (j * x + i) * out_bytes, out_bytes); } } STBI_FREE(a->out); image_data += img_len; image_data_len -= img_len; } } a->out = final; return 1; } static int stbi__compute_transparency(stbi__png *z, stbi_uc tc[3], int out_n) { stbi__context *s = z->s; stbi__uint32 i, pixel_count = s->img_x * s->img_y; stbi_uc *p = z->out; // compute color-based transparency, assuming we've // already got 255 as the alpha value in the output STBI_ASSERT(out_n == 2 || out_n == 4); if (out_n == 2) { for (i = 0; i < pixel_count; ++i) { p[1] = (p[0] == tc[0] ? 
0 : 255);
			p += 2;
		}
	} else {
		for (i = 0; i < pixel_count; ++i) {
			if (p[0] == tc[0] && p[1] == tc[1] && p[2] == tc[2]) p[3] = 0;
			p += 4;
		}
	}
	return 1;
}

static int stbi__compute_transparency16(stbi__png *z, stbi__uint16 tc[3], int out_n) {
	stbi__context *s = z->s;
	stbi__uint32 i, pixel_count = s->img_x * s->img_y;
	stbi__uint16 *p = (stbi__uint16 *)z->out;

	// compute color-based transparency, assuming we've
	// already got 65535 as the alpha value in the output
	STBI_ASSERT(out_n == 2 || out_n == 4);

	if (out_n == 2) {
		for (i = 0; i < pixel_count; ++i) {
			p[1] = (p[0] == tc[0] ? 0 : 65535);
			p += 2;
		}
	} else {
		for (i = 0; i < pixel_count; ++i) {
			if (p[0] == tc[0] && p[1] == tc[1] && p[2] == tc[2]) p[3] = 0;
			p += 4;
		}
	}
	return 1;
}

static int stbi__expand_png_palette(stbi__png *a, stbi_uc *palette, int len, int pal_img_n) {
	stbi__uint32 i, pixel_count = a->s->img_x * a->s->img_y;
	stbi_uc *p, *temp_out, *orig = a->out;

	p = (stbi_uc *)stbi__malloc_mad2(pixel_count, pal_img_n, 0);
	if (p == NULL) return stbi__err("outofmem", "Out of memory");

	// between here and free(out) below, exiting would leak
	temp_out = p;

	if (pal_img_n == 3) {
		for (i = 0; i < pixel_count; ++i) {
			int n = orig[i] * 4;
			p[0] = palette[n];
			p[1] = palette[n + 1];
			p[2] = palette[n + 2];
			p += 3;
		}
	} else {
		for (i = 0; i < pixel_count; ++i) {
			int n = orig[i] * 4;
			p[0] = palette[n];
			p[1] = palette[n + 1];
			p[2] = palette[n + 2];
			p[3] = palette[n + 3];
			p += 4;
		}
	}
	STBI_FREE(a->out);
	a->out = temp_out;

	STBI_NOTUSED(len);

	return 1;
}

static int stbi__unpremultiply_on_load = 0;
static int stbi__de_iphone_flag = 0;

STBIDEF void stbi_set_unpremultiply_on_load(int flag_true_if_should_unpremultiply) {
	stbi__unpremultiply_on_load = flag_true_if_should_unpremultiply;
}

STBIDEF void stbi_convert_iphone_png_to_rgb(int flag_true_if_should_convert) {
	stbi__de_iphone_flag = flag_true_if_should_convert;
}

static void stbi__de_iphone(stbi__png *z) {
	stbi__context *s = z->s;
	stbi__uint32 i, pixel_count = s->img_x * s->img_y;
stbi_uc *p = z->out; if (s->img_out_n == 3) { // convert bgr to rgb for (i = 0; i < pixel_count; ++i) { stbi_uc t = p[0]; p[0] = p[2]; p[2] = t; p += 3; } } else { STBI_ASSERT(s->img_out_n == 4); if (stbi__unpremultiply_on_load) { // convert bgr to rgb and unpremultiply for (i = 0; i < pixel_count; ++i) { stbi_uc a = p[3]; stbi_uc t = p[0]; if (a) { stbi_uc half = a / 2; p[0] = (p[2] * 255 + half) / a; p[1] = (p[1] * 255 + half) / a; p[2] = (t * 255 + half) / a; } else { p[0] = p[2]; p[2] = t; } p += 4; } } else { // convert bgr to rgb for (i = 0; i < pixel_count; ++i) { stbi_uc t = p[0]; p[0] = p[2]; p[2] = t; p += 4; } } } } #define STBI__PNG_TYPE(a, b, c, d) (((unsigned)(a) << 24) + ((unsigned)(b) << 16) + ((unsigned)(c) << 8) + (unsigned)(d)) static int stbi__parse_png_file(stbi__png *z, int scan, int req_comp) { stbi_uc palette[1024], pal_img_n = 0; stbi_uc has_trans = 0, tc[3] = { 0 }; stbi__uint16 tc16[3]; stbi__uint32 ioff = 0, idata_limit = 0, i, pal_len = 0; int first = 1, k, interlace = 0, color = 0, is_iphone = 0; stbi__context *s = z->s; z->expanded = NULL; z->idata = NULL; z->out = NULL; if (!stbi__check_png_header(s)) return 0; if (scan == STBI__SCAN_type) return 1; for (;;) { stbi__pngchunk c = stbi__get_chunk_header(s); switch (c.type) { case STBI__PNG_TYPE('C', 'g', 'B', 'I'): is_iphone = 1; stbi__skip(s, c.length); break; case STBI__PNG_TYPE('I', 'H', 'D', 'R'): { int comp, filter; if (!first) return stbi__err("multiple IHDR", "Corrupt PNG"); first = 0; if (c.length != 13) return stbi__err("bad IHDR len", "Corrupt PNG"); s->img_x = stbi__get32be(s); if (s->img_x > (1 << 24)) return stbi__err("too large", "Very large image (corrupt?)"); s->img_y = stbi__get32be(s); if (s->img_y > (1 << 24)) return stbi__err("too large", "Very large image (corrupt?)"); z->depth = stbi__get8(s); if (z->depth != 1 && z->depth != 2 && z->depth != 4 && z->depth != 8 && z->depth != 16) return stbi__err("1/2/4/8/16-bit only", "PNG not supported: 1/2/4/8/16-bit only"); 
				color = stbi__get8(s);
				if (color > 6) return stbi__err("bad ctype", "Corrupt PNG");
				if (color == 3 && z->depth == 16) return stbi__err("bad ctype", "Corrupt PNG");
				if (color == 3)
					pal_img_n = 3;
				else if (color & 1)
					return stbi__err("bad ctype", "Corrupt PNG");
				comp = stbi__get8(s);
				if (comp) return stbi__err("bad comp method", "Corrupt PNG");
				filter = stbi__get8(s);
				if (filter) return stbi__err("bad filter method", "Corrupt PNG");
				interlace = stbi__get8(s);
				if (interlace > 1) return stbi__err("bad interlace method", "Corrupt PNG");
				if (!s->img_x || !s->img_y) return stbi__err("0-pixel image", "Corrupt PNG");
				if (!pal_img_n) {
					s->img_n = (color & 2 ? 3 : 1) + (color & 4 ? 1 : 0);
					if ((1 << 30) / s->img_x / s->img_n < s->img_y) return stbi__err("too large", "Image too large to decode");
					if (scan == STBI__SCAN_header) return 1;
				} else {
					// if paletted, then pal_n is our final components, and
					// img_n is # components to decompress/filter.
					s->img_n = 1;
					if ((1 << 30) / s->img_x / 4 < s->img_y) return stbi__err("too large", "Corrupt PNG");
					// if SCAN_header, have to scan to see if we have a tRNS
				}
				break;
			}

			case STBI__PNG_TYPE('P', 'L', 'T', 'E'): {
				if (first) return stbi__err("first not IHDR", "Corrupt PNG");
				if (c.length > 256 * 3) return stbi__err("invalid PLTE", "Corrupt PNG");
				pal_len = c.length / 3;
				if (pal_len * 3 != c.length) return stbi__err("invalid PLTE", "Corrupt PNG");
				for (i = 0; i < pal_len; ++i) {
					palette[i * 4 + 0] = stbi__get8(s);
					palette[i * 4 + 1] = stbi__get8(s);
					palette[i * 4 + 2] = stbi__get8(s);
					palette[i * 4 + 3] = 255;
				}
				break;
			}

			case STBI__PNG_TYPE('t', 'R', 'N', 'S'): {
				if (first) return stbi__err("first not IHDR", "Corrupt PNG");
				if (z->idata) return stbi__err("tRNS after IDAT", "Corrupt PNG");
				if (pal_img_n) {
					if (scan == STBI__SCAN_header) {
						s->img_n = 4;
						return 1;
					}
					if (pal_len == 0) return stbi__err("tRNS before PLTE", "Corrupt PNG");
					if (c.length > pal_len) return stbi__err("bad tRNS len", "Corrupt PNG");
					pal_img_n = 4;
					for (i = 0; i < c.length; ++i)
						palette[i * 4 + 3] = stbi__get8(s);
				} else {
					if (!(s->img_n & 1)) return stbi__err("tRNS with alpha", "Corrupt PNG");
					if (c.length != (stbi__uint32)s->img_n * 2) return stbi__err("bad tRNS len", "Corrupt PNG");
					has_trans = 1;
					if (z->depth == 16) {
						for (k = 0; k < s->img_n; ++k)
							tc16[k] = (stbi__uint16)stbi__get16be(s); // copy the values as-is
					} else {
						for (k = 0; k < s->img_n; ++k)
							tc[k] = (stbi_uc)(stbi__get16be(s) & 255) * stbi__depth_scale_table[z->depth]; // non 8-bit images will be larger
					}
				}
				break;
			}

			case STBI__PNG_TYPE('I', 'D', 'A', 'T'): {
				if (first) return stbi__err("first not IHDR", "Corrupt PNG");
				if (pal_img_n && !pal_len) return stbi__err("no PLTE", "Corrupt PNG");
				if (scan == STBI__SCAN_header) {
					s->img_n = pal_img_n;
					return 1;
				}
				if ((int)(ioff + c.length) < (int)ioff) return 0;
				if (ioff + c.length > idata_limit) {
					stbi__uint32 idata_limit_old = idata_limit;
					stbi_uc *p;
					if (idata_limit == 0) idata_limit = c.length > 4096 ? c.length : 4096;
					while (ioff + c.length > idata_limit)
						idata_limit *= 2;
					STBI_NOTUSED(idata_limit_old);
					p = (stbi_uc *)STBI_REALLOC_SIZED(z->idata, idata_limit_old, idata_limit);
					if (p == NULL) return stbi__err("outofmem", "Out of memory");
					z->idata = p;
				}
				if (!stbi__getn(s, z->idata + ioff, c.length)) return stbi__err("outofdata", "Corrupt PNG");
				ioff += c.length;
				break;
			}

			case STBI__PNG_TYPE('I', 'E', 'N', 'D'): {
				stbi__uint32 raw_len, bpl;
				if (first) return stbi__err("first not IHDR", "Corrupt PNG");
				if (scan != STBI__SCAN_load) return 1;
				if (z->idata == NULL) return stbi__err("no IDAT", "Corrupt PNG");
				// initial guess for decoded data size to avoid unnecessary reallocs
				bpl = (s->img_x * z->depth + 7) / 8; // bytes per line, per component
				raw_len = bpl * s->img_y * s->img_n /* pixels */ + s->img_y /* filter mode per row */;
				z->expanded = (stbi_uc *)stbi_zlib_decode_malloc_guesssize_headerflag((char *)z->idata, ioff, raw_len, (int *)&raw_len, !is_iphone);
				if (z->expanded == NULL) return 0; // zlib should set error
				STBI_FREE(z->idata);
				z->idata = NULL;
				if ((req_comp == s->img_n + 1 && req_comp != 3 && !pal_img_n) || has_trans)
					s->img_out_n = s->img_n + 1;
				else
					s->img_out_n = s->img_n;
				if (!stbi__create_png_image(z, z->expanded, raw_len, s->img_out_n, z->depth, color, interlace)) return 0;
				if (has_trans) {
					if (z->depth == 16) {
						if (!stbi__compute_transparency16(z, tc16, s->img_out_n)) return 0;
					} else {
						if (!stbi__compute_transparency(z, tc, s->img_out_n)) return 0;
					}
				}
				if (is_iphone && stbi__de_iphone_flag && s->img_out_n > 2)
					stbi__de_iphone(z);
				if (pal_img_n) {
					// pal_img_n == 3 or 4
					s->img_n = pal_img_n; // record the actual colors we had
					s->img_out_n = pal_img_n;
					if (req_comp >= 3) s->img_out_n = req_comp;
					if (!stbi__expand_png_palette(z, palette, pal_len, s->img_out_n))
						return 0;
				} else if (has_trans) {
					// non-paletted image with tRNS -> source image has (constant) alpha
					++s->img_n;
				}
				STBI_FREE(z->expanded);
				z->expanded = NULL;
				return 1;
			}

			default:
				// if critical, fail
				if (first) return stbi__err("first not IHDR", "Corrupt PNG");
				if ((c.type & (1 << 29)) == 0) {
#ifndef STBI_NO_FAILURE_STRINGS
					// not threadsafe
					static char invalid_chunk[] = "XXXX PNG chunk not known";
					invalid_chunk[0] = STBI__BYTECAST(c.type >> 24);
					invalid_chunk[1] = STBI__BYTECAST(c.type >> 16);
					invalid_chunk[2] = STBI__BYTECAST(c.type >> 8);
					invalid_chunk[3] = STBI__BYTECAST(c.type >> 0);
#endif
					return stbi__err(invalid_chunk, "PNG not supported: unknown PNG chunk type");
				}
				stbi__skip(s, c.length);
				break;
		}
		// end of PNG chunk, read and skip CRC
		stbi__get32be(s);
	}
}

static void *stbi__do_png(stbi__png *p, int *x, int *y, int *n, int req_comp, stbi__result_info *ri) {
	void *result = NULL;
	if (req_comp < 0 || req_comp > 4) return stbi__errpuc("bad req_comp", "Internal error");
	if (stbi__parse_png_file(p, STBI__SCAN_load, req_comp)) {
		if (p->depth < 8)
			ri->bits_per_channel = 8;
		else
			ri->bits_per_channel = p->depth;
		result = p->out;
		p->out = NULL;
		if (req_comp && req_comp != p->s->img_out_n) {
			if (ri->bits_per_channel == 8)
				result = stbi__convert_format((unsigned char *)result, p->s->img_out_n, req_comp, p->s->img_x, p->s->img_y);
			else
				result = stbi__convert_format16((stbi__uint16 *)result, p->s->img_out_n, req_comp, p->s->img_x, p->s->img_y);
			p->s->img_out_n = req_comp;
			if (result == NULL) return result;
		}
		*x = p->s->img_x;
		*y = p->s->img_y;
		if (n) *n = p->s->img_n;
	}
	STBI_FREE(p->out);
	p->out = NULL;
	STBI_FREE(p->expanded);
	p->expanded = NULL;
	STBI_FREE(p->idata);
	p->idata = NULL;
	return result;
}

static void *stbi__png_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri) {
	stbi__png p;
	p.s = s;
	return stbi__do_png(&p, x, y, comp, req_comp, ri);
}

static int stbi__png_test(stbi__context *s) {
	int r;
	r = stbi__check_png_header(s);
	stbi__rewind(s);
	return r;
}

static int stbi__png_info_raw(stbi__png *p, int *x, int *y, int *comp) {
	if (!stbi__parse_png_file(p, STBI__SCAN_header, 0)) {
		stbi__rewind(p->s);
		return 0;
	}
	if (x) *x = p->s->img_x;
	if (y) *y = p->s->img_y;
	if (comp) *comp = p->s->img_n;
	return 1;
}

static int stbi__png_info(stbi__context *s, int *x, int *y, int *comp) {
	stbi__png p;
	p.s = s;
	return stbi__png_info_raw(&p, x, y, comp);
}

static int stbi__png_is16(stbi__context *s) {
	stbi__png p;
	p.s = s;
	if (!stbi__png_info_raw(&p, NULL, NULL, NULL))
		return 0;
	if (p.depth != 16) {
		stbi__rewind(p.s);
		return 0;
	}
	return 1;
}
#endif

// Microsoft/Windows BMP image

#ifndef STBI_NO_BMP
static int stbi__bmp_test_raw(stbi__context *s) {
	int r;
	int sz;
	if (stbi__get8(s) != 'B') return 0;
	if (stbi__get8(s) != 'M') return 0;
	stbi__get32le(s); // discard filesize
	stbi__get16le(s); // discard reserved
	stbi__get16le(s); // discard reserved
	stbi__get32le(s); // discard data offset
	sz = stbi__get32le(s);
	r = (sz == 12 || sz == 40 || sz == 56 || sz == 108 || sz == 124);
	return r;
}

static int stbi__bmp_test(stbi__context *s) {
	int r = stbi__bmp_test_raw(s);
	stbi__rewind(s);
	return r;
}

// returns 0..31 for the highest set bit
static int stbi__high_bit(unsigned int z) {
	int n = 0;
	if (z == 0) return -1;
	if (z >= 0x10000) {
		n += 16;
		z >>= 16;
	}
	if (z >= 0x00100) {
		n += 8;
		z >>= 8;
	}
	if (z >= 0x00010) {
		n += 4;
		z >>= 4;
	}
	if (z >= 0x00004) {
		n += 2;
		z >>= 2;
	}
	if (z >= 0x00002) {
		n += 1;
		z >>= 1;
	}
	return n;
}

static int stbi__bitcount(unsigned int a) {
	a = (a & 0x55555555) + ((a >> 1) & 0x55555555); // max 2
	a = (a & 0x33333333) + ((a >> 2) & 0x33333333); // max 4
	a = (a + (a >> 4)) & 0x0f0f0f0f; // max 8 per 4, now 8 bits
	a = (a + (a >> 8)); // max 16 per 8 bits
	a = (a + (a >> 16)); // max 32 per 8 bits
	return a & 0xff;
}

// extract an arbitrarily-aligned N-bit value (N=bits)
// from v, and then make it 8-bits long and fractionally
// extend it to the full range.
static int stbi__shiftsigned(unsigned int v, int shift, int bits) {
	static unsigned int mul_table[9] = {
		0,
		0xff /*0b11111111*/, 0x55 /*0b01010101*/, 0x49 /*0b01001001*/, 0x11 /*0b00010001*/,
		0x21 /*0b00100001*/, 0x41 /*0b01000001*/, 0x81 /*0b10000001*/, 0x01 /*0b00000001*/,
	};
	static unsigned int shift_table[9] = {
		0, 0, 0, 1, 0, 2, 4, 6, 0,
	};
	if (shift < 0)
		v <<= -shift;
	else
		v >>= shift;
	STBI_ASSERT(v >= 0 && v < 256);
	v >>= (8 - bits);
	STBI_ASSERT(bits >= 0 && bits <= 8);
	return (int)((unsigned)v * mul_table[bits]) >> shift_table[bits];
}

typedef struct {
	int bpp, offset, hsz;
	unsigned int mr, mg, mb, ma, all_a;
} stbi__bmp_data;

static void *stbi__bmp_parse_header(stbi__context *s, stbi__bmp_data *info) {
	int hsz;
	if (stbi__get8(s) != 'B' || stbi__get8(s) != 'M') return stbi__errpuc("not BMP", "Corrupt BMP");
	stbi__get32le(s); // discard filesize
	stbi__get16le(s); // discard reserved
	stbi__get16le(s); // discard reserved
	info->offset = stbi__get32le(s);
	info->hsz = hsz = stbi__get32le(s);
	info->mr = info->mg = info->mb = info->ma = 0;
	if (hsz != 12 && hsz != 40 && hsz != 56 && hsz != 108 && hsz != 124) return stbi__errpuc("unknown BMP", "BMP type not supported: unknown");
	if (hsz == 12) {
		s->img_x = stbi__get16le(s);
		s->img_y = stbi__get16le(s);
	} else {
		s->img_x = stbi__get32le(s);
		s->img_y = stbi__get32le(s);
	}
	if (stbi__get16le(s) != 1) return stbi__errpuc("bad BMP", "bad BMP");
	info->bpp = stbi__get16le(s);
	if (hsz != 12) {
		int compress = stbi__get32le(s);
		if (compress == 1 || compress == 2) return stbi__errpuc("BMP RLE", "BMP type not supported: RLE");
		stbi__get32le(s); // discard sizeof
		stbi__get32le(s); // discard hres
		stbi__get32le(s); // discard vres
		stbi__get32le(s); // discard colorsused
		stbi__get32le(s); // discard max important
		if (hsz == 40 || hsz == 56) {
			if (hsz == 56) {
				stbi__get32le(s);
				stbi__get32le(s);
				stbi__get32le(s);
				stbi__get32le(s);
			}
			if (info->bpp == 16 || info->bpp == 32) {
				if (compress == 0) {
					if (info->bpp == 32) {
						info->mr = 0xffu << 16;
						info->mg = 0xffu << 8;
						info->mb = 0xffu << 0;
						info->ma = 0xffu << 24;
						info->all_a = 0; // if all_a is 0 at end, then we loaded alpha channel but it was all 0
					} else {
						info->mr = 31u << 10;
						info->mg = 31u << 5;
						info->mb = 31u << 0;
					}
				} else if (compress == 3) {
					info->mr = stbi__get32le(s);
					info->mg = stbi__get32le(s);
					info->mb = stbi__get32le(s);
					// not documented, but generated by photoshop and handled by mspaint
					if (info->mr == info->mg && info->mg == info->mb) {
						// ?!?!?
						return stbi__errpuc("bad BMP", "bad BMP");
					}
				} else
					return stbi__errpuc("bad BMP", "bad BMP");
			}
		} else {
			int i;
			if (hsz != 108 && hsz != 124)
				return stbi__errpuc("bad BMP", "bad BMP");
			info->mr = stbi__get32le(s);
			info->mg = stbi__get32le(s);
			info->mb = stbi__get32le(s);
			info->ma = stbi__get32le(s);
			stbi__get32le(s); // discard color space
			for (i = 0; i < 12; ++i)
				stbi__get32le(s); // discard color space parameters
			if (hsz == 124) {
				stbi__get32le(s); // discard rendering intent
				stbi__get32le(s); // discard offset of profile data
				stbi__get32le(s); // discard size of profile data
				stbi__get32le(s); // discard reserved
			}
		}
	}
	return (void *)1;
}

static void *stbi__bmp_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri) {
	stbi_uc *out;
	unsigned int mr = 0, mg = 0, mb = 0, ma = 0, all_a;
	stbi_uc pal[256][4];
	int psize = 0, i, j, width;
	int flip_vertically, pad, target;
	stbi__bmp_data info;
	STBI_NOTUSED(ri);

	info.all_a = 255;
	if (stbi__bmp_parse_header(s, &info) == NULL)
		return NULL; // error code already set

	flip_vertically = ((int)s->img_y) > 0;
	s->img_y = abs((int)s->img_y);

	mr = info.mr;
	mg = info.mg;
	mb = info.mb;
	ma = info.ma;
	all_a = info.all_a;

	if (info.hsz == 12) {
		if (info.bpp < 24)
			psize = (info.offset - 14 - 24) / 3;
	} else {
		if (info.bpp < 16)
			psize = (info.offset - 14 - info.hsz) >> 2;
	}

	s->img_n = ma ? 4 : 3;
	if (req_comp && req_comp >= 3) // we can directly decode 3 or 4
		target = req_comp;
	else
		target = s->img_n; // if they want monochrome, we'll post-convert

	// sanity-check size
	if (!stbi__mad3sizes_valid(target, s->img_x, s->img_y, 0))
		return stbi__errpuc("too large", "Corrupt BMP");

	out = (stbi_uc *)stbi__malloc_mad3(target, s->img_x, s->img_y, 0);
	if (!out) return stbi__errpuc("outofmem", "Out of memory");
	if (info.bpp < 16) {
		int z = 0;
		if (psize == 0 || psize > 256) {
			STBI_FREE(out);
			return stbi__errpuc("invalid", "Corrupt BMP");
		}
		for (i = 0; i < psize; ++i) {
			pal[i][2] = stbi__get8(s);
			pal[i][1] = stbi__get8(s);
			pal[i][0] = stbi__get8(s);
			if (info.hsz != 12) stbi__get8(s);
			pal[i][3] = 255;
		}
		stbi__skip(s, info.offset - 14 - info.hsz - psize * (info.hsz == 12 ? 3 : 4));
		if (info.bpp == 1)
			width = (s->img_x + 7) >> 3;
		else if (info.bpp == 4)
			width = (s->img_x + 1) >> 1;
		else if (info.bpp == 8)
			width = s->img_x;
		else {
			STBI_FREE(out);
			return stbi__errpuc("bad bpp", "Corrupt BMP");
		}
		pad = (-width) & 3;
		if (info.bpp == 1) {
			for (j = 0; j < (int)s->img_y; ++j) {
				int bit_offset = 7, v = stbi__get8(s);
				for (i = 0; i < (int)s->img_x; ++i) {
					int color = (v >> bit_offset) & 0x1;
					out[z++] = pal[color][0];
					out[z++] = pal[color][1];
					out[z++] = pal[color][2];
					if (target == 4) out[z++] = 255;
					if (i + 1 == (int)s->img_x) break;
					if ((--bit_offset) < 0) {
						bit_offset = 7;
						v = stbi__get8(s);
					}
				}
				stbi__skip(s, pad);
			}
		} else {
			for (j = 0; j < (int)s->img_y; ++j) {
				for (i = 0; i < (int)s->img_x; i += 2) {
					int v = stbi__get8(s), v2 = 0;
					if (info.bpp == 4) {
						v2 = v & 15;
						v >>= 4;
					}
					out[z++] = pal[v][0];
					out[z++] = pal[v][1];
					out[z++] = pal[v][2];
					if (target == 4) out[z++] = 255;
					if (i + 1 == (int)s->img_x) break;
					v = (info.bpp == 8) ? stbi__get8(s) : v2;
					out[z++] = pal[v][0];
					out[z++] = pal[v][1];
					out[z++] = pal[v][2];
					if (target == 4) out[z++] = 255;
				}
				stbi__skip(s, pad);
			}
		}
	} else {
		int rshift = 0, gshift = 0, bshift = 0, ashift = 0, rcount = 0, gcount = 0, bcount = 0, acount = 0;
		int z = 0;
		int easy = 0;
		stbi__skip(s, info.offset - 14 - info.hsz);
		if (info.bpp == 24)
			width = 3 * s->img_x;
		else if (info.bpp == 16)
			width = 2 * s->img_x;
		else /* bpp = 32 and pad = 0 */
			width = 0;
		pad = (-width) & 3;
		if (info.bpp == 24) {
			easy = 1;
		} else if (info.bpp == 32) {
			if (mb == 0xff && mg == 0xff00 && mr == 0x00ff0000 && ma == 0xff000000)
				easy = 2;
		}
		if (!easy) {
			if (!mr || !mg || !mb) {
				STBI_FREE(out);
				return stbi__errpuc("bad masks", "Corrupt BMP");
			}
			// right shift amt to put high bit in position #7
			rshift = stbi__high_bit(mr) - 7;
			rcount = stbi__bitcount(mr);
			gshift = stbi__high_bit(mg) - 7;
			gcount = stbi__bitcount(mg);
			bshift = stbi__high_bit(mb) - 7;
			bcount = stbi__bitcount(mb);
			ashift = stbi__high_bit(ma) - 7;
			acount = stbi__bitcount(ma);
		}
		for (j = 0; j < (int)s->img_y; ++j) {
			if (easy) {
				for (i = 0; i < (int)s->img_x; ++i) {
					unsigned char a;
					out[z + 2] = stbi__get8(s);
					out[z + 1] = stbi__get8(s);
					out[z + 0] = stbi__get8(s);
					z += 3;
					a = (easy == 2 ? stbi__get8(s) : 255);
					all_a |= a;
					if (target == 4) out[z++] = a;
				}
			} else {
				int bpp = info.bpp;
				for (i = 0; i < (int)s->img_x; ++i) {
					stbi__uint32 v = (bpp == 16 ? (stbi__uint32)stbi__get16le(s) : stbi__get32le(s));
					unsigned int a;
					out[z++] = STBI__BYTECAST(stbi__shiftsigned(v & mr, rshift, rcount));
					out[z++] = STBI__BYTECAST(stbi__shiftsigned(v & mg, gshift, gcount));
					out[z++] = STBI__BYTECAST(stbi__shiftsigned(v & mb, bshift, bcount));
					a = (ma ?
stbi__shiftsigned(v & ma, ashift, acount) : 255);
					all_a |= a;
					if (target == 4) out[z++] = STBI__BYTECAST(a);
				}
			}
			stbi__skip(s, pad);
		}
	}

	// if alpha channel is all 0s, replace with all 255s
	if (target == 4 && all_a == 0)
		for (i = 4 * s->img_x * s->img_y - 1; i >= 0; i -= 4)
			out[i] = 255;

	if (flip_vertically) {
		stbi_uc t;
		for (j = 0; j < (int)s->img_y >> 1; ++j) {
			stbi_uc *p1 = out + j * s->img_x * target;
			stbi_uc *p2 = out + (s->img_y - 1 - j) * s->img_x * target;
			for (i = 0; i < (int)s->img_x * target; ++i) {
				t = p1[i];
				p1[i] = p2[i];
				p2[i] = t;
			}
		}
	}

	if (req_comp && req_comp != target) {
		out = stbi__convert_format(out, target, req_comp, s->img_x, s->img_y);
		if (out == NULL) return out; // stbi__convert_format frees input on failure
	}

	*x = s->img_x;
	*y = s->img_y;
	if (comp) *comp = s->img_n;
	return out;
}
#endif

// Targa Truevision - TGA
// by Jonathan Dummer
#ifndef STBI_NO_TGA

// returns STBI_rgb or whatever, 0 on error
static int stbi__tga_get_comp(int bits_per_pixel, int is_grey, int *is_rgb16) {
	// only RGB or RGBA (incl. 16bit) or grey allowed
	if (is_rgb16) *is_rgb16 = 0;
	switch (bits_per_pixel) {
		case 8:
			return STBI_grey;
		case 16:
			if (is_grey) return STBI_grey_alpha;
			// fallthrough
		case 15:
			if (is_rgb16) *is_rgb16 = 1;
			return STBI_rgb;
		case 24: // fallthrough
		case 32:
			return bits_per_pixel / 8;
		default:
			return 0;
	}
}

static int stbi__tga_info(stbi__context *s, int *x, int *y, int *comp) {
	int tga_w, tga_h, tga_comp, tga_image_type, tga_bits_per_pixel, tga_colormap_bpp;
	int sz, tga_colormap_type;
	stbi__get8(s); // discard Offset
	tga_colormap_type = stbi__get8(s); // colormap type
	if (tga_colormap_type > 1) {
		stbi__rewind(s);
		return 0; // only RGB or indexed allowed
	}
	tga_image_type = stbi__get8(s); // image type
	if (tga_colormap_type == 1) { // colormapped (paletted) image
		if (tga_image_type != 1 && tga_image_type != 9) {
			stbi__rewind(s);
			return 0;
		}
		stbi__skip(s, 4); // skip index of first colormap entry and number of entries
		sz = stbi__get8(s); // check bits per palette color entry
		if ((sz != 8) && (sz != 15) && (sz != 16) && (sz != 24) && (sz != 32)) {
			stbi__rewind(s);
			return 0;
		}
		stbi__skip(s, 4); // skip image x and y origin
		tga_colormap_bpp = sz;
	} else { // "normal" image w/o colormap - only RGB or grey allowed, +/- RLE
		if ((tga_image_type != 2) && (tga_image_type != 3) && (tga_image_type != 10) && (tga_image_type != 11)) {
			stbi__rewind(s);
			return 0; // only RGB or grey allowed, +/- RLE
		}
		stbi__skip(s, 9); // skip colormap specification and image x/y origin
		tga_colormap_bpp = 0;
	}
	tga_w = stbi__get16le(s);
	if (tga_w < 1) {
		stbi__rewind(s);
		return 0; // test width
	}
	tga_h = stbi__get16le(s);
	if (tga_h < 1) {
		stbi__rewind(s);
		return 0; // test height
	}
	tga_bits_per_pixel = stbi__get8(s); // bits per pixel
	stbi__get8(s); // ignore alpha bits
	if (tga_colormap_bpp != 0) {
		if ((tga_bits_per_pixel != 8) && (tga_bits_per_pixel != 16)) {
			// when using a colormap, tga_bits_per_pixel is the size of the indexes
			// I don't think anything but 8 or 16bit indexes makes sense
			stbi__rewind(s);
			return 0;
		}
		tga_comp = stbi__tga_get_comp(tga_colormap_bpp, 0, NULL);
	} else {
		tga_comp = stbi__tga_get_comp(tga_bits_per_pixel, (tga_image_type == 3) || (tga_image_type == 11), NULL);
	}
	if (!tga_comp) {
		stbi__rewind(s);
		return 0;
	}
	if (x) *x = tga_w;
	if (y) *y = tga_h;
	if (comp) *comp = tga_comp;
	return 1; // seems to have passed everything
}

static int stbi__tga_test(stbi__context *s) {
	int res = 0;
	int sz, tga_color_type;
	stbi__get8(s); // discard Offset
	tga_color_type = stbi__get8(s); // color type
	if (tga_color_type > 1) goto errorEnd; // only RGB or indexed allowed
	sz = stbi__get8(s); // image type
	if (tga_color_type == 1) { // colormapped (paletted) image
		if (sz != 1 && sz != 9) goto errorEnd; // colortype 1 demands image type 1 or 9
		stbi__skip(s, 4); // skip index of first colormap entry and number of entries
		sz = stbi__get8(s); // check bits per palette color entry
		if ((sz != 8) && (sz != 15) && (sz != 16) && (sz != 24) && (sz != 32)) goto errorEnd;
		stbi__skip(s, 4); // skip image x and y origin
	} else { // "normal" image w/o colormap
		if ((sz != 2) && (sz != 3) && (sz != 10) && (sz != 11)) goto errorEnd; // only RGB or grey allowed, +/- RLE
		stbi__skip(s, 9); // skip colormap specification and image x/y origin
	}
	if (stbi__get16le(s) < 1) goto errorEnd; // test width
	if (stbi__get16le(s) < 1) goto errorEnd; // test height
	sz = stbi__get8(s); // bits per pixel
	if ((tga_color_type == 1) && (sz != 8) && (sz != 16)) goto errorEnd; // for colormapped images, bpp is size of an index
	if ((sz != 8) && (sz != 15) && (sz != 16) && (sz != 24) && (sz != 32)) goto errorEnd;

	res = 1; // if we got this far, everything's good and we can return 1 instead of 0

errorEnd:
	stbi__rewind(s);
	return res;
}

// read 16bit value and convert to 24bit RGB
static void stbi__tga_read_rgb16(stbi__context *s, stbi_uc *out) {
	stbi__uint16 px = (stbi__uint16)stbi__get16le(s);
	stbi__uint16 fiveBitMask = 31;
	// we have 3 channels with 5bits each
	int r = (px >> 10) & fiveBitMask;
	int g = (px >> 5) & fiveBitMask;
	int b = px & fiveBitMask;
	// Note that this saves the data in RGB(A) order, so it doesn't need to be swapped later
	out[0] = (stbi_uc)((r * 255) / 31);
	out[1] = (stbi_uc)((g * 255) / 31);
	out[2] = (stbi_uc)((b * 255) / 31);

	// some people claim that the most significant bit might be used for alpha
	// (possibly if an alpha-bit is set in the "image descriptor byte")
	// but that only made 16bit test images completely translucent..
	// so let's treat all 15 and 16bit TGAs as RGB with no alpha.
}

static void *stbi__tga_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri) {
	// read in the TGA header stuff
	int tga_offset = stbi__get8(s);
	int tga_indexed = stbi__get8(s);
	int tga_image_type = stbi__get8(s);
	int tga_is_RLE = 0;
	int tga_palette_start = stbi__get16le(s);
	int tga_palette_len = stbi__get16le(s);
	int tga_palette_bits = stbi__get8(s);
	int tga_x_origin = stbi__get16le(s);
	int tga_y_origin = stbi__get16le(s);
	int tga_width = stbi__get16le(s);
	int tga_height = stbi__get16le(s);
	int tga_bits_per_pixel = stbi__get8(s);
	int tga_comp, tga_rgb16 = 0;
	int tga_inverted = stbi__get8(s);
	// int tga_alpha_bits = tga_inverted & 15; // the 4 lowest bits - unused (useless?)
	// image data
	unsigned char *tga_data;
	unsigned char *tga_palette = NULL;
	int i, j;
	unsigned char raw_data[4] = { 0 };
	int RLE_count = 0;
	int RLE_repeating = 0;
	int read_next_pixel = 1;
	STBI_NOTUSED(ri);

	// do a tiny bit of preprocessing
	if (tga_image_type >= 8) {
		tga_image_type -= 8;
		tga_is_RLE = 1;
	}
	tga_inverted = 1 - ((tga_inverted >> 5) & 1);

	// If I'm paletted, then I'll use the number of bits from the palette
	if (tga_indexed)
		tga_comp = stbi__tga_get_comp(tga_palette_bits, 0, &tga_rgb16);
	else
		tga_comp = stbi__tga_get_comp(tga_bits_per_pixel, (tga_image_type == 3), &tga_rgb16);

	if (!tga_comp) // shouldn't really happen, stbi__tga_test() should have ensured basic consistency
		return stbi__errpuc("bad format", "Can't find out TGA pixelformat");

	// tga info
	*x = tga_width;
	*y = tga_height;
	if (comp) *comp = tga_comp;

	if (!stbi__mad3sizes_valid(tga_width, tga_height, tga_comp, 0))
		return stbi__errpuc("too large", "Corrupt TGA");

	tga_data = (unsigned char *)stbi__malloc_mad3(tga_width, tga_height, tga_comp, 0);
	if (!tga_data) return stbi__errpuc("outofmem", "Out of memory");

	// skip to the data's starting position (offset usually = 0)
	stbi__skip(s, tga_offset);

	if (!tga_indexed && !tga_is_RLE && !tga_rgb16) {
		for (i = 0; i < tga_height; ++i) {
			int row = tga_inverted ? tga_height - i - 1 : i;
			stbi_uc *tga_row = tga_data + row * tga_width * tga_comp;
			stbi__getn(s, tga_row, tga_width * tga_comp);
		}
	} else {
		// do I need to load a palette?
		if (tga_indexed) {
			// any data to skip? (offset usually = 0)
			stbi__skip(s, tga_palette_start);
			// load the palette
			tga_palette = (unsigned char *)stbi__malloc_mad2(tga_palette_len, tga_comp, 0);
			if (!tga_palette) {
				STBI_FREE(tga_data);
				return stbi__errpuc("outofmem", "Out of memory");
			}
			if (tga_rgb16) {
				stbi_uc *pal_entry = tga_palette;
				STBI_ASSERT(tga_comp == STBI_rgb);
				for (i = 0; i < tga_palette_len; ++i) {
					stbi__tga_read_rgb16(s, pal_entry);
					pal_entry += tga_comp;
				}
			} else if (!stbi__getn(s, tga_palette, tga_palette_len * tga_comp)) {
				STBI_FREE(tga_data);
				STBI_FREE(tga_palette);
				return stbi__errpuc("bad palette", "Corrupt TGA");
			}
		}
		// load the data
		for (i = 0; i < tga_width * tga_height; ++i) {
			// if I'm in RLE mode, do I need to get a RLE packet?
			if (tga_is_RLE) {
				if (RLE_count == 0) {
					// yep, get the next byte as a RLE command
					int RLE_cmd = stbi__get8(s);
					RLE_count = 1 + (RLE_cmd & 127);
					RLE_repeating = RLE_cmd >> 7;
					read_next_pixel = 1;
				} else if (!RLE_repeating) {
					read_next_pixel = 1;
				}
			} else {
				read_next_pixel = 1;
			}
			// OK, if I need to read a pixel, do it now
			if (read_next_pixel) {
				// load however much data we did have
				if (tga_indexed) {
					// read in index, then perform the lookup
					int pal_idx = (tga_bits_per_pixel == 8) ? stbi__get8(s) : stbi__get16le(s);
					if (pal_idx >= tga_palette_len) {
						// invalid index
						pal_idx = 0;
					}
					pal_idx *= tga_comp;
					for (j = 0; j < tga_comp; ++j) {
						raw_data[j] = tga_palette[pal_idx + j];
					}
				} else if (tga_rgb16) {
					STBI_ASSERT(tga_comp == STBI_rgb);
					stbi__tga_read_rgb16(s, raw_data);
				} else {
					// read in the data raw
					for (j = 0; j < tga_comp; ++j) {
						raw_data[j] = stbi__get8(s);
					}
				}
				// clear the reading flag for the next pixel
				read_next_pixel = 0;
			} // end of reading a pixel

			// copy data
			for (j = 0; j < tga_comp; ++j)
				tga_data[i * tga_comp + j] = raw_data[j];

			// in case we're in RLE mode, keep counting down
			--RLE_count;
		}
		// do I need to invert the image?
		if (tga_inverted) {
			for (j = 0; j * 2 < tga_height; ++j) {
				int index1 = j * tga_width * tga_comp;
				int index2 = (tga_height - 1 - j) * tga_width * tga_comp;
				for (i = tga_width * tga_comp; i > 0; --i) {
					unsigned char temp = tga_data[index1];
					tga_data[index1] = tga_data[index2];
					tga_data[index2] = temp;
					++index1;
					++index2;
				}
			}
		}
		// clear my palette, if I had one
		if (tga_palette != NULL) {
			STBI_FREE(tga_palette);
		}
	}

	// swap RGB - if the source data was RGB16, it already is in the right order
	if (tga_comp >= 3 && !tga_rgb16) {
		unsigned char *tga_pixel = tga_data;
		for (i = 0; i < tga_width * tga_height; ++i) {
			unsigned char temp = tga_pixel[0];
			tga_pixel[0] = tga_pixel[2];
			tga_pixel[2] = temp;
			tga_pixel += tga_comp;
		}
	}

	// convert to target component count
	if (req_comp && req_comp != tga_comp)
		tga_data = stbi__convert_format(tga_data, tga_comp, req_comp, tga_width, tga_height);

	// the things I do to get rid of an error message, and yet keep
	// Microsoft's C compilers happy... [8^(
	tga_palette_start = tga_palette_len = tga_palette_bits = tga_x_origin = tga_y_origin = 0;
	// OK, done
	return tga_data;
}
#endif

// *************************************************************************************************
// Photoshop PSD loader -- PD by Thatcher Ulrich, integration by Nicolas Schulz, tweaked by STB

#ifndef STBI_NO_PSD
static int stbi__psd_test(stbi__context *s) {
	int r = (stbi__get32be(s) == 0x38425053);
	stbi__rewind(s);
	return r;
}

static int stbi__psd_decode_rle(stbi__context *s, stbi_uc *p, int pixelCount) {
	int count, nleft, len;
	count = 0;
	while ((nleft = pixelCount - count) > 0) {
		len = stbi__get8(s);
		if (len == 128) {
			// No-op.
		} else if (len < 128) {
			// Copy next len+1 bytes literally.
			len++;
			if (len > nleft) return 0; // corrupt data
			count += len;
			while (len) {
				*p = stbi__get8(s);
				p += 4;
				len--;
			}
		} else if (len > 128) {
			stbi_uc val;
			// Next -len+1 bytes in the dest are replicated from next source byte.
			// (Interpret len as a negative 8-bit int.)
len = 257 - len; if (len > nleft) return 0; // corrupt data val = stbi__get8(s); count += len; while (len) { *p = val; p += 4; len--; } } } return 1; } static void *stbi__psd_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri, int bpc) { int pixelCount; int channelCount, compression; int channel, i; int bitdepth; int w, h; stbi_uc *out; STBI_NOTUSED(ri); // Check identifier if (stbi__get32be(s) != 0x38425053) // "8BPS" return stbi__errpuc("not PSD", "Corrupt PSD image"); // Check file type version. if (stbi__get16be(s) != 1) return stbi__errpuc("wrong version", "Unsupported version of PSD image"); // Skip 6 reserved bytes. stbi__skip(s, 6); // Read the number of channels (R, G, B, A, etc). channelCount = stbi__get16be(s); if (channelCount < 0 || channelCount > 16) return stbi__errpuc("wrong channel count", "Unsupported number of channels in PSD image"); // Read the rows and columns of the image. h = stbi__get32be(s); w = stbi__get32be(s); // Make sure the depth is 8 bits. bitdepth = stbi__get16be(s); if (bitdepth != 8 && bitdepth != 16) return stbi__errpuc("unsupported bit depth", "PSD bit depth is not 8 or 16 bit"); // Make sure the color mode is RGB. // Valid options are: // 0: Bitmap // 1: Grayscale // 2: Indexed color // 3: RGB color // 4: CMYK color // 7: Multichannel // 8: Duotone // 9: Lab color if (stbi__get16be(s) != 3) return stbi__errpuc("wrong color format", "PSD is not in RGB color format"); // Skip the Mode Data. (It's the palette for indexed color; other info for other modes.) stbi__skip(s, stbi__get32be(s)); // Skip the image resources. (resolution, pen tool paths, etc) stbi__skip(s, stbi__get32be(s)); // Skip the reserved data. stbi__skip(s, stbi__get32be(s)); // Find out if the data is compressed. 
// Known values: // 0: no compression // 1: RLE compressed compression = stbi__get16be(s); if (compression > 1) return stbi__errpuc("bad compression", "PSD has an unknown compression format"); // Check size if (!stbi__mad3sizes_valid(4, w, h, 0)) return stbi__errpuc("too large", "Corrupt PSD"); // Create the destination image. if (!compression && bitdepth == 16 && bpc == 16) { out = (stbi_uc *)stbi__malloc_mad3(8, w, h, 0); ri->bits_per_channel = 16; } else out = (stbi_uc *)stbi__malloc(4 * w * h); if (!out) return stbi__errpuc("outofmem", "Out of memory"); pixelCount = w * h; // Initialize the data to zero. //memset( out, 0, pixelCount * 4 ); // Finally, the image data. if (compression) { // RLE as used by .PSD and .TIFF // Loop until you get the number of unpacked bytes you are expecting: // Read the next source byte into n. // If n is between 0 and 127 inclusive, copy the next n+1 bytes literally. // Else if n is between -127 and -1 inclusive, copy the next byte -n+1 times. // Else if n is 128, noop. // Endloop // The RLE-compressed data is preceded by a 2-byte data count for each row in the data, // which we're going to just skip. stbi__skip(s, h * channelCount * 2); // Read the RLE data by channel. for (channel = 0; channel < 4; channel++) { stbi_uc *p; p = out + channel; if (channel >= channelCount) { // Fill this channel with default data. for (i = 0; i < pixelCount; i++, p += 4) *p = (channel == 3 ? 255 : 0); } else { // Read the RLE data. if (!stbi__psd_decode_rle(s, p, pixelCount)) { STBI_FREE(out); return stbi__errpuc("corrupt", "bad RLE data"); } } } } else { // We're at the raw image data. It's each channel in order (Red, Green, Blue, Alpha, ...) // where each channel consists of an 8-bit (or 16-bit) value for each pixel in the image. // Read the data by channel. for (channel = 0; channel < 4; channel++) { if (channel >= channelCount) { // Fill this channel with default data. 
if (bitdepth == 16 && bpc == 16) { stbi__uint16 *q = ((stbi__uint16 *)out) + channel; stbi__uint16 val = channel == 3 ? 65535 : 0; for (i = 0; i < pixelCount; i++, q += 4) *q = val; } else { stbi_uc *p = out + channel; stbi_uc val = channel == 3 ? 255 : 0; for (i = 0; i < pixelCount; i++, p += 4) *p = val; } } else { if (ri->bits_per_channel == 16) { // output bpc stbi__uint16 *q = ((stbi__uint16 *)out) + channel; for (i = 0; i < pixelCount; i++, q += 4) *q = (stbi__uint16)stbi__get16be(s); } else { stbi_uc *p = out + channel; if (bitdepth == 16) { // input bpc for (i = 0; i < pixelCount; i++, p += 4) *p = (stbi_uc)(stbi__get16be(s) >> 8); } else { for (i = 0; i < pixelCount; i++, p += 4) *p = stbi__get8(s); } } } } } // remove weird white matte from PSD if (channelCount >= 4) { if (ri->bits_per_channel == 16) { for (i = 0; i < w * h; ++i) { stbi__uint16 *pixel = (stbi__uint16 *)out + 4 * i; if (pixel[3] != 0 && pixel[3] != 65535) { float a = pixel[3] / 65535.0f; float ra = 1.0f / a; float inv_a = 65535.0f * (1 - ra); pixel[0] = (stbi__uint16)(pixel[0] * ra + inv_a); pixel[1] = (stbi__uint16)(pixel[1] * ra + inv_a); pixel[2] = (stbi__uint16)(pixel[2] * ra + inv_a); } } } else { for (i = 0; i < w * h; ++i) { unsigned char *pixel = out + 4 * i; if (pixel[3] != 0 && pixel[3] != 255) { float a = pixel[3] / 255.0f; float ra = 1.0f / a; float inv_a = 255.0f * (1 - ra); pixel[0] = (unsigned char)(pixel[0] * ra + inv_a); pixel[1] = (unsigned char)(pixel[1] * ra + inv_a); pixel[2] = (unsigned char)(pixel[2] * ra + inv_a); } } } } // convert to desired output format if (req_comp && req_comp != 4) { if (ri->bits_per_channel == 16) out = (stbi_uc *)stbi__convert_format16((stbi__uint16 *)out, 4, req_comp, w, h); else out = stbi__convert_format(out, 4, req_comp, w, h); if (out == NULL) return out; // stbi__convert_format frees input on failure } if (comp) *comp = 4; *y = h; *x = w; return out; } #endif // 
************************************************************************************************* // Softimage PIC loader // by Tom Seddon // // See http://softimage.wiki.softimage.com/index.php/INFO:_PIC_file_format // See http://ozviz.wasp.uwa.edu.au/~pbourke/dataformats/softimagepic/ #ifndef STBI_NO_PIC static int stbi__pic_is4(stbi__context *s, const char *str) { int i; for (i = 0; i < 4; ++i) if (stbi__get8(s) != (stbi_uc)str[i]) return 0; return 1; } static int stbi__pic_test_core(stbi__context *s) { int i; if (!stbi__pic_is4(s, "\x53\x80\xF6\x34")) return 0; for (i = 0; i < 84; ++i) stbi__get8(s); if (!stbi__pic_is4(s, "PICT")) return 0; return 1; } typedef struct { stbi_uc size, type, channel; } stbi__pic_packet; static stbi_uc *stbi__readval(stbi__context *s, int channel, stbi_uc *dest) { int mask = 0x80, i; for (i = 0; i < 4; ++i, mask >>= 1) { if (channel & mask) { if (stbi__at_eof(s)) return stbi__errpuc("bad file", "PIC file too short"); dest[i] = stbi__get8(s); } } return dest; } static void stbi__copyval(int channel, stbi_uc *dest, const stbi_uc *src) { int mask = 0x80, i; for (i = 0; i < 4; ++i, mask >>= 1) if (channel & mask) dest[i] = src[i]; } static stbi_uc *stbi__pic_load_core(stbi__context *s, int width, int height, int *comp, stbi_uc *result) { int act_comp = 0, num_packets = 0, y, chained; stbi__pic_packet packets[10]; // this will (should...) cater for even some bizarre stuff like having data // for the same channel in multiple packets. 
do { stbi__pic_packet *packet; if (num_packets == sizeof(packets) / sizeof(packets[0])) return stbi__errpuc("bad format", "too many packets"); packet = &packets[num_packets++]; chained = stbi__get8(s); packet->size = stbi__get8(s); packet->type = stbi__get8(s); packet->channel = stbi__get8(s); act_comp |= packet->channel; if (stbi__at_eof(s)) return stbi__errpuc("bad file", "file too short (reading packets)"); if (packet->size != 8) return stbi__errpuc("bad format", "packet isn't 8bpp"); } while (chained); *comp = (act_comp & 0x10 ? 4 : 3); // has alpha channel? for (y = 0; y < height; ++y) { int packet_idx; for (packet_idx = 0; packet_idx < num_packets; ++packet_idx) { stbi__pic_packet *packet = &packets[packet_idx]; stbi_uc *dest = result + y * width * 4; switch (packet->type) { default: return stbi__errpuc("bad format", "packet has bad compression type"); case 0: { //uncompressed int x; for (x = 0; x < width; ++x, dest += 4) if (!stbi__readval(s, packet->channel, dest)) return 0; break; } case 1: //Pure RLE { int left = width, i; while (left > 0) { stbi_uc count, value[4]; count = stbi__get8(s); if (stbi__at_eof(s)) return stbi__errpuc("bad file", "file too short (pure read count)"); if (count > left) count = (stbi_uc)left; if (!stbi__readval(s, packet->channel, value)) return 0; for (i = 0; i < count; ++i, dest += 4) stbi__copyval(packet->channel, dest, value); left -= count; } } break; case 2: { //Mixed RLE int left = width; while (left > 0) { int count = stbi__get8(s), i; if (stbi__at_eof(s)) return stbi__errpuc("bad file", "file too short (mixed read count)"); if (count >= 128) { // Repeated stbi_uc value[4]; if (count == 128) count = stbi__get16be(s); else count -= 127; if (count > left) return stbi__errpuc("bad file", "scanline overrun"); if (!stbi__readval(s, packet->channel, value)) return 0; for (i = 0; i < count; ++i, dest += 4) stbi__copyval(packet->channel, dest, value); } else { // Raw ++count; if (count > left) return stbi__errpuc("bad file", 
"scanline overrun");
					for (i = 0; i < count; ++i, dest += 4)
						if (!stbi__readval(s, packet->channel, dest))
							return 0;
				}
				left -= count;
			}
			break;
		}
			}
		}
	}

	return result;
}

static void *stbi__pic_load(stbi__context *s, int *px, int *py, int *comp, int req_comp, stbi__result_info *ri) {
	stbi_uc *result;
	int i, x, y, internal_comp;
	STBI_NOTUSED(ri);

	if (!comp)
		comp = &internal_comp;

	for (i = 0; i < 92; ++i)
		stbi__get8(s);

	x = stbi__get16be(s);
	y = stbi__get16be(s);
	if (stbi__at_eof(s))
		return stbi__errpuc("bad file", "file too short (pic header)");
	if (!stbi__mad3sizes_valid(x, y, 4, 0))
		return stbi__errpuc("too large", "PIC image too large to decode");

	stbi__get32be(s); //skip `ratio'
	stbi__get16be(s); //skip `fields'
	stbi__get16be(s); //skip `pad'

	// intermediate buffer is RGBA
	result = (stbi_uc *)stbi__malloc_mad3(x, y, 4, 0);
	if (!result) // guard against allocation failure before the memset below
		return stbi__errpuc("outofmem", "Out of memory");
	memset(result, 0xff, x * y * 4);

	if (!stbi__pic_load_core(s, x, y, comp, result)) {
		STBI_FREE(result);
		result = 0;
	}

	*px = x;
	*py = y;
	if (req_comp == 0)
		req_comp = *comp;
	result = stbi__convert_format(result, 4, req_comp, x, y);

	return result;
}

static int stbi__pic_test(stbi__context *s) {
	int r = stbi__pic_test_core(s);
	stbi__rewind(s);
	return r;
}
#endif

// *************************************************************************************************
// GIF loader -- public domain by Jean-Marc Lienher -- simplified/shrunk by stb
#ifndef STBI_NO_GIF

typedef struct {
	stbi__int16 prefix;
	stbi_uc first;
	stbi_uc suffix;
} stbi__gif_lzw;

typedef struct {
	int w, h;
	stbi_uc *out; // output buffer (always 4 components)
	stbi_uc *background; // The current "background" as far as a gif is concerned
	stbi_uc *history;
	int flags, bgindex, ratio, transparent, eflags;
	stbi_uc pal[256][4];
	stbi_uc lpal[256][4];
	stbi__gif_lzw codes[8192];
	stbi_uc *color_table;
	int parse, step;
	int lflags;
	int start_x, start_y;
	int max_x, max_y;
	int cur_x, cur_y;
	int line_size;
	int delay;
} stbi__gif;

static int stbi__gif_test_raw(stbi__context *s) {
	int sz;
	if (stbi__get8(s)
!= 'G' || stbi__get8(s) != 'I' || stbi__get8(s) != 'F' || stbi__get8(s) != '8') return 0; sz = stbi__get8(s); if (sz != '9' && sz != '7') return 0; if (stbi__get8(s) != 'a') return 0; return 1; } static int stbi__gif_test(stbi__context *s) { int r = stbi__gif_test_raw(s); stbi__rewind(s); return r; } static void stbi__gif_parse_colortable(stbi__context *s, stbi_uc pal[256][4], int num_entries, int transp) { int i; for (i = 0; i < num_entries; ++i) { pal[i][2] = stbi__get8(s); pal[i][1] = stbi__get8(s); pal[i][0] = stbi__get8(s); pal[i][3] = transp == i ? 0 : 255; } } static int stbi__gif_header(stbi__context *s, stbi__gif *g, int *comp, int is_info) { stbi_uc version; if (stbi__get8(s) != 'G' || stbi__get8(s) != 'I' || stbi__get8(s) != 'F' || stbi__get8(s) != '8') return stbi__err("not GIF", "Corrupt GIF"); version = stbi__get8(s); if (version != '7' && version != '9') return stbi__err("not GIF", "Corrupt GIF"); if (stbi__get8(s) != 'a') return stbi__err("not GIF", "Corrupt GIF"); stbi__g_failure_reason = ""; g->w = stbi__get16le(s); g->h = stbi__get16le(s); g->flags = stbi__get8(s); g->bgindex = stbi__get8(s); g->ratio = stbi__get8(s); g->transparent = -1; if (comp != 0) *comp = 4; // can't actually tell whether it's 3 or 4 until we parse the comments if (is_info) return 1; if (g->flags & 0x80) stbi__gif_parse_colortable(s, g->pal, 2 << (g->flags & 7), -1); return 1; } static int stbi__gif_info_raw(stbi__context *s, int *x, int *y, int *comp) { stbi__gif *g = (stbi__gif *)stbi__malloc(sizeof(stbi__gif)); if (!stbi__gif_header(s, g, comp, 1)) { STBI_FREE(g); stbi__rewind(s); return 0; } if (x) *x = g->w; if (y) *y = g->h; STBI_FREE(g); return 1; } static void stbi__out_gif_code(stbi__gif *g, stbi__uint16 code) { stbi_uc *p, *c; int idx; // recurse to decode the prefixes, since the linked-list is backwards, // and working backwards through an interleaved image would be nasty if (g->codes[code].prefix >= 0) stbi__out_gif_code(g, g->codes[code].prefix); if (g->cur_y 
>= g->max_y) return; idx = g->cur_x + g->cur_y; p = &g->out[idx]; g->history[idx / 4] = 1; c = &g->color_table[g->codes[code].suffix * 4]; if (c[3] > 128) { // don't render transparent pixels; p[0] = c[2]; p[1] = c[1]; p[2] = c[0]; p[3] = c[3]; } g->cur_x += 4; if (g->cur_x >= g->max_x) { g->cur_x = g->start_x; g->cur_y += g->step; while (g->cur_y >= g->max_y && g->parse > 0) { g->step = (1 << g->parse) * g->line_size; g->cur_y = g->start_y + (g->step >> 1); --g->parse; } } } static stbi_uc *stbi__process_gif_raster(stbi__context *s, stbi__gif *g) { stbi_uc lzw_cs; stbi__int32 len, init_code; stbi__uint32 first; stbi__int32 codesize, codemask, avail, oldcode, bits, valid_bits, clear; stbi__gif_lzw *p; lzw_cs = stbi__get8(s); if (lzw_cs > 12) return NULL; clear = 1 << lzw_cs; first = 1; codesize = lzw_cs + 1; codemask = (1 << codesize) - 1; bits = 0; valid_bits = 0; for (init_code = 0; init_code < clear; init_code++) { g->codes[init_code].prefix = -1; g->codes[init_code].first = (stbi_uc)init_code; g->codes[init_code].suffix = (stbi_uc)init_code; } // support no starting clear code avail = clear + 2; oldcode = -1; len = 0; for (;;) { if (valid_bits < codesize) { if (len == 0) { len = stbi__get8(s); // start new block if (len == 0) return g->out; } --len; bits |= (stbi__int32)stbi__get8(s) << valid_bits; valid_bits += 8; } else { stbi__int32 code = bits & codemask; bits >>= codesize; valid_bits -= codesize; // @OPTIMIZE: is there some way we can accelerate the non-clear path? 
if (code == clear) { // clear code codesize = lzw_cs + 1; codemask = (1 << codesize) - 1; avail = clear + 2; oldcode = -1; first = 0; } else if (code == clear + 1) { // end of stream code stbi__skip(s, len); while ((len = stbi__get8(s)) > 0) stbi__skip(s, len); return g->out; } else if (code <= avail) { if (first) { return stbi__errpuc("no clear code", "Corrupt GIF"); } if (oldcode >= 0) { p = &g->codes[avail++]; if (avail > 8192) { return stbi__errpuc("too many codes", "Corrupt GIF"); } p->prefix = (stbi__int16)oldcode; p->first = g->codes[oldcode].first; p->suffix = (code == avail) ? p->first : g->codes[code].first; } else if (code == avail) return stbi__errpuc("illegal code in raster", "Corrupt GIF"); stbi__out_gif_code(g, (stbi__uint16)code); if ((avail & codemask) == 0 && avail <= 0x0FFF) { codesize++; codemask = (1 << codesize) - 1; } oldcode = code; } else { return stbi__errpuc("illegal code in raster", "Corrupt GIF"); } } } } // this function is designed to support animated gifs, although stb_image doesn't support it // two back is the image from two frames ago, used for a very specific disposal format static stbi_uc *stbi__gif_load_next(stbi__context *s, stbi__gif *g, int *comp, int req_comp, stbi_uc *two_back) { int dispose; int first_frame; int pi; int pcount; STBI_NOTUSED(req_comp); // on first frame, any non-written pixels get the background colour (non-transparent) first_frame = 0; if (g->out == 0) { if (!stbi__gif_header(s, g, comp, 0)) return 0; // stbi__g_failure_reason set by stbi__gif_header if (!stbi__mad3sizes_valid(4, g->w, g->h, 0)) return stbi__errpuc("too large", "GIF image is too large"); pcount = g->w * g->h; g->out = (stbi_uc *)stbi__malloc(4 * pcount); g->background = (stbi_uc *)stbi__malloc(4 * pcount); g->history = (stbi_uc *)stbi__malloc(pcount); if (!g->out || !g->background || !g->history) return stbi__errpuc("outofmem", "Out of memory"); // image is treated as "transparent" at the start - ie, nothing overwrites the current 
background;
		// background colour is only used for pixels that are not rendered first frame, after that "background"
		// color refers to the color that was there the previous frame.
		memset(g->out, 0x00, 4 * pcount);
		memset(g->background, 0x00, 4 * pcount); // state of the background (starts transparent)
		memset(g->history, 0x00, pcount); // pixels that were affected previous frame
		first_frame = 1;
	} else {
		// second frame - how do we dispose of the previous one?
		dispose = (g->eflags & 0x1C) >> 2;
		pcount = g->w * g->h;

		if ((dispose == 3) && (two_back == 0)) {
			dispose = 2; // if I don't have an image to revert back to, default to the old background
		}

		if (dispose == 3) { // use previous graphic
			for (pi = 0; pi < pcount; ++pi) {
				if (g->history[pi]) {
					memcpy(&g->out[pi * 4], &two_back[pi * 4], 4);
				}
			}
		} else if (dispose == 2) {
			// restore what was changed last frame to background before that frame;
			for (pi = 0; pi < pcount; ++pi) {
				if (g->history[pi]) {
					memcpy(&g->out[pi * 4], &g->background[pi * 4], 4);
				}
			}
		} else {
			// This is a non-disposal case either way, so just
			// leave the pixels as is, and they will become the new background
			// 1: do not dispose
			// 0: not specified.
		}

		// background is what out is after the undoing of the previous frame;
		memcpy(g->background, g->out, 4 * g->w * g->h);
	}

	// clear my history;
	memset(g->history, 0x00, g->w * g->h); // pixels that were affected previous frame

	for (;;) {
		int tag = stbi__get8(s);
		switch (tag) {
			case 0x2C: /* Image Descriptor */
			{
				stbi__int32 x, y, w, h;
				stbi_uc *o;

				x = stbi__get16le(s);
				y = stbi__get16le(s);
				w = stbi__get16le(s);
				h = stbi__get16le(s);
				if (((x + w) > (g->w)) || ((y + h) > (g->h)))
					return stbi__errpuc("bad Image Descriptor", "Corrupt GIF");

				g->line_size = g->w * 4;
				g->start_x = x * 4;
				g->start_y = y * g->line_size;
				g->max_x = g->start_x + w * 4;
				g->max_y = g->start_y + h * g->line_size;
				g->cur_x = g->start_x;
				g->cur_y = g->start_y;

				// if the width of the specified rectangle is 0, that means
				// we may not see *any* pixels or the image is malformed;
				// to make sure this is caught, move the current y down to
				// max_y (which is what out_gif_code checks).
				if (w == 0)
					g->cur_y = g->max_y;

				g->lflags = stbi__get8(s);

				if (g->lflags & 0x40) {
					g->step = 8 * g->line_size; // first interlaced spacing
					g->parse = 3;
				} else {
					g->step = g->line_size;
					g->parse = 0;
				}

				if (g->lflags & 0x80) {
					stbi__gif_parse_colortable(s, g->lpal, 2 << (g->lflags & 7), g->eflags & 0x01 ? g->transparent : -1);
					g->color_table = (stbi_uc *)g->lpal;
				} else if (g->flags & 0x80) {
					g->color_table = (stbi_uc *)g->pal;
				} else
					return stbi__errpuc("missing color table", "Corrupt GIF");

				o = stbi__process_gif_raster(s, g);
				if (!o)
					return NULL;

				// if this was the first frame,
				pcount = g->w * g->h;
				if (first_frame && (g->bgindex > 0)) {
					// if first frame, any pixel not drawn to gets the background color
					for (pi = 0; pi < pcount; ++pi) {
						if (g->history[pi] == 0) {
							g->pal[g->bgindex][3] = 255; // just in case it was made transparent, undo that; It will be reset next frame if need be;
							memcpy(&g->out[pi * 4], &g->pal[g->bgindex], 4);
						}
					}
				}

				return o;
			}

			case 0x21: // Comment Extension.
{ int len; int ext = stbi__get8(s); if (ext == 0xF9) { // Graphic Control Extension. len = stbi__get8(s); if (len == 4) { g->eflags = stbi__get8(s); g->delay = 10 * stbi__get16le(s); // delay - 1/100th of a second, saving as 1/1000ths. // unset old transparent if (g->transparent >= 0) { g->pal[g->transparent][3] = 255; } if (g->eflags & 0x01) { g->transparent = stbi__get8(s); if (g->transparent >= 0) { g->pal[g->transparent][3] = 0; } } else { // don't need transparent stbi__skip(s, 1); g->transparent = -1; } } else { stbi__skip(s, len); break; } } while ((len = stbi__get8(s)) != 0) { stbi__skip(s, len); } break; } case 0x3B: // gif stream termination code return (stbi_uc *)s; // using '1' causes warning on some compilers default: return stbi__errpuc("unknown code", "Corrupt GIF"); } } } static void *stbi__load_gif_main(stbi__context *s, int **delays, int *x, int *y, int *z, int *comp, int req_comp) { if (stbi__gif_test(s)) { int layers = 0; stbi_uc *u = 0; stbi_uc *out = 0; stbi_uc *two_back = 0; stbi__gif g; int stride; memset(&g, 0, sizeof(g)); if (delays) { *delays = 0; } do { u = stbi__gif_load_next(s, &g, comp, req_comp, two_back); if (u == (stbi_uc *)s) u = 0; // end of animated gif marker if (u) { *x = g.w; *y = g.h; ++layers; stride = g.w * g.h * 4; if (out) { out = (stbi_uc *)STBI_REALLOC(out, layers * stride); if (delays) { *delays = (int *)STBI_REALLOC(*delays, sizeof(int) * layers); } } else { out = (stbi_uc *)stbi__malloc(layers * stride); if (delays) { *delays = (int *)stbi__malloc(layers * sizeof(int)); } } memcpy(out + ((layers - 1) * stride), u, stride); if (layers >= 2) { two_back = out - 2 * stride; } if (delays) { (*delays)[layers - 1U] = g.delay; } } } while (u != 0); // free temp buffer; STBI_FREE(g.out); STBI_FREE(g.history); STBI_FREE(g.background); // do the final conversion after loading everything; if (req_comp && req_comp != 4) out = stbi__convert_format(out, 4, req_comp, layers * g.w, g.h); *z = layers; return out; } else { return 
stbi__errpuc("not GIF", "Image was not a GIF type.");
	}
}

static void *stbi__gif_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri) {
	stbi_uc *u = 0;
	stbi__gif g;
	memset(&g, 0, sizeof(g));
	STBI_NOTUSED(ri);

	u = stbi__gif_load_next(s, &g, comp, req_comp, 0);
	if (u == (stbi_uc *)s)
		u = 0; // end of animated gif marker
	if (u) {
		*x = g.w;
		*y = g.h;

		// moved conversion to after successful load so that the same
		// can be done for multiple frames.
		if (req_comp && req_comp != 4)
			u = stbi__convert_format(u, 4, req_comp, g.w, g.h);
	} else if (g.out) {
		// if there was an error and we allocated an image buffer, free it!
		STBI_FREE(g.out);
	}

	// free buffers needed for multiple frame loading;
	STBI_FREE(g.history);
	STBI_FREE(g.background);

	return u;
}

static int stbi__gif_info(stbi__context *s, int *x, int *y, int *comp) {
	return stbi__gif_info_raw(s, x, y, comp);
}
#endif

// *************************************************************************************************
// Radiance RGBE HDR loader
// originally by Nicolas Schulz
#ifndef STBI_NO_HDR
static int stbi__hdr_test_core(stbi__context *s, const char *signature) {
	int i;
	for (i = 0; signature[i]; ++i)
		if (stbi__get8(s) != signature[i])
			return 0;
	stbi__rewind(s);
	return 1;
}

static int stbi__hdr_test(stbi__context *s) {
	int r = stbi__hdr_test_core(s, "#?RADIANCE\n");
	stbi__rewind(s);
	if (!r) {
		r = stbi__hdr_test_core(s, "#?RGBE\n");
		stbi__rewind(s);
	}
	return r;
}

#define STBI__HDR_BUFLEN 1024
static char *stbi__hdr_gettoken(stbi__context *z, char *buffer) {
	int len = 0;
	char c = '\0';

	c = (char)stbi__get8(z);

	while (!stbi__at_eof(z) && c != '\n') {
		buffer[len++] = c;
		if (len == STBI__HDR_BUFLEN - 1) {
			// flush to end of line
			while (!stbi__at_eof(z) && stbi__get8(z) != '\n')
				;
			break;
		}
		c = (char)stbi__get8(z);
	}

	buffer[len] = 0;
	return buffer;
}

static void stbi__hdr_convert(float *output, stbi_uc *input, int req_comp) {
	if (input[3] != 0) {
		float f1;
		// Exponent
		f1 =
(float)ldexp(1.0f, input[3] - (int)(128 + 8)); if (req_comp <= 2) output[0] = (input[0] + input[1] + input[2]) * f1 / 3; else { output[0] = input[0] * f1; output[1] = input[1] * f1; output[2] = input[2] * f1; } if (req_comp == 2) output[1] = 1; if (req_comp == 4) output[3] = 1; } else { switch (req_comp) { case 4: output[3] = 1; /* fallthrough */ case 3: output[0] = output[1] = output[2] = 0; break; case 2: output[1] = 1; /* fallthrough */ case 1: output[0] = 0; break; } } } static float *stbi__hdr_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri) { char buffer[STBI__HDR_BUFLEN]; char *token; int valid = 0; int width, height; stbi_uc *scanline; float *hdr_data; int len; unsigned char count, value; int i, j, k, c1, c2, z; const char *headerToken; STBI_NOTUSED(ri); // Check identifier headerToken = stbi__hdr_gettoken(s, buffer); if (strcmp(headerToken, "#?RADIANCE") != 0 && strcmp(headerToken, "#?RGBE") != 0) return stbi__errpf("not HDR", "Corrupt HDR image"); // Parse header for (;;) { token = stbi__hdr_gettoken(s, buffer); if (token[0] == 0) break; if (strcmp(token, "FORMAT=32-bit_rle_rgbe") == 0) valid = 1; } if (!valid) return stbi__errpf("unsupported format", "Unsupported HDR format"); // Parse width and height // can't use sscanf() if we're not using stdio! 
token = stbi__hdr_gettoken(s, buffer);
	if (strncmp(token, "-Y ", 3))
		return stbi__errpf("unsupported data layout", "Unsupported HDR format");
	token += 3;
	height = (int)strtol(token, &token, 10);
	while (*token == ' ')
		++token;
	if (strncmp(token, "+X ", 3))
		return stbi__errpf("unsupported data layout", "Unsupported HDR format");
	token += 3;
	width = (int)strtol(token, NULL, 10);

	*x = width;
	*y = height;

	if (comp)
		*comp = 3;
	if (req_comp == 0)
		req_comp = 3;

	if (!stbi__mad4sizes_valid(width, height, req_comp, sizeof(float), 0))
		return stbi__errpf("too large", "HDR image is too large");

	// Read data
	hdr_data = (float *)stbi__malloc_mad4(width, height, req_comp, sizeof(float), 0);
	if (!hdr_data)
		return stbi__errpf("outofmem", "Out of memory");

	// Load image data
	// image data is stored as some number of scan lines
	if (width < 8 || width >= 32768) {
		// Read flat data
		for (j = 0; j < height; ++j) {
			for (i = 0; i < width; ++i) {
				stbi_uc rgbe[4];
			main_decode_loop:
				stbi__getn(s, rgbe, 4);
				stbi__hdr_convert(hdr_data + j * width * req_comp + i * req_comp, rgbe, req_comp);
			}
		}
	} else {
		// Read RLE-encoded data
		scanline = NULL;

		for (j = 0; j < height; ++j) {
			c1 = stbi__get8(s);
			c2 = stbi__get8(s);
			len = stbi__get8(s);
			if (c1 != 2 || c2 != 2 || (len & 0x80)) {
				// not run-length encoded, so we have to actually use THIS data as a decoded
				// pixel (note this can't be a valid pixel--one of RGB must be >= 128)
				stbi_uc rgbe[4];
				rgbe[0] = (stbi_uc)c1;
				rgbe[1] = (stbi_uc)c2;
				rgbe[2] = (stbi_uc)len;
				rgbe[3] = (stbi_uc)stbi__get8(s);
				stbi__hdr_convert(hdr_data, rgbe, req_comp);
				i = 1;
				j = 0;
				STBI_FREE(scanline);
				goto main_decode_loop; // yes, this makes no sense
			}
			len <<= 8;
			len |= stbi__get8(s);
			if (len != width) {
				STBI_FREE(hdr_data);
				STBI_FREE(scanline);
				return stbi__errpf("invalid decoded scanline length", "corrupt HDR");
			}
			if (scanline == NULL) {
				scanline = (stbi_uc *)stbi__malloc_mad2(width, 4, 0);
				if (!scanline) {
					STBI_FREE(hdr_data);
					return stbi__errpf("outofmem", "Out of memory");
				}
			}
			for
(k = 0; k < 4; ++k) { int nleft; i = 0; while ((nleft = width - i) > 0) { count = stbi__get8(s); if (count > 128) { // Run value = stbi__get8(s); count -= 128; if (count > nleft) { STBI_FREE(hdr_data); STBI_FREE(scanline); return stbi__errpf("corrupt", "bad RLE data in HDR"); } for (z = 0; z < count; ++z) scanline[i++ * 4 + k] = value; } else { // Dump if (count > nleft) { STBI_FREE(hdr_data); STBI_FREE(scanline); return stbi__errpf("corrupt", "bad RLE data in HDR"); } for (z = 0; z < count; ++z) scanline[i++ * 4 + k] = stbi__get8(s); } } } for (i = 0; i < width; ++i) stbi__hdr_convert(hdr_data + (j * width + i) * req_comp, scanline + i * 4, req_comp); } if (scanline) STBI_FREE(scanline); } return hdr_data; } static int stbi__hdr_info(stbi__context *s, int *x, int *y, int *comp) { char buffer[STBI__HDR_BUFLEN]; char *token; int valid = 0; int dummy; if (!x) x = &dummy; if (!y) y = &dummy; if (!comp) comp = &dummy; if (stbi__hdr_test(s) == 0) { stbi__rewind(s); return 0; } for (;;) { token = stbi__hdr_gettoken(s, buffer); if (token[0] == 0) break; if (strcmp(token, "FORMAT=32-bit_rle_rgbe") == 0) valid = 1; } if (!valid) { stbi__rewind(s); return 0; } token = stbi__hdr_gettoken(s, buffer); if (strncmp(token, "-Y ", 3)) { stbi__rewind(s); return 0; } token += 3; *y = (int)strtol(token, &token, 10); while (*token == ' ') ++token; if (strncmp(token, "+X ", 3)) { stbi__rewind(s); return 0; } token += 3; *x = (int)strtol(token, NULL, 10); *comp = 3; return 1; } #endif // STBI_NO_HDR #ifndef STBI_NO_BMP static int stbi__bmp_info(stbi__context *s, int *x, int *y, int *comp) { void *p; stbi__bmp_data info; info.all_a = 255; p = stbi__bmp_parse_header(s, &info); stbi__rewind(s); if (p == NULL) return 0; if (x) *x = s->img_x; if (y) *y = s->img_y; if (comp) *comp = info.ma ? 
4 : 3; return 1; } #endif #ifndef STBI_NO_PSD static int stbi__psd_info(stbi__context *s, int *x, int *y, int *comp) { int channelCount, dummy, depth; if (!x) x = &dummy; if (!y) y = &dummy; if (!comp) comp = &dummy; if (stbi__get32be(s) != 0x38425053) { stbi__rewind(s); return 0; } if (stbi__get16be(s) != 1) { stbi__rewind(s); return 0; } stbi__skip(s, 6); channelCount = stbi__get16be(s); if (channelCount < 0 || channelCount > 16) { stbi__rewind(s); return 0; } *y = stbi__get32be(s); *x = stbi__get32be(s); depth = stbi__get16be(s); if (depth != 8 && depth != 16) { stbi__rewind(s); return 0; } if (stbi__get16be(s) != 3) { stbi__rewind(s); return 0; } *comp = 4; return 1; } static int stbi__psd_is16(stbi__context *s) { int channelCount, depth; if (stbi__get32be(s) != 0x38425053) { stbi__rewind(s); return 0; } if (stbi__get16be(s) != 1) { stbi__rewind(s); return 0; } stbi__skip(s, 6); channelCount = stbi__get16be(s); if (channelCount < 0 || channelCount > 16) { stbi__rewind(s); return 0; } (void)stbi__get32be(s); (void)stbi__get32be(s); depth = stbi__get16be(s); if (depth != 16) { stbi__rewind(s); return 0; } return 1; } #endif #ifndef STBI_NO_PIC static int stbi__pic_info(stbi__context *s, int *x, int *y, int *comp) { int act_comp = 0, num_packets = 0, chained, dummy; stbi__pic_packet packets[10]; if (!x) x = &dummy; if (!y) y = &dummy; if (!comp) comp = &dummy; if (!stbi__pic_is4(s, "\x53\x80\xF6\x34")) { stbi__rewind(s); return 0; } stbi__skip(s, 88); *x = stbi__get16be(s); *y = stbi__get16be(s); if (stbi__at_eof(s)) { stbi__rewind(s); return 0; } if ((*x) != 0 && (1 << 28) / (*x) < (*y)) { stbi__rewind(s); return 0; } stbi__skip(s, 8); do { stbi__pic_packet *packet; if (num_packets == sizeof(packets) / sizeof(packets[0])) return 0; packet = &packets[num_packets++]; chained = stbi__get8(s); packet->size = stbi__get8(s); packet->type = stbi__get8(s); packet->channel = stbi__get8(s); act_comp |= packet->channel; if (stbi__at_eof(s)) { stbi__rewind(s); return 0; } if 
(packet->size != 8) { stbi__rewind(s); return 0; } } while (chained); *comp = (act_comp & 0x10 ? 4 : 3); return 1; } #endif // ************************************************************************************************* // Portable Gray Map and Portable Pixel Map loader // by Ken Miller // // PGM: http://netpbm.sourceforge.net/doc/pgm.html // PPM: http://netpbm.sourceforge.net/doc/ppm.html // // Known limitations: // Does not support comments in the header section // Does not support ASCII image data (formats P2 and P3) // Does not support 16-bit-per-channel #ifndef STBI_NO_PNM static int stbi__pnm_test(stbi__context *s) { char p, t; p = (char)stbi__get8(s); t = (char)stbi__get8(s); if (p != 'P' || (t != '5' && t != '6')) { stbi__rewind(s); return 0; } return 1; } static void *stbi__pnm_load(stbi__context *s, int *x, int *y, int *comp, int req_comp, stbi__result_info *ri) { stbi_uc *out; STBI_NOTUSED(ri); if (!stbi__pnm_info(s, (int *)&s->img_x, (int *)&s->img_y, (int *)&s->img_n)) return 0; *x = s->img_x; *y = s->img_y; if (comp) *comp = s->img_n; if (!stbi__mad3sizes_valid(s->img_n, s->img_x, s->img_y, 0)) return stbi__errpuc("too large", "PNM too large"); out = (stbi_uc *)stbi__malloc_mad3(s->img_n, s->img_x, s->img_y, 0); if (!out) return stbi__errpuc("outofmem", "Out of memory"); stbi__getn(s, out, s->img_n * s->img_x * s->img_y); if (req_comp && req_comp != s->img_n) { out = stbi__convert_format(out, s->img_n, req_comp, s->img_x, s->img_y); if (out == NULL) return out; // stbi__convert_format frees input on failure } return out; } static int stbi__pnm_isspace(char c) { return c == ' ' || c == '\t' || c == '\n' || c == '\v' || c == '\f' || c == '\r'; } static void stbi__pnm_skip_whitespace(stbi__context *s, char *c) { for (;;) { while (!stbi__at_eof(s) && stbi__pnm_isspace(*c)) *c = (char)stbi__get8(s); if (stbi__at_eof(s) || *c != '#') break; while (!stbi__at_eof(s) && *c != '\n' && *c != '\r') *c = (char)stbi__get8(s); } } static int 
stbi__pnm_isdigit(char c) { return c >= '0' && c <= '9'; } static int stbi__pnm_getinteger(stbi__context *s, char *c) { int value = 0; while (!stbi__at_eof(s) && stbi__pnm_isdigit(*c)) { value = value * 10 + (*c - '0'); *c = (char)stbi__get8(s); } return value; } static int stbi__pnm_info(stbi__context *s, int *x, int *y, int *comp) { int maxv, dummy; char c, p, t; if (!x) x = &dummy; if (!y) y = &dummy; if (!comp) comp = &dummy; stbi__rewind(s); // Get identifier p = (char)stbi__get8(s); t = (char)stbi__get8(s); if (p != 'P' || (t != '5' && t != '6')) { stbi__rewind(s); return 0; } *comp = (t == '6') ? 3 : 1; // '5' is 1-component .pgm; '6' is 3-component .ppm c = (char)stbi__get8(s); stbi__pnm_skip_whitespace(s, &c); *x = stbi__pnm_getinteger(s, &c); // read width stbi__pnm_skip_whitespace(s, &c); *y = stbi__pnm_getinteger(s, &c); // read height stbi__pnm_skip_whitespace(s, &c); maxv = stbi__pnm_getinteger(s, &c); // read max value if (maxv > 255) return stbi__err("max value > 255", "PPM image not 8-bit"); else return 1; } #endif static int stbi__info_main(stbi__context *s, int *x, int *y, int *comp) { #ifndef STBI_NO_JPEG if (stbi__jpeg_info(s, x, y, comp)) return 1; #endif #ifndef STBI_NO_PNG if (stbi__png_info(s, x, y, comp)) return 1; #endif #ifndef STBI_NO_GIF if (stbi__gif_info(s, x, y, comp)) return 1; #endif #ifndef STBI_NO_BMP if (stbi__bmp_info(s, x, y, comp)) return 1; #endif #ifndef STBI_NO_PSD if (stbi__psd_info(s, x, y, comp)) return 1; #endif #ifndef STBI_NO_PIC if (stbi__pic_info(s, x, y, comp)) return 1; #endif #ifndef STBI_NO_PNM if (stbi__pnm_info(s, x, y, comp)) return 1; #endif #ifndef STBI_NO_HDR if (stbi__hdr_info(s, x, y, comp)) return 1; #endif // test tga last because it's a crappy test! 
#ifndef STBI_NO_TGA if (stbi__tga_info(s, x, y, comp)) return 1; #endif return stbi__err("unknown image type", "Image not of any known type, or corrupt"); } static int stbi__is_16_main(stbi__context *s) { #ifndef STBI_NO_PNG if (stbi__png_is16(s)) return 1; #endif #ifndef STBI_NO_PSD if (stbi__psd_is16(s)) return 1; #endif return 0; } #ifndef STBI_NO_STDIO STBIDEF int stbi_info(char const *filename, int *x, int *y, int *comp) { FILE *f = stbi__fopen(filename, "rb"); int result; if (!f) return stbi__err("can't fopen", "Unable to open file"); result = stbi_info_from_file(f, x, y, comp); fclose(f); return result; } STBIDEF int stbi_info_from_file(FILE *f, int *x, int *y, int *comp) { int r; stbi__context s; long pos = ftell(f); stbi__start_file(&s, f); r = stbi__info_main(&s, x, y, comp); fseek(f, pos, SEEK_SET); return r; } STBIDEF int stbi_is_16_bit(char const *filename) { FILE *f = stbi__fopen(filename, "rb"); int result; if (!f) return stbi__err("can't fopen", "Unable to open file"); result = stbi_is_16_bit_from_file(f); fclose(f); return result; } STBIDEF int stbi_is_16_bit_from_file(FILE *f) { int r; stbi__context s; long pos = ftell(f); stbi__start_file(&s, f); r = stbi__is_16_main(&s); fseek(f, pos, SEEK_SET); return r; } #endif // !STBI_NO_STDIO STBIDEF int stbi_info_from_memory(stbi_uc const *buffer, int len, int *x, int *y, int *comp) { stbi__context s; stbi__start_mem(&s, buffer, len); return stbi__info_main(&s, x, y, comp); } STBIDEF int stbi_info_from_callbacks(stbi_io_callbacks const *c, void *user, int *x, int *y, int *comp) { stbi__context s; stbi__start_callbacks(&s, (stbi_io_callbacks *)c, user); return stbi__info_main(&s, x, y, comp); } STBIDEF int stbi_is_16_bit_from_memory(stbi_uc const *buffer, int len) { stbi__context s; stbi__start_mem(&s, buffer, len); return stbi__is_16_main(&s); } STBIDEF int stbi_is_16_bit_from_callbacks(stbi_io_callbacks const *c, void *user) { stbi__context s; stbi__start_callbacks(&s, (stbi_io_callbacks *)c, user); 
return stbi__is_16_main(&s);
}
#endif // STB_IMAGE_IMPLEMENTATION

/*
   revision history:
      2.20  (2019-02-07) support utf8 filenames in Windows; fix warnings and platform ifdefs
      2.19  (2018-02-11) fix warning
      2.18  (2018-01-30) fix warnings
      2.17  (2018-01-29) change stbi__shiftsigned to avoid clang -O2 bug
                         1-bit BMP
                         *_is_16_bit api
                         avoid warnings
      2.16  (2017-07-23) all functions have 16-bit variants;
                         STBI_NO_STDIO works again;
                         compilation fixes;
                         fix rounding in unpremultiply;
                         optimize vertical flip;
                         disable raw_len validation;
                         documentation fixes
      2.15  (2017-03-18) fix png-1,2,4 bug; now all Imagenet JPGs decode;
                         warning fixes; disable run-time SSE detection on gcc;
                         uniform handling of optional "return" values;
                         thread-safe initialization of zlib tables
      2.14  (2017-03-03) remove deprecated STBI_JPEG_OLD; fixes for Imagenet JPGs
      2.13  (2016-11-29) add 16-bit API, only supported for PNG right now
      2.12  (2016-04-02) fix typo in 2.11 PSD fix that caused crashes
      2.11  (2016-04-02) allocate large structures on the stack
                         remove white matting for transparent PSD
                         fix reported channel count for PNG & BMP
                         re-enable SSE2 in non-gcc 64-bit
                         support RGB-formatted JPEG
                         read 16-bit PNGs (only as 8-bit)
      2.10  (2016-01-22) avoid warning introduced in 2.09 by STBI_REALLOC_SIZED
      2.09  (2016-01-16) allow comments in PNM files
                         16-bit-per-pixel TGA (not bit-per-component)
                         info() for TGA could break due to .hdr handling
                         info() for BMP to share code instead of sloppy parse
                         can use STBI_REALLOC_SIZED if allocator doesn't support realloc
                         code cleanup
      2.08  (2015-09-13) fix to 2.07 cleanup, reading RGB PSD as RGBA
      2.07  (2015-09-13) fix compiler warnings
                         partial animated GIF support
                         limited 16-bpc PSD support
                         #ifdef unused functions
                         bug with < 92 byte PIC,PNM,HDR,TGA
      2.06  (2015-04-19) fix bug where PSD returns wrong '*comp' value
      2.05  (2015-04-19) fix bug in progressive JPEG handling, fix warning
      2.04  (2015-04-15) try to re-enable SIMD on MinGW 64-bit
      2.03  (2015-04-12) extra corruption checking (mmozeiko)
                         stbi_set_flip_vertically_on_load (nguillemot)
                         fix NEON support; fix mingw support
      2.02  (2015-01-19) fix incorrect assert, fix warning
      2.01  (2015-01-17) fix various warnings; suppress SIMD on gcc 32-bit without -msse2
      2.00b (2014-12-25) fix STBI_MALLOC in progressive JPEG
      2.00  (2014-12-25) optimize JPG, including x86 SSE2 & NEON SIMD (ryg)
                         progressive JPEG (stb)
                         PGM/PPM support (Ken Miller)
                         STBI_MALLOC,STBI_REALLOC,STBI_FREE
                         GIF bugfix -- seemingly never worked
                         STBI_NO_*, STBI_ONLY_*
      1.48  (2014-12-14) fix incorrectly-named assert()
      1.47  (2014-12-14) 1/2/4-bit PNG support, both direct and paletted (Omar Cornut & stb)
                         optimize PNG (ryg)
                         fix bug in interlaced PNG with user-specified channel count (stb)
      1.46  (2014-08-26) fix broken tRNS chunk (colorkey-style transparency) in non-paletted PNG
      1.45  (2014-08-16) fix MSVC-ARM internal compiler error by wrapping malloc
      1.44  (2014-08-07) various warning fixes from Ronny Chevalier
      1.43  (2014-07-15) fix MSVC-only compiler problem in code changed in 1.42
      1.42  (2014-07-09) don't define _CRT_SECURE_NO_WARNINGS (affects user code)
                         fixes to stbi__cleanup_jpeg path
                         added STBI_ASSERT to avoid requiring assert.h
      1.41  (2014-06-25) fix search&replace from 1.36 that messed up comments/error messages
      1.40  (2014-06-22) fix gcc struct-initialization warning
      1.39  (2014-06-15) fix to TGA optimization when req_comp != number of components in TGA;
                         fix to GIF loading because BMP wasn't rewinding (whoops, no GIFs in my test suite)
                         add support for BMP version 5 (more ignored fields)
      1.38  (2014-06-06) suppress MSVC warnings on integer casts truncating values
                         fix accidental rename of 'skip' field of I/O
      1.37  (2014-06-04) remove duplicate typedef
      1.36  (2014-06-03) convert to header file single-file library
                         if de-iphone isn't set, load iphone images color-swapped instead of returning NULL
      1.35  (2014-05-27) various warnings
                         fix broken STBI_SIMD path
                         fix bug where stbi_load_from_file no longer left file pointer in correct place
                         fix broken non-easy path
                         for 32-bit BMP (possibly never used)
                         TGA optimization by Arseny Kapoulkine
      1.34  (unknown)    use STBI_NOTUSED in stbi__resample_row_generic(), fix one more leak in tga failure case
      1.33  (2011-07-14) make stbi_is_hdr work in STBI_NO_HDR (as specified), minor compiler-friendly improvements
      1.32  (2011-07-13) support for "info" function for all supported filetypes (SpartanJ)
      1.31  (2011-06-20) a few more leak fixes, bug in PNG handling (SpartanJ)
      1.30  (2011-06-11) added ability to load files via callbacks to accommodate custom input streams (Ben Wenger)
                         removed deprecated format-specific test/load functions
                         removed support for installable file formats (stbi_loader) -- would have been broken for IO callbacks anyway
                         error cases in bmp and tga give messages and don't leak (Raymond Barbiero, grisha)
                         fix inefficiency in decoding 32-bit BMP (David Woo)
      1.29  (2010-08-16) various warning fixes from Aurelien Pocheville
      1.28  (2010-08-01) fix bug in GIF palette transparency (SpartanJ)
      1.27  (2010-08-01) cast-to-stbi_uc to fix warnings
      1.26  (2010-07-24) fix bug in file buffering for PNG reported by SpartanJ
      1.25  (2010-07-17) refix trans_data warning (Won Chun)
      1.24  (2010-07-12) perf improvements reading from files on platforms with lock-heavy fgetc()
                         minor perf improvements for jpeg
                         deprecated type-specific functions so we'll get feedback if they're needed
                         attempt to fix trans_data warning (Won Chun)
      1.23               fixed bug in iPhone support
      1.22  (2010-07-10) removed image *writing* support
                         stbi_info support from Jetro Lauha
                         GIF support from Jean-Marc Lienher
                         iPhone PNG-extensions from James Brown
                         warning-fixes from Nicolas Schulz and Janez Zemva (i.e. Janez Žemva)
      1.21               fix use of 'stbi_uc' in header (reported by jon blow)
      1.20               added support for Softimage PIC, by Tom Seddon
      1.19               bug in interlaced PNG corruption check (found by ryg)
      1.18  (2008-08-02) fix a threading bug (local mutable static)
      1.17               support interlaced PNG
      1.16               major bugfix - stbi__convert_format converted one too many pixels
      1.15               initialize some fields for thread safety
      1.14               fix threadsafe conversion bug
                         header-file-only version (#define STBI_HEADER_FILE_ONLY before including)
      1.13               threadsafe
      1.12               const qualifiers in the API
      1.11               Support installable IDCT, colorspace conversion routines
      1.10               Fixes for 64-bit (don't use "unsigned long")
                         optimized upsampling by Fabian "ryg" Giesen
      1.09               Fix format-conversion for PSD code (bad global variables!)
      1.08               Thatcher Ulrich's PSD code integrated by Nicolas Schulz
      1.07               attempt to fix C++ warning/errors again
      1.06               attempt to fix C++ warning/errors again
      1.05               fix TGA loading to return correct *comp and use good luminance calc
      1.04               default float alpha is 1, not 255; use 'void *' for stbi_image_free
      1.03               bugfixes to STBI_NO_STDIO, STBI_NO_HDR
      1.02               support for (subset of) HDR files, float interface for preferred access to them
      1.01               fix bug: possible bug in handling right-side up bmps...
not sure fix bug: the stbi__bmp_load() and stbi__tga_load() functions didn't work at all 1.00 interface to zlib that skips zlib header 0.99 correct handling of alpha in palette 0.98 TGA loader by lonesock; dynamically add loaders (untested) 0.97 jpeg errors on too large a file; also catch another malloc failure 0.96 fix detection of invalid v value - particleman@mollyrocket forum 0.95 during header scan, seek to markers in case of padding 0.94 STBI_NO_STDIO to disable stdio usage; rename all #defines the same 0.93 handle jpegtran output; verbose errors 0.92 read 4,8,16,24,32-bit BMP files of several formats 0.91 output 24-bit Windows 3.0 BMP files 0.90 fix a few more warnings; bump version number to approach 1.0 0.61 bugfixes due to Marc LeBlanc, Christopher Lloyd 0.60 fix compiling as c++ 0.59 fix warnings: merge Dave Moore's -Wall fixes 0.58 fix bug: zlib uncompressed mode len/nlen was wrong endian 0.57 fix bug: jpg last huffman symbol before marker was >9 bits but less than 16 available 0.56 fix bug: zlib uncompressed mode len vs. nlen 0.55 fix bug: restart_interval not initialized to 0 0.54 allow NULL for 'int *comp' 0.53 fix bug in png 3->4; speedup png decoding 0.52 png handles req_comp=3,4 directly; minor cleanup; jpeg comments 0.51 obey req_comp requests, 1-component jpegs return as 1-component, on 'test' only check type, not whether we support this variant 0.50 (2006-11-19) first released version */ /* ------------------------------------------------------------------------------ This software is available under 2 licenses -- choose whichever you prefer. 
------------------------------------------------------------------------------ ALTERNATIVE A - MIT License Copyright (c) 2017 Sean Barrett Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ------------------------------------------------------------------------------ ALTERNATIVE B - Public Domain (www.unlicense.org) This is free and unencumbered software released into the public domain. Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means. In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
------------------------------------------------------------------------------
*/

// zytrax-master/gui/track_editor.cpp

#include "track_editor.h"

void TrackRackVolume::_mouse_button_event(GdkEventButton *event, bool p_press) {
	if (event->button == 1) {
		if (p_press && event->x >= grabber_x && event->x < grabber_x + grabber_w && event->y >= grabber_y && event->y < grabber_y + grabber_h) {
			grabbing_y = event->y;
			grabbing_db = song->get_track(track_idx)->get_mix_volume_db();
			grabbing = true;
			queue_draw();
		}
		if (!p_press) {
			grabbing = false;
			queue_draw();
		}
	}
}

bool TrackRackVolume::on_button_press_event(GdkEventButton *event) {
	grab_focus();
	_mouse_button_event(event, true);
	return false;
}

bool TrackRackVolume::on_button_release_event(GdkEventButton *release_event) {
	_mouse_button_event(release_event, false);
	return false;
}

bool TrackRackVolume::on_motion_notify_event(GdkEventMotion *motion_event) {
	if (grabbing) {
		float new_db = grabbing_db - (motion_event->y - grabbing_y) * (TRACK_MAX_DB - TRACK_MIN_DB) / vu_h;
		new_db = CLAMP(new_db, TRACK_MIN_DB, TRACK_MAX_DB);
		volume_db_changed.emit(track_idx, new_db);
		queue_draw();
	}
	return false;
}

bool TrackRackVolume::on_key_press_event(GdkEventKey *key_event) {
	return true;
}

bool TrackRackVolume::on_key_release_event(GdkEventKey *key_event) {
	return false;
}

Gtk::SizeRequestMode TrackRackVolume::get_request_mode_vfunc() const {
	// Accept the default value supplied by the base class.
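// Aside: the volume-drag math in _mouse_button_event()/on_motion_notify_event()
// above maps vertical pointer movement linearly onto the track's dB range. A
// minimal standalone sketch of that mapping (plain C++; the constants are
// assumed to match TRACK_MAX_DB = 12 and TRACK_MIN_DB = -60, and drag_to_db is
// a hypothetical helper, not part of this file):

```cpp
#include <algorithm>
#include <cassert>

constexpr float kMaxDb = 12.0f; // TRACK_MAX_DB in the widget
constexpr float kMinDb = -60.0f; // TRACK_MIN_DB in the widget

// Dragging down (positive dy) lowers the volume; the full meter height vu_h
// spans the whole dB range, mirroring the expression in on_motion_notify_event().
float drag_to_db(float grabbed_db, float dy_pixels, float vu_h) {
	float new_db = grabbed_db - dy_pixels * (kMaxDb - kMinDb) / vu_h;
	// CLAMP() in the original source
	return std::max(kMinDb, std::min(new_db, kMaxDb));
}
```

// So with a 120-pixel meter, dragging from the very top to the very bottom
// sweeps the volume across the whole 72 dB range, and any overshoot is pinned
// to the limits.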
return Gtk::Widget::get_request_mode_vfunc(); } // Discover the total amount of minimum space and natural space needed by // this widget. // Let's make this simple example widget always need minimum 60 by 50 and // natural 100 by 70. void TrackRackVolume::get_preferred_width_vfunc(int &minimum_width, int &natural_width) const { minimum_width = min_width; natural_width = min_width; } void TrackRackVolume::get_preferred_height_for_width_vfunc( int /* width */, int &minimum_height, int &natural_height) const { minimum_height = min_height; natural_height = min_height; } void TrackRackVolume::get_preferred_height_vfunc(int &minimum_height, int &natural_height) const { minimum_height = min_height; natural_height = min_height; } void TrackRackVolume::get_preferred_width_for_height_vfunc( int /* height */, int &minimum_width, int &natural_width) const { minimum_width = min_width; natural_width = min_width; } void TrackRackVolume::on_size_allocate(Gtk::Allocation &allocation) { // Do something with the space that we have actually been given: //(We will not be given heights or widths less than we have requested, though // we might get more) // Use the offered allocation for this container: set_allocation(allocation); if (m_refGdkWindow) { m_refGdkWindow->move_resize(allocation.get_x(), allocation.get_y(), allocation.get_width(), allocation.get_height()); } } void TrackRackVolume::on_map() { // Call base class: Gtk::Widget::on_map(); } void TrackRackVolume::on_unmap() { // Call base class: Gtk::Widget::on_unmap(); } void TrackRackVolume::on_realize() { // Do not call base class Gtk::Widget::on_realize(). // It's intended only for widgets that set_has_window(false). 
set_realized(); if (!m_refGdkWindow) { // Create the GdkWindow: GdkWindowAttr attributes; memset(&attributes, 0, sizeof(attributes)); Gtk::Allocation allocation = get_allocation(); // Set initial position and size of the Gdk::Window: attributes.x = allocation.get_x(); attributes.y = allocation.get_y(); attributes.width = allocation.get_width(); attributes.height = allocation.get_height(); attributes.event_mask = get_events() | Gdk::EXPOSURE_MASK | Gdk::BUTTON_PRESS_MASK | Gdk::BUTTON_RELEASE_MASK | Gdk::BUTTON1_MOTION_MASK | Gdk::KEY_PRESS_MASK | Gdk::KEY_RELEASE_MASK; attributes.window_type = GDK_WINDOW_CHILD; attributes.wclass = GDK_INPUT_OUTPUT; m_refGdkWindow = Gdk::Window::create(get_parent_window(), &attributes, GDK_WA_X | GDK_WA_Y); set_window(m_refGdkWindow); // make the widget receive expose events m_refGdkWindow->set_user_data(gobj()); } } void TrackRackVolume::on_unrealize() { m_refGdkWindow.reset(); // Call base class: Gtk::Widget::on_unrealize(); } void TrackRackVolume::_draw_text(const Cairo::RefPtr &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->move_to(x, y); if (p_down) cr->rotate_degrees(90); cr->show_text(p_text.utf8().get_data()); if (p_down) cr->rotate_degrees(-90); cr->move_to(0, 0); cr->stroke(); } int TrackRackVolume::_get_text_width(const Cairo::RefPtr &cr, const String &p_text) const { Cairo::TextExtents te; cr->get_text_extents(p_text.utf8().get_data(), te); return te.width; } void TrackRackVolume::_draw_fill_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->rectangle(x, y, w, h); cr->fill(); cr->stroke(); } void TrackRackVolume::_draw_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->rectangle(x, y, w, h); cr->stroke(); } void TrackRackVolume::_draw_arrow(const Cairo::RefPtr &cr, int x, int y, int 
w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->move_to(x + w / 4, y + h / 4); cr->line_to(x + w * 3 / 4, y + h / 4); cr->line_to(x + w / 2, y + h * 3 / 4); cr->line_to(x + w / 4, y + h / 4); cr->fill(); cr->stroke(); } bool TrackRackVolume::on_draw(const Cairo::RefPtr &cr) { const Gtk::Allocation allocation = get_allocation(); int w = allocation.get_width(); int h = allocation.get_height(); { //update min width theme->select_font_face(cr); Cairo::FontExtents fe; cr->get_font_extents(fe); Cairo::TextExtents te; cr->get_text_extents("XXX", te); int fw = te.width; cr->get_text_extents("XX", te); fw -= te.width; int new_width = fw * min_width_chars + fe.height + separator * 2; int new_height = fe.height; if (new_width != min_width || new_height != min_height) { min_width = new_width; min_height = new_height; char_width = fw; font_height = fe.height; font_ascent = fe.ascent; queue_resize(); /*Gtk::Widget *w = this; while (w) { w->queue_resize(); w = w->get_parent(); }*/ } } Gdk::Cairo::set_source_rgba(cr, theme->colors[Theme::COLOR_PATTERN_EDITOR_BG]); cr->rectangle(0, 0, font_height + separator, h); cr->fill(); _draw_text(cr, separator, separator, song->get_track(track_idx)->get_name(), selected ? 
theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR] : theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE], true); vu_x = font_height + separator; vu_y = font_height / 2; vu_w = char_width * min_width_chars; vu_h = h - vu_y * 2; Gdk::Cairo::set_source_rgba(cr, theme->colors[Theme::COLOR_BACKGROUND]); Gdk::Cairo::set_source_rgba(cr, Theme::make_rgba(0, 0, 0)); cr->rectangle(font_height + separator - 1, 0, w - (font_height + separator) + 1, h); cr->fill(); Gdk::RGBA rgba; rgba.set_alpha(1); cr->set_line_width(1); for (int i = 0; i < vu_h; i += 2) { float db = TRACK_MAX_DB - float(i) * (TRACK_MAX_DB - TRACK_MIN_DB) / vu_h; float r = 0, g = 0, b = 0; if (db > 0) { r = 1.0; g = 1.0 - db / TRACK_MAX_DB; } else { r = 1.0 - db / TRACK_MIN_DB; g = 1.0; } float lr = r; float lg = g; float lb = b; float rr = r; float rg = g; float rb = b; { if (db > peak_db_l) { lr *= 0.3; lg *= 0.3; lb *= 0.3; } if (db > peak_db_r) { rr *= 0.3; rg *= 0.3; rb *= 0.3; } int middle = vu_x + vu_w / 2; { rgba.set_red(lr); rgba.set_green(lg); rgba.set_blue(lb); Gdk::Cairo::set_source_rgba(cr, rgba); cr->move_to(vu_x, vu_y + i + 0.5); cr->line_to(middle - 1, vu_y + i + 0.5); cr->stroke(); } { rgba.set_red(rr); rgba.set_green(rg); rgba.set_blue(rb); Gdk::Cairo::set_source_rgba(cr, rgba); cr->move_to(middle + 1, vu_y + i + 0.5); cr->line_to(vu_x + vu_w, vu_y + i + 0.5); cr->stroke(); } } } //white line at 0db rgba.set_red(1); rgba.set_green(1); rgba.set_blue(1); rgba.set_alpha(0.5); Gdk::Cairo::set_source_rgba(cr, rgba); int db0 = TRACK_MAX_DB * float(vu_h) / float(TRACK_MAX_DB - TRACK_MIN_DB); cr->move_to(vu_x, vu_y + db0 + 0.5); cr->line_to(vu_x + vu_w, vu_y + db0 + 0.5); cr->move_to(vu_x, vu_y + db0 + 1.5); cr->line_to(vu_x + vu_w, vu_y + db0 + 1.5); cr->stroke(); //draw handle float track_db = song->get_track(track_idx)->get_mix_volume_db(); int db_handle = (TRACK_MAX_DB - track_db) * vu_h / float(TRACK_MAX_DB - TRACK_MIN_DB); rgba.set_red(1); rgba.set_green(0.5); rgba.set_blue(0.5); 
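// Aside: the VU rows drawn above use a simple two-segment gradient: yellow
// fading to red above 0 dB, yellow fading to green below it, with blue always
// zero. A standalone sketch of that per-row color computation (plain C++;
// meter_color and Rgb are hypothetical names, and the defaults are assumed to
// match TRACK_MAX_DB/TRACK_MIN_DB):

```cpp
#include <cassert>

struct Rgb {
	float r, g, b;
};

// Same branch structure as the row loop in TrackRackVolume::on_draw():
// above 0 dB, red is full and green falls off toward max_db; at or below
// 0 dB, green is full and red falls off toward min_db.
Rgb meter_color(float db, float max_db = 12.0f, float min_db = -60.0f) {
	Rgb c{ 0.0f, 0.0f, 0.0f };
	if (db > 0) {
		c.r = 1.0f;
		c.g = 1.0f - db / max_db;
	} else {
		c.r = 1.0f - db / min_db; // db/min_db approaches 1 as db -> min_db
		c.g = 1.0f;
	}
	return c;
}
```

// At 0 dB both channels are full (pure yellow), which is why the widget also
// overlays the translucent white 0 dB reference line.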
rgba.set_alpha(1.0); Gdk::Cairo::set_source_rgba(cr, rgba); cr->move_to(vu_x, vu_y + db_handle - 0.5); cr->line_to(vu_x + vu_w, vu_y + db_handle - 0.5); //cr->move_to(vu_x, vu_y + db_handle + 1.5); //cr->line_to(vu_x + vu_w, vu_y + db_handle + 1.5); cr->stroke(); rgba.set_red(1); rgba.set_green(1); rgba.set_blue(1); rgba.set_alpha(1.0); cr->set_line_width(2); _draw_rect(cr, vu_x + 0.5, db_handle + font_height / 4 + 0.5, vu_w - 1, font_height / 2, rgba); grabber_x = vu_x; grabber_y = db_handle; grabber_w = vu_w - 1; grabber_h = font_height; /*if (selected) { _draw_rect(cr, 1, 0, w + 1, h, theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR]); }*/ return false; } void TrackRackVolume::update_peak() { uint64_t current_time = g_get_monotonic_time(); double diff = double(current_time - last_time) / 1000000.0; last_time = current_time; { float current_peak_l = song->get_track(track_idx)->get_peak_volume_db_l(); float new_peak_l; if (current_peak_l > peak_db_l) { new_peak_l = current_peak_l; } else { //decrement new_peak_l = peak_db_l - 48 * diff; //24db per second? } if (new_peak_l < TRACK_MIN_DB) { //so it stops redrawing eventually; new_peak_l = TRACK_MIN_DB; } if (new_peak_l != peak_db_l) { peak_db_l = new_peak_l; queue_draw(); } } { float current_peak_r = song->get_track(track_idx)->get_peak_volume_db_r(); float new_peak_r; if (current_peak_r > peak_db_r) { new_peak_r = current_peak_r; } else { //decrement new_peak_r = peak_db_r - 48 * diff; //24db per second? 
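// Aside: update_peak() above implements a classic peak-hold-with-decay meter:
// a louder reading is shown immediately, otherwise the displayed peak falls at
// 48 dB per second (note the source comment guesses "24db per second?" while
// the constant is 48) and is floored at TRACK_MIN_DB so redrawing can stop.
// A standalone sketch (plain C++; decay_peak is a hypothetical helper):

```cpp
#include <algorithm>
#include <cassert>

constexpr float kMinDb = -60.0f; // TRACK_MIN_DB in the widget
constexpr float kDecayDbPerSec = 48.0f; // decay rate used in update_peak()

// dt_seconds is the elapsed time since the last update, measured with a
// monotonic clock in the original (g_get_monotonic_time()).
float decay_peak(float shown_peak_db, float current_db, double dt_seconds) {
	float next = (current_db > shown_peak_db)
			? current_db // jump up instantly to a louder reading
			: shown_peak_db - kDecayDbPerSec * float(dt_seconds);
	return std::max(next, kMinDb); // clamp so the meter settles at the floor
}
```

// Each channel (left/right) runs this independently, and the widget only calls
// queue_draw() when the value actually changes.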
} if (new_peak_r < TRACK_MIN_DB) { //so it stops redrawing eventually; new_peak_r = TRACK_MIN_DB; } if (new_peak_r != peak_db_r) { peak_db_r = new_peak_r; queue_draw(); } } } void TrackRackVolume::on_parsing_error( const Glib::RefPtr §ion, const Glib::Error &error) {} TrackRackVolume::TrackRackVolume(int p_track, Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme, KeyBindings *p_bindings) : // The GType name will actually be gtkmm__CustomObject_mywidget Glib::ObjectBase("track_editor"), Gtk::Widget() { // This shows the GType name, which must be used in the CSS file. // std::cout << "GType name: " << G_OBJECT_TYPE_NAME(gobj()) << std::endl; // This shows that the GType still derives from GtkWidget: // std::cout << "Gtype is a GtkWidget?:" << GTK_IS_WIDGET(gobj()) << // std::endl; track_idx = p_track; song = p_song; undo_redo = p_undo_redo; key_bindings = p_bindings; theme = p_theme; set_has_window(true); set_can_focus(true); set_focus_on_click(true); // Gives Exposure & Button presses to the widget. 
set_name("rack_volume"); min_width = 1; min_height = 1; min_width_chars = 3; min_height_lines = 10; char_width = 1; font_height = 1; font_ascent = 1; separator = 2; selected = false; grabbing = false; grabber_x = grabber_y = grabber_w = grabber_h = -1; last_time = 0; peak_db_l = -100; peak_db_r = -100; } TrackRackVolume::~TrackRackVolume() { } //////////////////////////////////////////////////////// //////////////////////////////////////////////////////// //////////////////////////////////////////////////////// //////////////////////////////////////////////////////// void TrackRackEditor::_mouse_button_event(GdkEventButton *event, bool p_press) { if (event->button == 1) { int new_mouse_over_area = -1; for (int i = 0; i < areas.size(); i++) { if (event->y >= areas[i].y && event->y < areas[i].y + areas[i].h) { new_mouse_over_area = i; break; } } if (p_press) { if (new_mouse_over_area == -1) { return; //nothing to do here } if (areas[new_mouse_over_area].insert) { //open dialog and insert if (areas[new_mouse_over_area].is_fx) { add_effect.emit(track_idx); } else { if (!send_menu) { send_menu = new Gtk::Menu; } for (int i = 0; i < available_tracks.size(); i++) { delete available_tracks[i]; } available_tracks.clear(); for (int i = 0; i < song->get_track_count(); i++) { String track_name = song->get_track(i)->get_name(); Gtk::MenuItem *item = new Gtk::MenuItem; item->show(); item->set_label(track_name.utf8().get_data()); bool can_add = track_idx != i; if (song->get_track(i)->has_send(i)) { can_add = false; } item->set_sensitive(can_add); item->signal_activate().connect(sigc::bind(sigc::mem_fun(*this, &TrackRackEditor::_insert_send_to_track), i)); available_tracks.push_back(item); send_menu->append(*item); } { //sep Gtk::MenuItem *sep = new Gtk::SeparatorMenuItem; sep->show(); send_menu->append(*sep); available_tracks.push_back(sep); //main Gtk::MenuItem *item = new Gtk::MenuItem; item->show(); item->set_label("Speakers"); bool can_add = 
!song->get_track(track_idx)->has_send(Track::SEND_SPEAKERS); item->set_sensitive(can_add); if (can_add) { item->signal_activate().connect(sigc::bind(sigc::mem_fun(*this, &TrackRackEditor::_insert_send_to_track), Track::SEND_SPEAKERS)); } available_tracks.push_back(item); send_menu->append(*item); } Gdk::Rectangle alloc; alloc.set_x(0); alloc.set_width(get_allocated_width()); alloc.set_y(areas[new_mouse_over_area].y); alloc.set_height(areas[new_mouse_over_area].h); send_menu->popup_at_rect(get_window(), alloc, Gdk::GRAVITY_NORTH_WEST, Gdk::GRAVITY_SOUTH_WEST, (GdkEvent *)event); } } else { pressing_area = new_mouse_over_area; press_y = event->y; queue_draw(); } } else { if (new_mouse_over_area < 0) { //none? } else if (pressing_area == new_mouse_over_area) { if (areas[pressing_area].is_fx) { effect_request_editor.emit(track_idx, areas[pressing_area].which); } else { Gdk::Rectangle alloc; alloc.set_x(0); alloc.set_width(get_allocated_width()); alloc.set_y(areas[pressing_area].y); alloc.set_height(areas[pressing_area].h); send_amount.set_size_request(alloc.get_width(), 1); send_amount.set_value(song->get_track(track_idx)->get_send_amount(areas[pressing_area].which) * 100); send_popover_index = areas[pressing_area].which; send_popover.set_pointing_to(alloc); send_popover.popup(); } //press } if (dragging && pressing_area != -1 && mouse_over_area != -1) { if (drag_fx == areas[mouse_over_area].is_fx) { int from = areas[pressing_area].which; int to = areas[mouse_over_area].which; if (from != to) { if (drag_fx) { track_swap_effects.emit(track_idx, from, to); } else { track_swap_sends.emit(track_idx, from, to); } } } } pressing_area = -1; dragging = false; drag_fx = false; dragging_y = -1; queue_draw(); } } else if (event->button == 3 && p_press) { int at_idx = -1; for (int i = 0; i < areas.size(); i++) { if (event->y >= areas[i].y && event->y < areas[i].y + areas[i].h) { at_idx = i; break; } } if (at_idx < 0) { return; } if (areas[at_idx].insert) { return; } menu_at_index 
= at_idx; if (areas[at_idx].is_fx) { bool skipped = song->get_track(track_idx)->get_audio_effect(areas[at_idx].which)->is_skipped(); _update_menu(skipped, true); } else { bool muted = song->get_track(track_idx)->is_send_muted(areas[at_idx].which); _update_menu(muted, false); } menu->popup_at_pointer((GdkEvent *)event); } } bool TrackRackEditor::on_button_press_event(GdkEventButton *event) { _mouse_button_event(event, true); return false; } bool TrackRackEditor::on_button_release_event(GdkEventButton *release_event) { _mouse_button_event(release_event, false); return false; } bool TrackRackEditor::on_leave_notify_event(GdkEventCrossing *crossing_event) { if (mouse_over_area != -1) { mouse_over_area = -1; queue_draw(); } return false; } bool TrackRackEditor::on_motion_notify_event(GdkEventMotion *motion_event) { int new_mouse_over_area = -1; for (int i = 0; i < areas.size(); i++) { if (motion_event->y >= areas[i].y && motion_event->y < areas[i].y + areas[i].h) { new_mouse_over_area = i; break; } } if (mouse_over_area != new_mouse_over_area) { mouse_over_area = new_mouse_over_area; queue_draw(); } if (pressing_area >= 0 && !dragging) { dragging = true; queue_draw(); } if (dragging) { dragging_y = motion_event->y; drag_fx = areas[pressing_area].is_fx; queue_draw(); } gdk_event_request_motions(motion_event); return false; } bool TrackRackEditor::on_key_press_event(GdkEventKey *key_event) { return true; } bool TrackRackEditor::on_key_release_event(GdkEventKey *key_event) { return false; } Gtk::SizeRequestMode TrackRackEditor::get_request_mode_vfunc() const { // Accept the default value supplied by the base class. return Gtk::Widget::get_request_mode_vfunc(); } // Discover the total amount of minimum space and natural space needed by // this widget. // Let's make this simple example widget always need minimum 60 by 50 and // natural 100 by 70. 
void TrackRackEditor::get_preferred_width_vfunc(int &minimum_width, int &natural_width) const { minimum_width = min_width; natural_width = min_width; } void TrackRackEditor::get_preferred_height_for_width_vfunc( int /* width */, int &minimum_height, int &natural_height) const { minimum_height = min_height; natural_height = min_height; } void TrackRackEditor::get_preferred_height_vfunc(int &minimum_height, int &natural_height) const { minimum_height = min_height; natural_height = min_height; } void TrackRackEditor::get_preferred_width_for_height_vfunc( int /* height */, int &minimum_width, int &natural_width) const { minimum_width = min_width; natural_width = min_width; } void TrackRackEditor::on_size_allocate(Gtk::Allocation &allocation) { // Do something with the space that we have actually been given: //(We will not be given heights or widths less than we have requested, though // we might get more) // Use the offered allocation for this container: set_allocation(allocation); if (m_refGdkWindow) { m_refGdkWindow->move_resize(allocation.get_x(), allocation.get_y(), allocation.get_width(), allocation.get_height()); } } void TrackRackEditor::on_map() { // Call base class: Gtk::Widget::on_map(); } void TrackRackEditor::on_unmap() { // Call base class: Gtk::Widget::on_unmap(); } void TrackRackEditor::on_realize() { // Do not call base class Gtk::Widget::on_realize(). // It's intended only for widgets that set_has_window(false). 
set_realized(); if (!m_refGdkWindow) { // Create the GdkWindow: GdkWindowAttr attributes; memset(&attributes, 0, sizeof(attributes)); Gtk::Allocation allocation = get_allocation(); // Set initial position and size of the Gdk::Window: attributes.x = allocation.get_x(); attributes.y = allocation.get_y(); attributes.width = allocation.get_width(); attributes.height = allocation.get_height(); attributes.event_mask = get_events() | Gdk::EXPOSURE_MASK | Gdk::POINTER_MOTION_MASK | Gdk::LEAVE_NOTIFY_MASK | Gdk::BUTTON_PRESS_MASK | Gdk::BUTTON_RELEASE_MASK | Gdk::BUTTON1_MOTION_MASK | Gdk::KEY_PRESS_MASK | Gdk::KEY_RELEASE_MASK; attributes.window_type = GDK_WINDOW_CHILD; attributes.wclass = GDK_INPUT_OUTPUT; m_refGdkWindow = Gdk::Window::create(get_parent_window(), &attributes, GDK_WA_X | GDK_WA_Y); set_window(m_refGdkWindow); // make the widget receive expose events m_refGdkWindow->set_user_data(gobj()); } } void TrackRackEditor::on_unrealize() { m_refGdkWindow.reset(); // Call base class: Gtk::Widget::on_unrealize(); } void TrackRackEditor::_draw_text(const Cairo::RefPtr &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->move_to(x, y); if (p_down) cr->rotate_degrees(90); cr->show_text(p_text.utf8().get_data()); if (p_down) cr->rotate_degrees(-90); cr->move_to(0, 0); cr->stroke(); } int TrackRackEditor::_get_text_width(const Cairo::RefPtr &cr, const String &p_text) const { Cairo::TextExtents te; cr->get_text_extents(p_text.utf8().get_data(), te); return te.width; } void TrackRackEditor::_draw_fill_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->rectangle(x, y, w, h); cr->fill(); cr->stroke(); } void TrackRackEditor::_draw_rect(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->rectangle(x, y, w, h); cr->stroke(); } void 
TrackRackEditor::_draw_arrow(const Cairo::RefPtr &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color) { Gdk::Cairo::set_source_rgba(cr, p_color); cr->move_to(x + w / 4, y + h / 4); cr->line_to(x + w * 3 / 4, y + h / 4); cr->line_to(x + w / 2, y + h * 3 / 4); cr->line_to(x + w / 4, y + h / 4); cr->fill(); cr->stroke(); } bool TrackRackEditor::on_draw(const Cairo::RefPtr &cr) { const Gtk::Allocation allocation = get_allocation(); int w = allocation.get_width(); int h = allocation.get_height(); { //update min width theme->select_font_face(cr); Cairo::FontExtents fe; cr->get_font_extents(fe); int new_width = fe.height * min_width_chars; int new_height = (fe.height + 2) * min_height_lines; if (new_width != min_width || new_height != min_height) { min_width = new_width; min_height = new_height; char_width = fe.max_x_advance; font_height = fe.height; font_ascent = fe.ascent; queue_resize(); /*Gtk::Widget *w = this; while (w) { w->queue_resize(); w = w->get_parent(); }*/ } } Gdk::Cairo::set_source_rgba(cr, selected ? theme->colors[Theme::COLOR_PATTERN_EDITOR_BG_RACK_SELECTED] : theme->colors[Theme::COLOR_PATTERN_EDITOR_BG]); cr->rectangle(0, 0, w, h); cr->fill(); int idx = 0; int y = 0; int row_height = (font_height + separator); int max_effects = song->get_track(track_idx)->get_audio_effect_count(); int max_sends = song->get_track(track_idx)->get_send_count(); areas.clear(); for (int i = 0; i <= max_effects; i++) { if (idx >= v_offset) { Area area; area.index = idx; area.which = i; area.h = row_height; area.y = y; area.is_fx = true; String text; Gdk::RGBA color = (pressing_area == -1 && idx == mouse_over_area) ? 
theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR] : theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE]; if (i == max_effects) { text = ""; color.set_alpha(color.get_alpha() * 0.7); area.insert = true; } else { if (dragging && drag_fx && mouse_over_area == idx) { _draw_rect(cr, 0, y, w, row_height - 2, theme->colors[Theme::COLOR_PATTERN_EDITOR_BG_SELECTED]); } text = song->get_track(track_idx)->get_audio_effect(i)->get_name(); _draw_rect(cr, 0, y + row_height - 1, w, 0, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BEAT]); if (song->get_track(track_idx)->get_audio_effect(i)->is_skipped()) { color = theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE_NOFIT]; color.set_alpha(color.get_alpha() * 0.7); } area.insert = false; } int insert_ofs = (w - _get_text_width(cr, text)) / 2; int draw_y = y + separator / 2 + font_ascent; if (dragging && pressing_area == idx) { draw_y += dragging_y - press_y; } _draw_text(cr, insert_ofs, draw_y, text, color, false); y += row_height; areas.push_back(area); } idx++; } //if enough room, just put sends at the end if ((max_sends + max_effects + 2) * row_height < h) { int from = h - (max_sends + 1) * row_height; while (y + row_height < from) { _draw_rect(cr, 0, y - 1, w, 0, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BEAT]); y += row_height; } y = from; } _draw_rect(cr, 0, y - 1, w, 1, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BAR]); for (int i = 0; i <= max_sends; i++) { if (idx >= v_offset) { String text; Area area; area.index = idx; area.which = i; area.h = row_height; area.y = y; area.is_fx = false; Gdk::RGBA color = (pressing_area == -1 && idx == mouse_over_area) ? 
theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR] : theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE]; if (i == max_sends) { text = ""; color.set_alpha(color.get_alpha() * 0.7); area.insert = true; } else { if (dragging && !drag_fx && mouse_over_area == idx) { _draw_rect(cr, 0, y, w, row_height - 2, theme->colors[Theme::COLOR_PATTERN_EDITOR_BG_SELECTED]); } bool muted = song->get_track(track_idx)->is_send_muted(i); int to = song->get_track(track_idx)->get_send_track(i); if (to < 0 || to >= song->get_track_count()) { text = "Speakers: "; } else { text = song->get_track(to)->get_name() + ": "; } int amount_percent = int(song->get_track(track_idx)->get_send_amount(i) * 100); if (muted) { text += "Muted"; } else { text += String::num(amount_percent) + "%"; } _draw_rect(cr, 0, y + row_height - 1, w, 0, theme->colors[Theme::COLOR_PATTERN_EDITOR_HL_BEAT]); area.insert = false; if (muted) { color = theme->colors[Theme::COLOR_PATTERN_EDITOR_NOTE_NOFIT]; color.set_alpha(color.get_alpha() * 0.7); } } int insert_ofs = (w - _get_text_width(cr, text)) / 2; int draw_y = y + separator / 2 + font_ascent; if (dragging && pressing_area == idx) { draw_y += dragging_y - press_y; } _draw_text(cr, insert_ofs, draw_y, text, color, false); y += row_height; areas.push_back(area); } idx++; } /*if (selected) { _draw_rect(cr, -1, 0, w + 1, h, theme->colors[Theme::COLOR_PATTERN_EDITOR_CURSOR]); }*/ return false; } void TrackRackEditor::set_v_offset(int p_offset) { v_offset = p_offset; } int TrackRackEditor::get_v_offset() const { return v_offset; } Track *TrackRackEditor::get_track() const { return track; } void TrackRackEditor::_update_menu(bool p_muted, bool p_is_fx) { if (!menu) { menu = new (Gtk::Menu); menu->append(menu_item_mute); menu->append(menu_item_separator); menu->append(menu_item_remove); } if (p_is_fx) { menu_item_mute.set_label("Skip"); } else { menu_item_mute.set_label("Mute"); } menu_item_mute.set_active(p_muted); } void TrackRackEditor::_insert_send_to_track(int p_idx) { 
insert_send_to_track.emit(track_idx, p_idx); } void TrackRackEditor::_item_toggle_mute() { if (areas[menu_at_index].is_fx) { toggle_effect_skip.emit(track_idx, areas[menu_at_index].which); } else { toggle_send_mute.emit(track_idx, areas[menu_at_index].which); } } void TrackRackEditor::_item_removed() { if (areas[menu_at_index].is_fx) { remove_effect.emit(track_idx, areas[menu_at_index].which); } else { remove_send.emit(track_idx, areas[menu_at_index].which); } } void TrackRackEditor::_send_amount_changed() { send_amount_changed.emit(track_idx, send_popover_index, send_amount.get_adjustment()->get_value() / 100.0); } void TrackRackEditor::on_parsing_error( const Glib::RefPtr §ion, const Glib::Error &error) {} TrackRackEditor::TrackRackEditor(int p_track, Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme, KeyBindings *p_bindings, Gtk::VScrollbar *p_v_scroll) : // The GType name will actually be gtkmm__CustomObject_mywidget Glib::ObjectBase("track_editor"), Gtk::Widget(), send_popover(*this) { v_scroll = p_v_scroll; // This shows the GType name, which must be used in the CSS file. // std::cout << "GType name: " << G_OBJECT_TYPE_NAME(gobj()) << std::endl; // This shows that the GType still derives from GtkWidget: // std::cout << "Gtype is a GtkWidget?:" << GTK_IS_WIDGET(gobj()) << // std::endl; track_idx = p_track; track = p_song->get_track(track_idx); song = p_song; undo_redo = p_undo_redo; key_bindings = p_bindings; theme = p_theme; set_has_window(true); set_can_focus(true); set_focus_on_click(true); // Gives Exposure & Button presses to the widget. 
set_name("pattern_editor"); min_width = 1; min_height = 1; min_width_chars = 8; min_height_lines = 8; char_width = 1; font_height = 1; font_ascent = 1; separator = 1; v_offset = 0; pressing_area = -1; selected = false; mouse_over_area = -1; dragging = false; dragging_y = 0; drag_fx = false; menu_item_mute.set_label("Skip"); menu_item_mute.signal_activate().connect(sigc::mem_fun(this, &TrackRackEditor::_item_toggle_mute)); menu_item_mute.show(); menu_item_separator.show(); menu_item_remove.set_label("Remove"); menu_item_remove.signal_activate().connect(sigc::mem_fun(this, &TrackRackEditor::_item_removed)); menu_item_remove.show(); menu = NULL; send_menu = NULL; menu_at_index = -1; send_popover.add(send_amount); send_amount.get_adjustment()->set_lower(0); send_amount.get_adjustment()->set_upper(100); send_amount.get_adjustment()->set_page_size(0); send_amount.get_adjustment()->set_value(20); send_amount.get_adjustment()->signal_value_changed().connect(sigc::mem_fun(this, &TrackRackEditor::_send_amount_changed)); send_amount.show(); } TrackRackEditor::~TrackRackEditor() { if (menu) { delete menu; } for (int i = 0; i < available_tracks.size(); i++) { delete available_tracks[i]; } if (send_menu) { delete send_menu; } } ////////////////////////// void TrackRackFiller::on_size_allocate(Gtk::Allocation &allocation) { // Do something with the space that we have actually been given: //(We will not be given heights or widths less than we have requested, though // we might get more) // Use the offered allocation for this container: set_allocation(allocation); if (m_refGdkWindow) { m_refGdkWindow->move_resize(allocation.get_x(), allocation.get_y(), allocation.get_width(), allocation.get_height()); } } void TrackRackFiller::on_realize() { // Do not call base class Gtk::Widget::on_realize(). // It's intended only for widgets that set_has_window(false).
set_realized(); if (!m_refGdkWindow) { // Create the GdkWindow: GdkWindowAttr attributes; memset(&attributes, 0, sizeof(attributes)); Gtk::Allocation allocation = get_allocation(); // Set initial position and size of the Gdk::Window: attributes.x = allocation.get_x(); attributes.y = allocation.get_y(); attributes.width = allocation.get_width(); attributes.height = allocation.get_height(); attributes.event_mask = get_events() | Gdk::EXPOSURE_MASK | Gdk::BUTTON_PRESS_MASK | Gdk::BUTTON_RELEASE_MASK | Gdk::BUTTON1_MOTION_MASK | Gdk::KEY_PRESS_MASK | Gdk::KEY_RELEASE_MASK; attributes.window_type = GDK_WINDOW_CHILD; attributes.wclass = GDK_INPUT_OUTPUT; m_refGdkWindow = Gdk::Window::create(get_parent_window(), &attributes, GDK_WA_X | GDK_WA_Y); set_window(m_refGdkWindow); // make the widget receive expose events m_refGdkWindow->set_user_data(gobj()); } } void TrackRackFiller::on_unrealize() { m_refGdkWindow.reset(); // Call base class: Gtk::Widget::on_unrealize(); } bool TrackRackFiller::on_draw(const Cairo::RefPtr<Cairo::Context> &cr) { const Gtk::Allocation allocation = get_allocation(); int w = allocation.get_width(); int h = allocation.get_height(); Gdk::Cairo::set_source_rgba(cr, theme->colors[Theme::COLOR_PATTERN_EDITOR_BG]); cr->rectangle(0, 0, w, h); cr->fill(); return false; } TrackRackFiller::TrackRackFiller(Theme *p_theme) : // The GType name will actually be gtkmm__CustomObject_mywidget Glib::ObjectBase("filler"), Gtk::Widget() { theme = p_theme; } zytrax-master/gui/track_editor.h #ifndef TRACK_EDITOR_H #define TRACK_EDITOR_H #include "engine/song.h" #include "engine/undo_redo.h" #include #include #include #include "gui/color_theme.h" #include "gui/key_bindings.h" class TrackRackVolume : public Gtk::Widget { protected: enum { TRACK_MAX_DB = 12, TRACK_MIN_DB = -60 }; int min_width; int min_height; int min_width_chars; int min_height_lines; int char_width; int font_height; int font_ascent; int
separator; int vu_x, vu_y; int vu_w, vu_h; int grabber_x, grabber_y; int grabber_w, grabber_h; bool grabbing; float grabbing_db; int grabbing_y; // Overrides: Gtk::SizeRequestMode get_request_mode_vfunc() const override; void get_preferred_width_vfunc(int &minimum_width, int &natural_width) const override; void get_preferred_height_for_width_vfunc(int width, int &minimum_height, int &natural_height) const override; void get_preferred_height_vfunc(int &minimum_height, int &natural_height) const override; void get_preferred_width_for_height_vfunc(int height, int &minimum_width, int &natural_width) const override; void on_size_allocate(Gtk::Allocation &allocation) override; void on_map() override; void on_unmap() override; void on_realize() override; void on_unrealize() override; bool on_draw(const Cairo::RefPtr<Cairo::Context> &cr) override; // Signal handler: void on_parsing_error(const Glib::RefPtr &section, const Glib::Error &error); void _mouse_button_event(GdkEventButton *event, bool p_press); bool on_button_press_event(GdkEventButton *event); bool on_button_release_event(GdkEventButton *event); bool on_motion_notify_event(GdkEventMotion *motion_event); bool on_key_press_event(GdkEventKey *key_event); bool on_key_release_event(GdkEventKey *key_event); Glib::RefPtr<Gdk::Window> m_refGdkWindow; UndoRedo *undo_redo; Theme *theme; KeyBindings *key_bindings; int track_idx; Song *song; void _draw_text(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down); int _get_text_width(const Cairo::RefPtr<Cairo::Context> &cr, const String &p_text) const; void _draw_fill_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); void _draw_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); void _draw_arrow(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); bool selected; uint64_t last_time; float peak_db_l; float peak_db_r; public: sigc::signal2 volume_db_changed; void set_selected(bool p_selected) { selected = p_selected; queue_draw(); } void update_peak(); TrackRackVolume(int p_track, Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme, KeyBindings *p_bindings); ~TrackRackVolume(); }; class TrackRackEditor : public Gtk::Widget { protected: int min_width; int min_height; int min_width_chars; int min_height_lines; int char_width; int font_height; int font_ascent; int separator; struct Area { bool is_fx; int which; int index; int y; int h; bool insert; }; Vector<Area> areas; int mouse_over_area; int pressing_area; int menu_at_index; int press_y; bool dragging; bool drag_fx; int dragging_y; int v_offset; // Overrides: Gtk::SizeRequestMode get_request_mode_vfunc() const override; void get_preferred_width_vfunc(int &minimum_width, int &natural_width) const override; void get_preferred_height_for_width_vfunc(int width, int &minimum_height, int &natural_height) const override; void get_preferred_height_vfunc(int &minimum_height, int &natural_height) const override; void get_preferred_width_for_height_vfunc(int height, int &minimum_width, int &natural_width) const override; void on_size_allocate(Gtk::Allocation &allocation) override; void on_map() override; void on_unmap() override; void on_realize() override; void on_unrealize() override; bool on_draw(const Cairo::RefPtr<Cairo::Context> &cr) override; // Signal handler: void on_parsing_error(const Glib::RefPtr &section, const Glib::Error &error); void _mouse_button_event(GdkEventButton *event, bool p_press); bool on_button_press_event(GdkEventButton *event); bool on_button_release_event(GdkEventButton *event); bool on_leave_notify_event(GdkEventCrossing *crossing_event); bool on_motion_notify_event(GdkEventMotion *motion_event); bool on_key_press_event(GdkEventKey *key_event); bool on_key_release_event(GdkEventKey *key_event); Glib::RefPtr<Gdk::Window> m_refGdkWindow; UndoRedo *undo_redo; Theme *theme; KeyBindings *key_bindings; int track_idx; Song *song; void _draw_text(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, const String &p_text, const Gdk::RGBA &p_color, bool p_down); int _get_text_width(const Cairo::RefPtr<Cairo::Context> &cr, const String &p_text) const; void _draw_fill_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); void _draw_rect(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); void _draw_arrow(const Cairo::RefPtr<Cairo::Context> &cr, int x, int y, int w, int h, const Gdk::RGBA &p_color); bool selected; Gtk::VScrollbar *v_scroll; Track *track; void _update_menu(bool p_muted, bool p_is_fx); Gtk::Menu *menu; Gtk::CheckMenuItem menu_item_mute; Gtk::SeparatorMenuItem menu_item_separator; Gtk::MenuItem menu_item_remove; Gtk::Menu *send_menu; Vector available_tracks; void _insert_send_to_track(int p_idx); void _item_toggle_mute(); void _item_removed(); void _send_amount_changed(); Gtk::Popover send_popover; Gtk::HScale send_amount; int send_popover_index; public: sigc::signal1 add_effect; sigc::signal2 toggle_effect_skip; sigc::signal2 toggle_send_mute; sigc::signal2 remove_effect; sigc::signal2 remove_send; sigc::signal2 insert_send_to_track; sigc::signal3 send_amount_changed; sigc::signal3 track_swap_effects; sigc::signal3 track_swap_sends; sigc::signal2 effect_request_editor; void set_selected(bool p_selected) { selected = p_selected; queue_draw(); } void set_v_offset(int p_offset); int get_v_offset() const; Track *get_track() const; TrackRackEditor(int p_track, Song *p_song, UndoRedo *p_undo_redo, Theme *p_theme, KeyBindings *p_bindings, Gtk::VScrollbar *p_v_scroll); ~TrackRackEditor(); }; class TrackRackFiller : public Gtk::Widget { Theme *theme; Glib::RefPtr<Gdk::Window> m_refGdkWindow; public: void on_size_allocate(Gtk::Allocation &allocation) override; void on_realize() override; void on_unrealize() override; bool on_draw(const Cairo::RefPtr<Cairo::Context> &cr) override; TrackRackFiller(Theme *p_theme); }; #endif // TRACK_EDITOR_H zytrax-master/make.bat taskkill /FI "WINDOWTITLE eq sytrax_win*" /F
\msys64\usr\bin\bash build.sh
zytrax-master/sytrax.includes
. drivers dsp engine globals gui gui/icons drivers/rtaudio drivers/rtaudio/rtaudio drivers/lv2 drivers/vst2/vst drivers/vst2 effects/internal effects drivers/rtmidi/rtmidi drivers/rtmidi
zytrax-master/zytrax.png (binary PNG image data)
R 咊Q1N,-csaHE2 6!Gf" Cd- $q:IE<bK:hF"`HLP< Ä́Qag[vi*Ю):NJPMy,[usO h\Yg y_@`AC=iSWXvrˑoyf䐰؋&$~el$ϑK1Orڭ<,ErdI.='DR,‰E<3L8bߝB E6)3GX|%IPT?۩=HF|lD'8pe[(gG|̍8-B~& #Mϥ6aq{*c)pU 2╔^TZOBl&"dYuJnxhFZߡgy/X:׹`ϣ2^[y̍%qb1 IP'~Qc/HO]V4 4BeS%^虞IQIRB@~f679l+Q(YRyeXʽ;u]c)@>yӔ&8I nn+<}5-{ 'r n,e_ћ.{y⩯TTT:68ho9/!#7=&)Z\|;VWyzC\I;*l]A_^"I@y ԛ%!f?XY,2B@lC7 Ey8 b_ڿȺPVfʝ.Ip*aZjBc#d;7ɻǏዤp*7hײzoH;lUd<ѧp(&~k|xCݧ%iXy~,b 䟢d9O`'s. <[-Ot\H&5v ={s`J%x&i梾Oy2*\e;@~ MJ0Jqawu%s)ÜFٞR.;זjm=9bۜl"bPWYFDEW_G='*P^Y"= Jٌ,cT^B1۱Z` IR0v,&Hc1ZlX̳тlĨ90`Gڕ~ _ຕ ڰ`Ꭶ| wcZHIҤVb87zؿ~G5pu#łonyl0c5122K,f3V9XLQ&IV-,F,bis UMtLR f&e,uZ9NuSנ,19TŔB$v^ٳ(çT~_p/n} ;[/  \?&-q*䐏p`!"H9j c80[,Xͦ~QM~1;1e$#|b5+fVl0_9{S$c01WQRrLmXf fV:_Z *fŀդNg1m-vfCFe'LjSE|7J r]5_]šỔHlCU`fyOLV 0V#FI(1y0;+XjE3orkgCm#Fn[ o`"6Ν̹'?M{ن*66"Ν+V pRb91)*0D<$ moқ@k2 gQ-Ŗ$]ݝU$FEy Zo$ !_MC<ᩲ QAesG<0PR^55DGKױukb6PPSG6LDžAn\NA_O" /JQ `LaeeNZO_O;h*E(XEN8J/.-GJDpvlu(pSj"iByy-ri6)-ǜB HhsQYol"W=|tMsEQxꤲ^_hNQMfI$46Oß?jnbժ26m&oС֜Cr9:\NH p"[4KYR4@OS- ZJeqv<ĈΌ2ni{ݻѶY`/e 2wBy%ÅQ| ګ2"GUq>sxc/\ 8%[x"*+I:3vQvOr⬢$[ZwM;BEy-R.Ϙmj.I 6>.VXA0\ƞ8:hɧN u#زe>`7Ҙ"=oP!FrN;99fTL6Ar1NUTJ_Gv:;+شi[f9|unK[B0"ߣm P@a.~-ݚY٤6!c.*_ !!pKy v#eBbRض8ꌜ!Tn'iΥ\!Tŝ5ߟPIʹ,[rv^gVYa6̬dtN [,,5XsZ+E:f1\~0_RLum!H$H6'ҡc:С%%_t0R%>1<Mo:jчe I4M } X 5e^o畭-7SlBAu[YSb!Nd`cTFj4( 4a"p!N 8- : eh88@ # TJpU͔Os{ftҡl3ԔyJ&a>~TW_3m(ukQe"YN54*" ykXT^]CSW&u\ pc&f@qBF's *aeˉ"-eROݸ0ZwQDD%$+EyĆ;ɐ%G^) IYx6 :i TU#4 H*0F Qt>-!dE }䐦wb3u'>+əM/G!2ecDfɲ,Bf" Myչ.Uikњoc'3lSe qRdկh׬P' >Ie|ff9x>|dvHJm심*ɛd#K=>56We>U,OFyVJ~'?b)*5۰I#rN6\ ,rLSes*J.G?>\Fi 1ͭ/>cXF }~+PGgK;sij4ڍS7?O漵Mzf҄45ձo)ڼ? 
k?{UFJ|y[)4Ώ,]E<ժ GJLrO~E]88xg'\G|3x?c}yOKod:n7x#gx"(?~ܴeSC9;?iX S]-||kcKz~[<|8XQk6_{{֗eRm"nbCWmti2=;$ wt;ȹ 5j*K12JEe6⚥;LȲW~ank3:&ѧţ7ïM(8"txZ6%O08x)MʫYb3y${E :h)B5u̞֮)^Jjja3NUb\˩̟W3e6X]<2XIJX䮠̽|ォj+ԌQJ9T,UʼP-4SQƪz+(w8oj94(u<Oz%4T33^PvS^C6E9ƴΑg%lM,- eT\9#+@I *kX jݕi9 SEM{ 5yU+aMe_lrPY{ HX˗]NU2j'ijzTrGWP]@cm̯]M{ %IO*<_UYST 5͔.p9UUHc89 hIE4e/]K{55eLݜUiJ nΣxJyRkPU8JX\,½b'>{5e(V][KCM%JR5WSN|jj(un#|_cʉٴ=NYc K*YPa$b*.dbKꩯ0鷎xO&}G8>n\UHar=WSl7EU lanz 6;暕1JӍ&r|iڶ4mBl{-"A"4T2J7AgW'Ѭ{%BdYui1?C~zfɳD"n8uM IՓa>c߱RZ.b t@?Ν|cz),-2Z[y,HE} yiŁm9n.p[#8E(%rˑ2c0H˜1k#wKH$CC z<[/$t1Ƒo$ }-n$wGZ = ruzL\1kG$$f$M_2ki&9UD N\F@ǏppNkGK: MnIIя^/0c]*+SUB}}YH 00G9=-G;?ʭI"`)P$$-sL'|;:;A1?~&Fk3'^L'-G*{!;Iv9<= vxr 8[ѯ5'iwp[tzŃx}i92< {8cG $r} xi;'Gxn ~:NΑ.RHH"E`o+@({x'or9Sx`oęM78?wp8OsVUVi5~~aR[gׯs $$jk˸v~G;N[ׄ-$&-O$/hm,h#I3W)IP.w,Ot\F|kjW {c(N\ҺXE.m3lΩ}0r$I$%Sd̦px"bd1&G/k;yy_~2Sh|`6[0[{ FuVF@1,#$2.$]įꍬ%W2D=.,D&Rʡ% !0 ȲFK<$4)YW"`S% .ȺZxm^n[%'It[QC(I$>8:1cJSY+0:K9񹗇FMO4ڶ3E}1b1cFcR1Zs6.9:.;3]uuLLQfӇwsț[);XVWI*šw_{8C%l~.)ʑA?q8Z ;h(*F!ș/q5n.r3fD +fEI!jb'Ɩ;fI h^YAnd 6puhZM\Ūe~2I^M }i9]Ĥ)Esc|8g\ˇ,ZV]:#?A(4m04G5(vVenX٠"jɳLI2Rڰu*P$dSW{Ci޶7+ KB_5^Æ%8V.8r lN(D~E@ 'Lu< Z\*m?)^bu ,el[#oPuVoe2;߾ux2I8o47>RIFZ[Sjq:xdp( E_*& 'le_}_ W}۷mdmO9iEE:OAsݬi=gg;P_tx"J(:vn`Ɓ_{:p,Z a3 ;/`]7ʺ<1pEEinn$ 籔:=KF^S߽A$$}q"FLث\DpC|^[?bвֳUnNvt̩Z-wo'Z[Vasp k8G: Oc~[^L_/H}#ͥ`q81 HbAʒ/w'P}!QTs#{OOjeh;{KEu7>{#'ڱ,3઺3ds{5!?ªwP^W`|C3,}-K PY\D[e` r v.-FU,X*"q?r66+4ӿ5.Xiɸ_J8|`h xS IK158OާfҮ^\ŭ}.dw͏ϗs :Miu V%)dx& X8$hB#F Wb<3b$Y&nry%}BNL<6nx7{O_`,nf 0$u+pīm:^zR,vђ1+pC;HԈ^mvDpMzO*AK"I&Vtp~O-4ȦCS6#WII1>ps^IFA,a5qQZTX̖?˖u:'7BF5PUdJ*ݜ=Nr߸X;&aOmnrT;luˢ_knٔ[#Xt8{J0rdu4 _8)Wl4kK$дqëc$b`ًƾ=]wn5S͔5n4V!9su;y}؋1Ywܸ|WxkVS SVoEy%l'|x$!hdiM>o[?ۍHE909ڇũ#o p?× ?`–@YEk3 R g{N>wR3_ dɲ˜&/gx8CKۯox66K?s6 Hȉf08pIf .?0T-@S[<ǏbS+!ϛ?*>P^VUJ85>#;wN,Yw?f [T!'tiz#0%fe˔VOR鰍ɖ{>7I*/'%3w~U#D6>%OLh W<}Rl9O6W{{nu$*X`eᨚɂkɕҩ4ݸdq65@k݋n`(66墄jG*`U-x5‚2ϭ,4}T͉Z羇6:)UÏE9[!dQ5"='~#VdO7rpN>hjK/3-'3\u|b s&.\ $ճ Zl+֕# ,EKٸf H6g΂ 6͹&""Z|44_^ ]deΟ|_@4v.5'q)r-ی9z 6uk?ƃ_d!>u$Xx]8vH/Ϙ 
{?U61T>$mWXmwqZԲRZ<݇06ĩ l%+شv A+5W`|.V9C47kQ q$z©RV@uVoaz̭\lj"8+Y7Yx晗giMo@ 'Zc˪U"(t<ʉ_r00!v+l+,@DC@X 8Y{+)w.V\+ifS [YV58+ֳi(k je;X8gRz%CP([U`j}Un: 94ԝP0YªUY^6o69tp_\Gd̵izκ v`Jc4kl%+ٸv fWsKyt9>֬#2Nq'DѢ]]ej:JfWm*+BkkYtTo|}@f&6[K$DžkWpb Qb5#7p/zQ SN![6-V˗L)l!cwOC14uζu`URx.ɓ̿kfY@-\? ~}Z2KXz˫ \{dUoe Ě|dpݬZSD5K.}s_S8qЭu8rEiZ( _Myל>sOTML쬌/?x E0;ol襖$j6m8kll>ADik;Oٰa;m YQ7\@ɊQDi^+BKAzdpghv'_o 2IhG_~M;OƑU^Nqr-E us|QB^LZCܐJ$q; ՉH 7.?!ǐ}ۏզI+Jzr>a_AX^ t5J$j8uzDB1t3gΟ7,G'Mo$xmZOT7χPs8=M&}xF!~ w4Tuzc$t C?Ҁ [rr'W$ OҸ;wnB1)}8b5Uxx(vs̫\ÖSEW30顫n%sy=v\݇( 1?ab>pOeP0y=袉Դt!`hQQ M8u?K5 (,bǎ;ͥKjdLI}:r 3ܙxj~kOHœkd"˭;"EEkdd2OҸ>nkd9y h$BÕw-6[.YY I|v"x1Ež1q &x֋Oi-Q $*gc~„ѿ/krY!EMva\&`jצhc a~Zw3}0+6;,_WÑ[}⭘)a70LD[»YCarLnLnO&RnNM)4K#}[7nlrJMlaزK Gxn(uqᗼQSO4>j&T9P=x]$(Z(,XӭGSqLQd#Anak'}v 0f7=A1@dfԶ΂R<=Iy/d.t`TU Auֻĩ z}NLd3,ou9e't8u{}C3z? zڤ1 ")VV(&>pndW_mqRT"9Ekسar2Fv2Xq_}R[oF8@sپIV ~ al6܍귏3$Jws`><1d>e0%l1OR+`%zٌⵏo&2fS2 Q)d֧غd!(9FxbdVg#Tf/K=6'WCؼ}O xusG]blYS"[ٿ e"񈹬$V.<)odO {bK}Mah4n4 ʭ1'Bp

KVިh+b#XwgηCdŦXNAz2:99@ WMYZ{Px[CPX?74>!]݇YV'{)el~-vZ؁H{9t fߤկA]BO{=5Q,Zw-A+'P[j=LЁ2r#,  $С=i.,fiUņmٵ${*?>J <_΅oR e 9#hOqU6Lulx]C'T@kPL٬|kȍY iidц9TpТD[j+W.J :Cu)Nsv1 ":z+Z?z0m'ibC``B5o7;n\"3նD}Mo).'TlXwc6L,ם]\ޝ#Ykxi:\E/nJ_1jkho :}]!v{RkWS7mvˡk9Dž5(gL8:.s1o,5btbC{Hoh M(sNl\!j+lǭa\Y<==zT Vjz 8NYu'ѣo5,7aͦ>"hQFDGmȑe>VK(#d!VJ9$21U +K(o~ <1ꯞCQn}Ȋlذ-gpaF몒ڗa Hzk2}MuQ1tm+/Uө,dϞ;;ͥK'G (Ce_x2MM7s*COa4P$MY e(" QՠrN{q"Q>nkd9y+p6!7QCOG ۾~F[-,׌4xdg(3N65=ӛ郞ԩ _v|ک=ٸy0 cΪ0!nrЌ}M Gݐct7a \2V1]̛Wi!22YٴǤ%ҋ0}MuV2rW*LSwT{>@ 71,iLfʼnfgHq_4ҘgyF ]s,M^`ps- #S}.Ӿ:zgxotlXf>&B7^.zId-`D/{]ՋPls6z~6yӄN %k |rWb)| 3>84ҸSF[UUePYEMt:;c)w k/'ׁ! äɶlLvM 9Gs95#Ew;'o6ʋ`4pGV"bf@6c5TABҵ(i򄒯kL0P/p,nL,,&'#%yh?WO|g%}:U.daԪce,i"=}yZGzx j.J(0n]jHmӁEdEeiNgg7wsۭˮ<[w~ QA7# 6 %شBͱi] tnCƑS҇ 0,sev^9q _pԁB$WhZwD*n"+VI"d1h Cj醎,%?Ax:UTa0~MŏS\¸"FYT[.%% N_چ *qQ!EÙ:L[dp=xO.ݔ v{q.XE8 H~Q0U%a=t`XGfÎƨ}#<ņmad RoܱӨjs_b+\ڞONz"߈7f:|v'Hdnl=wtK('L O#Qg ȭcsbvqOa! o~n6!VWV gϳ۶*$EPU#e&˻1ڧ g"Ǭ,-9-gochMmiL b(5VnS_M~r;V0l:?6gm x /M. PUJ~:f9h"XQۨT^yd,dłd⤎Ğ= WgQFB,(C(ݖ% JX|ɧ(+22"bNrq4ο|o??gjLI׈v}~5@0Ԍ݊a?o?#LBL63m4Fr ɲY0 U^ JK1'jKzػ/-ڌ*V/&&+So߿>׉TDwkFf57Q2gK+LjW2%S ldN^S (iF<iB 8ᖲVEQ_5CV -r2EEI!˶|?ԧ-~ e RhjBqZ͝yVc?,bV_s/v! *V,\IY7X,[uf@eIp`^"$F Y~IGPi*Hv_m 4M' /x~v\DP3.v{7Z)R?Y`ݟC`H]g֢|_ŴGp8)//I؂Z(Cdclr9H=ϼuA٫? 
lxlZҹL4M}7ߓYܵp- oaAӯ|fU  Qqin׼_R`+Vn&ᕗFS{O1+Դuau<ԗپt=ަ+, E<oJ<ߢJfu`*bCM:(--O].Odƒ~~ IDATK)ΩȹRsMxY^}N]G7eat `t{/oW`fcG5PҎrWē_aʅ6EH6SU5'q_[['i/4-E{KSi Rwn Js<{;4SJ6[~#JC"Rm${92J [}x?י[^NFFFڛce df C'`!5% v?"o'>^PQuC Mq]~z>v|A[@ʢbiƗ?=G=ȻṂļ1 hr5Kgsthz%K\.s|#kpM!/8ԓaTDE*BMyNЎc5at0w_#=x?KMt^xWΜMeH~_s`k Խۧ^j2 -_Ed_=\ (o!x:'@1ՠ>\`|BhnodFA ~XuV_?Wvx`D}YOQH_Ě8Jwnkd9Ґh$BÕ%LDC#D7@4;,U#HEGl\\76yط1oKdgLMX N_ѴdnU Cܔ14aKj1$Ţ~bҌQphc$DpxnkHà S|>;Kg?bp04މ1PpPra,k(N\{z,/LDC(p澎0] 7lW3( nj0 &i|Bu]% L1Cxlf-*$|׉DSHr4+D Mo 0Fsr"qBx2_T3d1,p4ҘzkEԷ\o& gW[p:El6q4Hc qhiqiAm_SiF4UHw8cWiF>үH%3C҃ ګɡ!CW)N lPURmbf"qPR@u" 1UG@Ll6D=N(tA~VzF>;YA1[ȂA4GɂlEkq@DR% qEL)63ʘ[>L-bW/iP~[/1๣;y6i QEj83Y<ћk0 (ZĎ=}z@?_֕$G̈Vly~J[W/p7^zt9i1N$[,neÊekor %,]WNN\)`r*;woj{UCd)r.s+a},x.!{Q%eX;ew0ll+Peʤ'}\ǵzhҙ|'05Z:6]qu\-CDzBG:?-vt|v'i$L0KW~ Sd RRRPe$ F@/h둤d{.vE ^tQ&ÒI2-mMCa U^I gƿt,,-F t]G44]G7 YG s/?ÕORY2 Ͻ+?KkHx %5Wv\Ki94HwgLJqONK,i? =q=#w^3/v=tbrh`_|5AA$2؅bD $ށ߲pBow`ҜLLR6CՆi> |rW9!~ Wrځ}+@,X3AxrǬ(miƔd#O:ۗƻ?1; fGȵbadY"mow{(JKꑤU5 (-u hkJw0LFa9en瑟͘suEdY# ŚC~v&qLs ƽqdt&!,|SO4]D4F5\)ȑE~N>qr83 uا -#dk dر~Y W?@u7W~/&?w~Dk H02@48HIy#a )Nr\Xeq3\e8'VG>Yc!rȰ(wn&!O̬"r֑4JJp 6e^ E^FƬ3`ɰ6N%I*9&>,0en&[y'l'7I}xJf[.Y9|BVvY6s Sλ͔eiǴ1#IY4ދ Fk.a_NiQľ-O6K20[`C,ECh̬\Q9l'Z@4Dav$ZK?KCVo#6OGRr| '~hg v.0 Ԕ'>M>}CT:Y{ǾJ#s+~Ґt?>ܻtr~7?AuizG<[BO}w)ߡC˖{xqOyǾG|Bp_K7x'zUaܧK"e,'/|,6S~'FQ?m>|\r8Ym'۫&Ģ+,_E<$ &/=|_dy=a_yاw!avc?G+ə?e뼒Dg[Ǔx} $U䌥Ƒm嘅ad?_!ֺt{p8B0"fzXfN [vVn s50E '8O||a.RwfL֜h7M-),DAGw#0^<=xv H( YQ)H%łIQ LIeS>Y1#+&LVLZgsC(cR$ł2* *fEFQLJCIʐ#2 9FG0 Sf[U`O̫L`JG!I(P)I\w( ]XQF+Dق |4T[VH"ؾrHH_&3b<-XP!=^;|)ٌIHwibIK}I(\{oHbADQLH2FT29Z/s"2䤾ϧ&fcٲed:1I2 xA2cg c!rU#I3x=l#\JWO'1rc 3QLCD;R Ea w_+s *jX-['n;z fP^SF9NWO׈‚"@'= °==D0; qeZm#8O!榣߃֌RJ s1 XGc&%eL"6:>$ JM&Bz:k` `sQJX5&Xs;$a)voS %C #,%BWOwN ;(*($Ql ]7fȊlذ-Lܧ`U=)x%,|xꐋ5D+R6={'n;v4.5H<C%HΓrX(7`pDV^NOg_ߤ-'[M|4]E0UJ͠sr;(0?7$%D2eQKӆ;d'azz&l*%jAFD+.W Bn Pv*x hê((#Gruy~r߇u/GZ9?>ԯx_!>+D㕯s]lڴ#|r]ƶY.>2Y~r|f~Wo[H+ tgXW 33R@ 9\-<.e q6 Zs~!@rX>?5ɲDbսɬ4V[Al8߯h忋%٬Y~ z9ީGNk¨%KO72Xl<(5_"|6s'/R! 
&Ƽb931#w|axsI٬]u> `i}l)%}n)f[ v{ O? 6!r{%w||6u1fq=Ml$G8 u95ܵyǘL=`v#~L":\!X~:{2 C yBdL$!dp^o}s߭$l0Cs۾&wږzr6]DݫܸDBü}7O.Gh,߂1ʭ6ⷾGhd$Сc6|]%bUC IDAT{|P,n5#S=M((ߏw q׌gV^}c#x8si+ń,`j~FYvľ(q÷K14F V4ݍH=ǑLQB01%t$i^0-jlnOVZ['Y(KOQª.XkΞey%)OT~W:qI'HO!l2}ASflM©U6s-LҙOս@UKĶĬ4bl'[]tHBD%$}<LyIQh>^_:5ldR]efd&?8K?zߒ(s@d$Aeuv $4{c4RRsL$j:u!]0x)ĆJҬt_5~)r̊ʜ@,!PgIEYF ɡZebXG[S($@Qwcϰ$z4c˺|R c~qWCMU7YTog4#ϻ.9”>ӣ H8*vHMv22?"2:, yToL1~ @[OP !-_8ц_:edC XI{iAf"d.Lq$''#4۰iO C}$[%7I"b'm?-Z} vre}hEuTD֩*FK-)I˭X2+I@zį ed]O__/J2;i"idst]5 )>|Yd-z[QI(IlHJF7/h Fb4.$={(qM 6p mJ!EQD s&p!!5g{+ӹO=*}'ؾ$.u=ufUNhMlr ܼi1_n%2ssw"׆Yr@?XWJ nEAFzw/2ͣhn_ &MONo>@DZ ڳK?BkW(,[)a{mz|MTcK|/h ֔Ct~GWoǏ:pwdҷq`s!:69Io>@t}41Tly2[?f0q~FOOKl(=!;wexkmƩ~~9"O+ ^)3Þ(Hm'\ Nͮ[ze)ɻo3qd%-? g"# No#22AHL-%ԝ 7g+MCoDv. G2O#&~νԅ#=e^n-(1oK.K?LM:Dj.Bi%Q.\@f#1AZk 5f)ZUhHjkb.@օ"@cr99l$GDHt`wqJˁHà&bb{*L+DW Iܚqf;NR66iǢ!+ JB aJ ŕiȉ^OMM"ʩ$māQil 0h߬cg]@@g+g?R;2,\ɸ6n8s)D {~euCV`dV;)._iRvo11D;֙E+Tg3`'5 6.̓GV W? y0oeV_#ĥktW7jvpV#QU+3 Ʀ.I3iOY3y 4ͯ޹W/Fm(tt$?ҩ#81A_EƣsYk$ɕݥi7@C|N~o*zM \)c5LИ0kqhT#*woR۟M %gl I\.|ncc!2u2Vuk?3ȆRRptpkxٙGv/}F'HZ<RNs{`GVQVVm`scϲGS $!1Dw^&aGc`<΅$-JW+}IJwDY+rj''cA@?EoW3JGH0d@Ƒ^@reM_OC߼fe&1tx#69p:^;ZjqI<(HOu179%d`ʹ`V{:1.~-&RR&dOGppޔHPw }wW6(= D4pUsͤeq'k^N=vt)6Fz-4)ZW'*|x~?^Q-`3Lt"@wMzF$Uaɡ,h8Ǩ&uyX 2Futp>\ XWf5+\S\Rf[(d /d1g9U%;y͡RpK4bm/ LZlSbbW3\K*?RXjαȵ6ܑXFD061rju VL"&6!<9'"k <~0ȳ|N=ffp7iYsML ѷLp1bP@Zx 2WBB\%Jh/ ⡶KwTuu!#uRxm#[z!NJdXȿxȟx-ˊ |c[j]>x,|TxeVA0EŲ29Ē%XcT6HZ|_xfγ.k} IsqJ*r`5j95׊o\P|(J`LNsgIMl1]hX݌ξٳ !24H .Co\2(.\'*~":k B,X59T|z~dzB)TVOQrj>o9T|{ү`PU,ꕀ!NXĨP`_T`ANܝdc5&5q瓨Ն*hMam]lN!9d Zdc3;ogaSݴTPBgpDR0-\JyAs[w&r[-w*5tLBF 65س@ru8L*dN TPB +XߢdKGb,ZH12xk}s^ ϔ{)Wv157#rTW6JNAgHb&9оs[,gPŇzZ2}^r+CP"k+_|eWA.rl{[/BALozuYats^ؾ4>C/؏Mv2iګoQ| Ů=_; Zi~e;1|eRgaWq?ЭD`5l\>ܵx?<Ѿ,oٔ:chq>S'&J7}o1Kk?۰̒O#55ɄsxlcӾ7 _^ɉ/|ÕrS1ZU#'q6Wl9bKM囱4AT*ɼvPk%l/{)@yҒ-UT$3ɴj[..ʐe9l|#qF9$);(~ yI C^jQSÚ-UV$=8茮B6nxaR-3場$狀f35%53r3P#f桐@cItvkSٶn[7~vyZ3ZUIi~g *GE.6gsߝ6ټ M6+>>93cgdI*l(ª/ dc<;)lb7 \ca]^k(}FY=%AMqn1j(p4ɡ1I~RXld}NbfKC^y'Յi 
a$d5;)^@g6TSTu$̓|gbEG)Ɋi!`kjP^M>˓y.b/Mi~ Sfl kf5[*)͓(0%*u3<hI)OuVjJ0yEueH=w[7P~wI&jw:rRMQD*יPbQ?;[G㈆9w/F:z({/BN<{+9M8Jr$@|glBW?EE&rf+`ՆX vSZs`MX4o?v'#(2(izߏuI<x&p0@RNOKL^앛;:L@}HCB~ ۜ|Q0Rx<(6.^:ϠLJ$ |#L =q[uKXbDwo@b2c@PCuC8{*cyxQZ8ocbz 3Of+^4np50"m aI؅{iEX'\$1ܼ~Q7H!x<]8w~'c }8{:|hξy`IeZ)8{dѱAFzhK"Vzs:޹>Ngh>R&kst^$DЇ `Rkk(++vd;yh4Is ڜ* *NĂzvNq~%!Pf}3e+Cmg!߿P⑱Ecc[DF\hrz,vEnN]lڴ3_GD-J.z  ]:+ٳm^&* lcԯ¯~|_]N=.j $޳* YԨPB:v%=dF=aFPaApӁ:ёf@CEFc ܻ>n?ZR G.,+=$8r <#4՟ydQ'P1ɔlahA}c~!awRVP/m.s{HHTPZuQI^vFPwM= [H(ĕκLw_#::TK Z37ӢF1v6SlF(6w2HisC(X\)YFjw }Ǟ"?Ma#mjއ7:'_#`3Iu?gFSxwRN|o?qhJeqjUy**TzXi}:w$= ,pGC6֕K5A[g C2e"Ӊ^#Fs7j^{#oRHwݣu8HRf1TFڙ`3EJ VfiBҲv. r#8]Ug"+<䛼d&iU|w D4jM)z/UO{`LJ?O8px!jrMP}Œ֢~PtD?SyQd;bRzt4^9r&9oqgb޺ xjm 42\1ԾGèLiY 6k,@YHPg!AF܂t^.0JWwyڏ_y\U1Z )l4whkS+Fa:{iwA4WiRҍL#cQ IDAT x'ԭlټIl ޕO`'ST`5(LMFM-5ڡBRlU [:(cE!a݈ۜ/ oaLa޾57#$=€"Fs %$dH024T@YAwJ.Yv~*~>ոO$rpMBn2OABB6n}>,JʶSawUޞKo gbmEDÕF0Bߩ]hlKZev<>N-jcMr۰R8%VyVWfφV|^bH:3V[:eeX;DxL^LX,LR B|l|f$Z|P AR@vFޡN{Pi-ʟ$Y3ٵu憝#!4dc[;"9qp5x6VcVGɬEf'P_K_ ՖF+!a'Ͽtj$#ts:3a.:ƹߜcSQCKWyz6bgeK8x5*]a *0RPwšcG8Fmxd}*ή"tb *16=0AqEnBB 0QTugv!^c3Xsٻ+l\&wGC<9}Oj҅et.~KϞOvp YXhl 9 EIOvI)==O4hcرuv]ltr'2b55bu#ch;{ db 3eBe b]%T9n,WvԺߑm@cHeUZ;IYg?KմCv;~&I/u!2=H2a|,g} =n;R6;-OP&dpqFڛ(Ȧ$j;:~|%7kc)|fG۩(HX27sЋymÿb%Ӓ ^@IRɯNUɇ+.vhuqȒInW+Hw.$hni/l߷)L V xTm?791tdV>Oo=v{ݿdJؽ$~7xpԵXSؼ$hkٍN28v퍴QdN`-6u )9EdF :"R7Cү@(`{#4$=ñ2p%Hc2_FkelrR${K+dly8vEJ.McY<"q7r9XmM?`ts` 2Ctޠsabӎ*E?r^kG cϱ#''v(rCӉ5F Orx)rM!xpg-:&M='(I g8v)Cc_33Z~~+ݒv*Pu ;$bwpG< l*6&;O9Lj4t_ q>ű/i಴XHJw /Qk`N ɷjney$H}ps} X1HI[y.)ڛ &˞%$(W[vd _pcGOk@&lm ODI*㴵Bf6q##ʹwu_zתD,wӒ{mdQ@Ba?[kCȑOPi/78V #~cw"MwjGڭ:4NΜ[=͛40=vr:fGwy'U"85ȝ${y(v-r]&Lv8N=2=ΆNpsn{:kꙔ;.fMNvܗCa n0rIƭd(Fn6 Dz˵UX{1\i SԹ|6SL4qm'1ZVIa_"էi0,x%V`:Dp!>]cu/ Ά'Mq~ :Z{&}?Am LfpLg ܾP% I(jI)vtNn1!\Mi9o]ݪ'Mg+wOs릞ƆFgoںmb'Sin >r+7Z/12Xd! Zor%+ȥL+HӋM(nLCRۨxu3 ?7>s>܃ I|c:ˆbt>n_./NG֬s"WndKy铸~o*pݫ8h4ey ITfyHD` $ڑA "(-f?){XYЄ * 4 ,$K9G~߄G E4&$w23YMeYPfͱ)zD[;A!) 
m#8|kAJr<"akPVYŦM;nh\V<.zȲu]AAvCv|%jkט+ѯh9m#G ]~3ƶⵓ$"#2/:*j-ܿ"$ʺD~y6>9V_x"ѯ(ۻW@1906)RSu*T,lUej"b4BZ2ZKs#P1?'T;"VFbA%3EzL8tP><1Oy*֞ZN[⑂0Š@Kbzqz C]LUB*rp:]d.&& &[*|# ^4&LHB'hc]`$/UBNJ*d?m :#8vBZ]y(DrW_| ^!G|_wCDXG$uQeS 7bCA.&"VME~)z{g>$`JrQg`i ?˽Ft%>:&f`F2{ vG"LٔΗ^kF+Lj!RsdӖ爛dVBœr^J6Ka#u5%7e#՚ґzO1%+k=Y}-tݺ@َVzAu[Iš~^>EuTX08 Y_[1h5L$ ,y%Z>yGS'P *bg9FBN6uIq;mF -h (E2kvk5i211N eqD+(i1 D_JF%@A $\\mCy}P~>9?]D[(ܱEn;dI~2ŮRƶ$#+ &F @FUv]1]#n jGBku1ά.EWSEwBP 9rD]oHsW}WŒeѠ3cTlڇǘ{ $&N@:e;Hټ.e\mz tcSA$E ]u:GT&]g|IJ,Z;OND ]TWb VG>J6.$`"+o9YVB3b,&(*AO.V^HGI(rHfSzm27ìuK3+Wf UE_ GJ9U뉝%#hEi.&)NYV9lˡt3.ay@ƺT MKrF 1ͪX'QRuI'6Le8:$8F9A# #8{fғ]&pqZ J\,*Jk O@OZf*תp*2O$9%kOգMefYj'Q4DJJv?3<{lbU%8f1ÓҬt4 ꭁ܇x"zKKn>X;%9aDՉTfNA Ra[9>[ugQú\vU&cQAXB-ݽtOKL9\1_GՈ_1Ew\OU\p,Luͱmmr7|~*AxFYBgqh~#$X `obYdYƬ oLeH$_n2HF'7xp?޳6-W9~&HRv/1R{1ddknncddb'Y66g_Sl݈SZG$_7ɱLsn-SAlJxa]i,6E;p݃9m_ylqq<`j^y-vS' iɭN[MMl6 Ȥ_| K$#LONrYIK,;;{^.=󶳡 *͍~ӿ93.5Cbop-ކOvcGB2P/7H]k `H^y= l<ͤ@nnVnG)ٔg9<;}RƻW Ʊwyhygk/ f.um]!9yշUITɷxjscͧiѯ/Niw*9 ݉ O:;;c^:rMU d&vC^% IyŽzg2M_/u ē+i>MtX28ܟr|vgi$ x qWN4l:|?tg`L9s;2譫cH?xقN̹sr`K0`xpQ}7đV'1%Ygͣ㺮3j.@ LQ%kٶ,Kc!Nr^n;yoؽ:I;=V,k$%  b 5ý*QbI[ Uν>}e;oՊ^E*a,Sg,Όhֶl@Q|/DJ9aB=wV&"z>X401PnZ,EQH\ ]O. )Ϩ;2AW/1 Ѥ)3<Vx}xF] CN/HPb 0t8< 99Babr3r Ra?3*q0C/B_h1I\ ;pɰERw |:ѦxsM$JϐÃ1@8q{rzIyz3rI|^܎RމI ;=]Ze76i$ DJAPUIh@TSDKg?VmB Ű&s&^d3&D,@,1T^hOoxX/m&hޝ쌿k.l*Ho#Ͼiۊ}+.)sM~ϱwh?Ot((|x"$"9y8v@| v>=n[=?rn$4}S4R^s%W߽;R9[w=l@8x́w/P>FKKoAl'% >ȅ4ֵmXd79t0JjZѺEN|!jWud񃌸#KGdabIEh$S[{e&&&X(-.G@0"6mRUAI’P2Z0QT?#6}͍bgQ-y^z=p}c(!`,bEM<*@DOYC*1lrmF.1$Y F^n4Fd5AoEr6=c7.O_! 
zL}Q?${: K6$E Xv[mM- %eg۝fOiwD5{4k5)3tw/5>BE!ڝI?I_ ^UӠ`m}%펫=њdCװKdS~DK~!$Y2]ggyࡋ/'ѲIjȺ _\~ƚVTl/;o#'ieOSa-B'$Q:J9&Ap lј ܢI>$#R )!*#~I[oy8\@3 ,+Eu⟰js# 3H˖C(ܥ!>C= ¹>nOi\ƎP<'9za{.}Ab(zU'10v[a7PS z_,n4m1;~Ë/n0$\xߗXuqzLeyQ}dfG,lf2ZJc5p#Vq LJ&1!F8˜'fS e%U eև(2ƉȒíT0.JJ(D"xYp.U*%ƹW0TlkiZGŹt@EA$tE&дn#).NgĪbO4o euynb/IE0`hQ,A39~DD݊TQVނ(tFOYyNV6RQUUU(j.s ű]9?`]f-/L^*fk>^+D5jm4YQ1#orvxso-FnٴGqv^gaN^?!&|sOy_;cK"- ,F"Aщd&K1F!7YT0XH DdT 哊x K>8w0c˻fdJxa2Š1aLl G-yH3DLL$&<.E@pQ}a/6?zI/xf=| &Y`F(Q@F}k*1o@SNJ=9{Oo[CO9Oa&) :3fĽ~=V݌c"<- =-><}baA!ljcq$";HPTT"./L6ZԮz}DfuП E*ll|o~jd /Pd0[2f3|϶ZUܐ_(?%Y#{W%c/RtMlUg܁Yv𥧿F[yrKfn{/~l]^6!޶sOrJ=ijrF:ggȂ:!m_ϓ bUWY[7||?mwR`IEjVk==k&?){iv4Hĉ&RT lsKػk?1±|ZVAĉG(.pNt1`Ku7{y_AJ$//~ݍU ȂeN )W| {IuqRl ^eEQS/ 'JA 1u&]( ;A@t})6ĥg@ycR?(xcʒTU;Y*3O82Eix-xY) iN9!(H$]g 5Y ]gc2zxm8%wV[]qل#M9;?Yn2""Lt^&c+L\jq89[ƍTI;pqM[)4W!GTu %ηsԌ \g?2zu\gm?n#==@2F, ˛f+ΌZJ~%AGQgJl,~פ$ culٲfg/|97XJPQ' BY"I`52[f[65uN)9UUԙ).#sᏤ&lΣ!v,®3QZL 2yE%R QEym~,zݤwyx.L.D^A)TO x HIGaEdb+HĒW0nW %Xw[dx@4A4&۶RaDIG"شi)ro4oj%QRo!w]FDWY2Sf/`M9tgvIɂ^gDo.(IAGAA B܃7f}IG>rhJa9MLBe`_l"vh" g~Nц,Mc'~<03JQ2SXO4$l+"|n61X׫x|I[z b(DHdnkV l0D$#<cYh#M_l1ݿkd9͎Et^IʊʐgD* ,Z=%RZR^ b|bl) M*N U7PRZq'2J]CDrxF9zu #7bɛщlo?@Oy Pnf4hh>ޯ0IIs(5hx'PuAk~n,雄(HEyK9_] 6aB4r;rGwym4hРA ̒x[e,roΙvz֬ʚJ9quI i]2WKc(PӴRJ"zjHeU%@V^4, B# 4hlֳ($Ѳ6>?[iDJo:X}n*%YP2Qhqd0\@}K+"—`Pv5hX\x@3JJtz| 3_QH5hР$^oEʚ+-b/o>M}~b oUrni]=AL"Xi,BRHoЬj^BQ֮#|a۶mjkw,P@X KQV Ԕ,1W|T}3@0Ȥ`kM~|=G)n^oD)G* (,4ru1p{."J:4]7kWUQQo]':zR` & nWr( Poll S aЛlrf% !әK⬌uV9Dވ$Bapµ-Ya00`0.`NЯ<X “Iސ''3]13Qbҡ:|^b|怅(JT"R3q{mW6'Wwa Ck9 v/QS~)Vk yFh,F2!̒Z(teԾ 8.Mx+ɧ3V" R>[}g)CZ6f|GQ .OWqS+'~Ջy)A_ξ'1zXZ|nz?죳*dc_>H(wU0Ҷ|I ߗa"[VȓM/'UDE4s ]jǛTe '4աa@WpW1q oȅ<'' uT'yM%qv_!7u =Zgy)OǟBv]w,]P45'muƛ-`d+8X<U駿IIg_y )MXlqIvډqZa?W78quرs~7P:<6krpx~Q]' yb,O-)vuQH4O<ɥu2k;_wx`2:g/}w<->+Óx'eBa޴~Ivذ?À?tO t]@0 |oOHUGy/Skd4wC%SХkp:cc؂Ct^ %ZE֭E}^d`$ tfF$V;Χ6w~0iT3v~ `-opFY Py+)0RYj$ǣIFJWTR]ϑѴ*fyE1Vc#6 |/S]QB!ODބPEr;$U-ET),2a/6r%]ѐ^EaLesPz<ꩮ(&bARʚRA*m4RlRD6 "WӜeڻiPSqaÊ{XVP_Ď<#K4uXQ*N_y ˂HE[)&K\JEj͝:+Q]^Ǽ 
===5k;#)(,,C}E%;SUǧ3 I͝fb\x@&(Yb :GQU(^Kx4/w$xԷl 9+Wi^]Ν[&~䧪DxAE t)-Isy1!GIeE"W:&"ZiȧIDZBC&}"1iL|IRz@PUeu2p!e&EJڨ+˗No$-+!;72S_ Wlז&V,/c$ChiD0W8248Ke먵uHksgNpF Kn=y9P[0Sj(p]dAo6}'W-8W9Ư] 9Yض}[ A?֧8gmi{V[=@EQTEjjʸ.p#0uAbzfض8 f*ADf/-ve1`rnVsga((Z3㻊 /]`r=k[zM,& /i d۠T}a9㟋M>XںrlۻjB (ʡHzQDA"*洮M74\U6QhpAQT呪/yW$I hh _pWڒ,wO`cAM"-iT(i1QyEuoLӬh7Jܕ.iXږV%o2TU5kP,l b$g*aUSUV0:x41*qʑ?9ݟ5&g [ؠcinCӓ<*U}^w2Wd߶gw.h1Z‚SYhZ?[V@;893:1-uv$xvٚ!bza aê5ec_b(yf^ԩC*XߺA\&d9Defl&Q€Oayp9©(J~V6VmD7"^HɲFVzčиuYL"O}L@`# oYU7;y/XCW3}!@G6THgy#ET%>w_$Hq-_o'LBCOpià}eOlm2"UDcQ ZӄYcٲxn IXVbFPrd){iz:j[R`)ch2#WtcYR I$J8i}ؠ?իvQzhwwmX| Y\GGYj-x򸗽OWizJ|mlmw "5,%5h wCl(G2s*d+dfwĐT (t\러c^6116'Ԟ$lN%GvSN4.O|?o܋ITu%Vz{cMx̠_ 7ׯ|/_m dOf;໚~Qmvcp<cʖAիrU/*_I\. %lڱ IHʶݬPC -m[8ä{?O% &*JH3L^i=TQTZ(,!`!sOA l~}ʅ/~WqrQq#ؗJ,6Gpūٵi9#kaC/7yزj-d#3lg_g;/TWU0>@k,},r_M t^]S̬ٵwyQYZo0Bw_3h;%zQ{xއǮ0f`BvR]^*b_k꼀?oᇿDgdT-5DTbvv+UU5 3::"4Cp/j zPH;N3<9}[.ޑuT?ŠחB.c=_ \ovwC8mO-xz&闹C%; ]d4]ƻxG=W_v7'YK,d_棛KFz\ B“ec4>i;./s<<6,luw"!o}=O]Ɨދ7Lh$s'Sۊ:ۇjTh0 Tm:>|Q̾8L%]i<2WRivH4֮?pH!;݅] r)F1Iضv%s\8{`,ARI:3Fu,~MP[[#`-LO_ƶiY8ݱoM"z>XDJx!_ Eunk@Ip #QiĈ{eKo /RUE_5iY}7]p"RԸ0pI"iY]\>ޝΒYhKu `\uk(Y~5 ]o[XٹY5VK9unB+o>ពHĉ8;9I{W<͓knuz ty4Ee:Csٺn Jx]+lnk\(8BT`+Ŗuk7T'_dӓ{Im)c՚4?kg-p1By R~l<|7λ``f=phmŝOdC 4CWx N__%0  uo]#%d_HjHG"8ɓo30_=;g.cY!(# 3>q r~)_|}J $ܝ9qw,wJP>aN<##!v4Orڭ~s„~E.b(ϾUGZxp,It'yp2GÓ }cW^s(d=$Μ:Ȁ/TO8q=OrG,$O&jX"IυW4Ȳ1 ao_ǓH?D0|+ԨE5 $c^$%ol`Gzx~ ڽIFN]6ۣ'_'Ҭ 0+'zx6c79-2I"/zn<gSf'7~f^AX 9WUNR|ڟBN4dRNgp:Z|dA\ *߮ϗB˿Af5,h]9 $*bOj*:e_ Yf4d^.РA 4|Eu 4hРA0hih()/rep$+5+.+"ĄVnnPqFzXz@4P 4h¶F6~bt"+Q_jN674B,䦪H3jWeuUI(_s,apP KhiAw/)_Fk?!ճmMygt>G"Iw^[kFMln,'jԐH,n ц(J\׈z-/nkvft-%b0xwCq]zd˳ td2Q^^drX)z7x/d]6*8OfWL*9?:̅Մ^R=+EEVosH!5QPhv&Ǯ/@ /.F.&JkF0[-G( wԓ$s{!{՟799 d&~IEQOrAIzRAUMXC7<@LnL<[w8v<|NSRxoE5ƹ^-6Ǫ [(HPwv;Jظq=陴NSR*'?i1ahqr!IBm.I‰t`׎#+2V(F1Qbp8. 
5*%g2iЅ9kcT-\6új% 9s0w!_ t~;-n n1ƚ͚{ذs$/]wO#&]a$}UkXWiz 81nZbgc(F1ѻ|> @cb/v"́E {k~:P3Ó%WNRf#thLb'HHֺ4i\ltY0a!`t$<oԏҀ]$Qv#q#9gU'36z.ºEQ!u=K`(8^`qmL8H7Cb*Vg ټsWŝm^:l| ?2WsoK⤥D6\Grie>j/a)erRGheF+U6Pdeh^8F,^Oblus/D{rqqdm:ksb9V H.\ý~Iw!>Ds|??򿎄UX@q+*qBBh/r%Ք/@zCPqpb4a6'dn¢z>w QvD+TAIO9WD`@V*H8-(aLE@2~gI3 HڨU\c @gk{wx*o4ܱ  g^,V޻p~Aur 2+@g?IM'=V<t4UV7O@KKbڇ4c=S`\˞ IDAT% %&3r! sܵ^ +<mM3OŦT%sp =y{"zaZ-;صC 'nZ4Bk\ݻסh#EV06:W-X fGC]e{hQq\<젱 Mcg'E%>>Âv}z=k5/CHI~+R2Q}.Ƨb8#S%!o=wT]?`l1Lth oXѿ2}1zabwi \kÃ'y7ii=sX|X\ZUg"d řz$E1F]W PzK {IjgaZ֬Yφj*$f/mM{m>6ea̕Fv HH5Y|&.vOSLR&vx$Ʒ/IJ-]ˎ xOrc=# zݻjBP?St'FvդK2@X҅˩X2s ۺѰtV^ƈn=~[.?'#Lw֫:-%)$ࡣ䂥HM !SY`Zm+-HFMMP@v*cݒBQj~ƙs9t(b'KF0$.>f'+a8O^;_H^?|w,l̻?Mjf ݇?8^@~ >IޖQ}8Z8rͷq=Zvw9ԓ4t Żu xOn~5a[=A}E?*|& ܃MZ F 8TvQJFa7=zk/^fkgxpI-bAPtYĚMwx*^S1K +xB*"VTa՚ $x SOװJM,@Ua`7g^u T=aX*n;4zq RAH/BCFb~cʞ$[1[ T8,EtI|nko9~ .Ym̷@TYXsô!YYܿ {2>Y+>H/q6?@URz^y9-Bho{_d/[%c4plͶ˾e^?׸9|C @g-`՚x;㊭ 4oە6.=_+LDr$1w kx(20[ |̙mq.6ZsYǟPA~4$e{0б}x`&tZ >JR{âqʯxFVi{͋H19ϑO0x)mw\f$ 6^L_2s7 '{ gY#dc'`wl"ǟsMTf.G|69YF7 ?ct+ְ. mu9p_"7*>:m:+o{pƧwa KTBوV@yF9ҷ/׳pE5A ڃ,,'+AVQWkabI=N8{(U(Kkw7Fߠ``}ݘFxi9.8?M!50;ز*+/ö?2GX17s}FaZƳUԜGIMB,ȡ? H'6Š(؅~qEe9Q9x+<dmz!W^^K&G (]Rȥ+_%PƪmT>!"P唧9#XOQP ,g 3?L#itHy+H&ؙ5.UVl+e\獽I}n~v}t?LU~~ߺu$ !4X,y4%uO>s%??]rq:յOSOC^1SsNbFzE ^$ί/P|g&N.\pgvD~,<]|$"Ϡ yYKғ@RZ%+/!S/>{,-?!F2PF?ߜeh֐o0EU%~0DBn2Z_>Ǣ5%d%gs Dٲ 9Ғi2aROst^(:/V¡0!q38)Rgy% H2pE64aZ );Q$Eѻ۠^1%e `|"?m7/surI~N?E@G}Fgbn6u%`? TYxj@dDzRT']7@q<߬RExU 94Qtsw|s/r0 n/Qpb{Ո|;53,|oR@>~H\OvQnIIC#F, /3 el=iic}H ʲL />k. 
AY FkPmHG(5>Oay2sXe;Lih52z{ZXS } uVPAhkk"$`1Й-d.@-46D-; 6}*Nʡ eԜJxhʨq_|w LYLQ=Nͱux޼`>\8' @Q`?I,}~$s EZνȕZvm'ݺnF܏FG(h܈/!'¯lBRUT]vѣhE=]ɄN) ( Wo%1@kE\v޹ W.e pЋ'wy἟?ddI_O5=I8yVe\UPt((ԣhfuP`:hPQT篖|1-غsACJ$ v9tfQ"<ʛ?{T8JGΆ"t7^׆ 4N~nAyrGn()lBRxԴM"p҈̞Yx|s|[_mBhL2(Z\Bx :TeC>7oݣTl(_Ys/F@3Ud攢"'aWQ?>襶aO|WūHV)BC$R6,Oȡ xvɿ;0eKħ-`tgl<+H}~!SlAQXR#h$wDhDe|K?b 㸽^V3ߣU[ )BEGδz.48J=ϟ}6/( ݬ #Idĺtl^ oݛ%kthN28>뢷+B@z#5ͩ,g5 n η>t^F_7dijal}#X[nt_U!+02Mhyb~h?+޸bםciK:e )4ݗxMX ;(Z# u3ye"x«P,>Iy3Y|4FyHQUZ˙F=l۵t Z9Y{H|F4I ґCqlI:rM$5@۹縘iq* \ę.E}4yv6T.ETvc =@RIa,柾d d <4~b ,h<3rx~i"{}& B< AU}6 DBwq֓^0tҋ?Zg3 !'.\4C*FZ0z+SLp3`r ^ǽ۰NS%:.#[Z)!:qpAɄ}4 :${fƝ^$vYο$ :971yqzjeF!@2b+t!R&-o!B tI;̓-II0 85gQ8USÑ;Bӿѵ(Y9\]{1. !a/$j6 {_jc %m'"Vl<މzxZzQt4gxҔ8AB~E3Wz{:ʒpac#Qp 0C9( Rxf=Mxt:=:vn7IOc6ui F{|F{K0 A8#} pQsY̸ 홎0Dfju=޾< e3}6S.{G'HϢrϤxP1<<o4)K: ,P9G#͇"8nE .g?eLfQ<0SܢD"Yf6@7e-ZBc~9J5 ߒ*ձGI."4 re)/1>vI#ϼQ|Fzsw og _lϊa(al'џǸA3`,r c41vn'W6-377_O֍76x!$TfFoF$݌'X8(!B!f&E cXe ~!Ic׍N̼8V$g~$ σF3[w T9+?LCXj7 P̺w['c/qb ;b޿<D1VO}60_/06]:`&5!'bdXdL S  L!Ɠ)43N?Γ81W>!)gvSGwk׏}*f!yGZKXѢ1T[Νo'̜MD$3&<* 535'CfO՚wbSss@{?W.`TLȚN{au7Y*CbJJ~UtlvQ3cG@᧯u,Yw8A3?B,&hklh6.gse4gj {Hnaf=Y;YR/.%N| asсB"zF1,,D/c.Lda#NZS ) N(<4]:t9K*WБŅ-3&&xB f5EG3jIb$NG['|ҭz:5%H/%79-vݎ/$+Kn\wϮg+ug)+_CrQ48k?pg`溞J3A13ۀǧ8!3<86ϵ*GkJ#=9!zM.GwiIK4`, Iu28NbIa]FjOq%Մ0 D#/$n~a>k҂p-ޭzx=?zkƉCLr6ˏ4T$?K%PKk<_>rf+Jlj.QcJ J/#6?{w L77/hyW8igjN EuލW:hl<Ɣ>@v"@~pkt2b3gx"1> ɡZq7 r$:7^1msZw}K9[1Kkn/7,1l`VaܵԤ.я~c_}cW~oەx2;W2PP>tc\*Д}(Y<#s'K/r]WD ݻӣjȯG -_n-h {>! =g9 >M.n:-, &&B8ě<.<72YemHs'ZSf (Nr+HVk:5qVA7cXzKz)@H1`Kx=ƑeKj%4t6F)@wilb`ll\0$5w)9Iaښ[qqAYIx3{l2*TI9~BOHru\qzeF!γ ̙>@䱤pTGΜLvz:B؇{tdRu8 248dH +3 be1GyeҪfYԝĐ1 a7N`.TDVP/vO\R?cU1X2Y H?6d$X@@zF>S\rpLFαy- 9dedTAE4dfL B3B!89xF#4+O0~1.ۄv:%1m(l4!)>!xYxqzd,qX~'v1o%3$ +. 
,JۅL™؃a>F=uHRxDUuo#$ =珠[0 rO  DjJ9߹v:[ou4_Ŝ&3|Zxgxyo >÷/Ad7#¯w';k8Uא6yWYx'>da;į!|vm +IHP|qD=CnMOAiiu?Ja?2^ V.[FсG/./ӗQ⤯ GXKؐGJ"-:ع62LǺ8vIzi,[y!MM9w8~W d b7ase>,OP u&:8DBETU-'}> dCKȌ=9;3̥~ބ٬|اUo׿GZ*gկ_(0ͭVVήmR<{O#NINXC;EьǾ~\VN҆Fkd0`2PBn(CUy䑇(_;ij3')]JqA8Gziu rCՐͺ{>LgCJ9uh/-v+7`i~YfY<5w=[ׯ';5teH˯DkXECN(E+ϴu%-&K^A#/).Β%F6e$]8F`ԭ-BZwMfs/Ge%!$ NwUl\XP ٽnӼrMA=lXs5{9g`}y.9@7-M6˸T&BуYCIa bVk9uH^G՚|e#K>s&e-eE=yaY}Y%1MbdYFrΟ? )f|#h4ejXKx<55r&=҄E5HIxj,!+-v0c^A fhAR\_#K:pӥ,"/=3\2J,.̤#'#ـd T{=C=^Ɔ0t+*0~~?ng:'k32 0:OB fj42g}A $7{|vލ]tJr6Ҋog{i>ݸdu8~i>FsWjC^Ö`(FEV.eDCv>WkYSqwǥ\#E%MTHȲMv)Σ/pʳU9N"_ȁ+Zz%k ^yr7L]$r𠓬wvy'='5W9wVPN]sf0*j؋'fj4+#O. :9_%KD`Hg]ȋ\llti.yg-ӣ5i;GƢ*f2Ӌq^kl>T]:+WvsMbJ`YNCs J(|K$CKsG=^TWhSXl f'-Cx*fn+l;,YyoXL8$9yf V%sBr 4#s[Y>ڛXy) ze31޶s)[Y0qU&0ZKX(Icc@DV3?B8'&ИDQFiGXoA8Z9${2R!!{z4d,q,FA> +?/ V@zhyIjE n ͣ~x JVFEӃ i ."aefpZ{*sc%)̜bnJ۽1/d$XAy-bAn:ӓ s5 dY)N EY F_LG9w57\HKoٽ{\̡3P#XRD݉9.ms9~'J0k8>Lv~AWKdƛWN D0E1pE^y Fs\Jt,Q+XlD3a1ze4$@հ"G>Ah_Q^`AgN AE:?fgp"#/ܾSaNnƋG:TM{:{9{BBK^2R# ǐ7aB ˮ(Fs $U8cs-F BTK@{-;\uu2 T">=+%oNro;?< ^2*>c3ϡg ݑ˜R20fm?YO~ў5pM IJm<#d|u^xq0ܭe }(C~^w$`}|*%+?4]3[]z:2'(8:p0b oKʉ&SeyLI۱+gkLCI@tMPI$)>x3L?!U%lRVÄBhYP$kXF| ܡfQ}8GQ$ 4V*W!r1rWGY͊UH=:WDCEܷb9ɃiYqbWs'F{S¯I">qa1R+yE_A\l&6:FFI}~wE2TB?%[M"*4EFʔS44`_AX'"yEU O%xiX]ħNq7[|VrQ5JbrmMH\CupelrXS9s"T/$#΂T;8uS"RD2s2 1Y-|= A;6g% gIV emDZ q9LQ69VX\܄Fզd1p564'Bw3f icg >΅ $jyسs8pp?AԽNmrb 1c}{(J/qӁ W.:ĒHVGY_Bё/`: % 0tKt-T"7zٿEzSK6 i B g4i)D^C WRY3* d;қ!7Dɮ(Xcǁ5ڎMoz'b暜x\`r0uTZT9MsGƶ9L:mqtSC’#N?dqrrkuunӐĉ_ί~YŁaak:"!ZS*B*zӴO D44ۆ@0D($[tmP% #0m `844 Ox%u+K͹z"'ٳ_.&" U%mS(F"SV(m[IoU9J* JB-ؖB۝\1-*P@({.k˲дIR!\Sr.|5NwmJvee.&.\ŧ2_җx챧7nEp(e8c6Z&!3:*PUtB8AU45 6>| %@(t7m8U50 XziH5L0 049L  :90wt\6$^WOX,o5>x?F:}n7ۆ.p\vxǝj|!Rwg~(ץ>sɣ֠)fy!\5@ Td+n4w.ÿpO4{|0G)%`p=BEi:N` ` \ XE4 }5DaY8,he BCəC>gT$a@H UT@&@g|˥0bGJ%kcVwM9h {5`;ǻOF#v(EsAArȂ TQ1~mȞ}_g~.d3Q:Qxvq:Z`ggʥEQ&2RS3c \QE4t ض 5ޜ8+qӠ!˫^# ۶Un L-*6+w-ĢAv H$ٳh41 77KO""FRbY*ʇLP!&ͧNwPǰa8~ns-m:W#ʒ q?rխ ǎ}\7H5LD4~ 2_#ty&jibd(JH\H ΐWy@]oZZ2fSF~Gb?AD/# T׸wiCDJWE 0|50,$X=e>@p  
p!@U^zimeefmi)4TVBeU链B8u$J&#mMmBVuJ ro Num8q &OuJJ#LbҷQQ'V?wFQ$.Z{%|aJWUP#hk1 ݏq-v}jq_ v8a2 DJWȻ|ݻ2._ۮ<7( 2, zm ,"ҼͶqhM$t@Q|;@!v4" YCჴweJ@cqV%syVh L]ˢ{ȡʼn25BHdy&aفBmÍ N2L;w+8ݩӹOu, eH7Œ?}0ɼ]R0ذ)%HB;^P~"[Kz~ND;c{9a8>|\.(jP "ݝxKcaCɪ q/~9vz:Ee兯MqhlqK &{ymTn)O\g%Z>РyoJWqr3Ω:K&DQ#MNt-3ɱ-tLjp$Nt J,0[G^>㧮ƛ?ՋgCMO||̞$6,B*.$]q_sgAr~M$ /-8q Z?φi3PpW~oαRRYg%c\(D6o1Mڏ!>aS'h9v>T-Q*cj H@Uv^N`0T8a%BUl,KXJy^s;}{>Պa|sB o ]=ed{wHY3|)3gl2_qGuE@3M{z!?3tV^2 y Djqӣ,5PbKr<ߣtBx(\*)Qe.v' ̺2%^VO0"  F(/rY FJA8# `jYjH4Qe+K. Z IDATJf j@h,(XZwR0n5OQ;pBߤ7ߟ D!9Wh )me|q&`xW| ֱ}w"Ei&T(tPpq b_x KOS4uq썗I/ Q[;nxOqWs2*S'WҲU@b֯YCCx|ݯ~]m3g+ _aCL~V8gI4Ùܾv)pe{]ۑ%\S0=}RA7/cX>vl{ Xr 3&ste.no7wţDjWpߣk9N%@(e@/DFS8r-M;{Hs$qe8/aݤjN}~H8qU0䑙7^zS'9_?c|\NEwFbA ?r9Y%HxK%x5rխ̬rps=A Zc&`pm'JLZt%^5u"N̙e+R /޶6R(ӘK*k3}5{֎꧒n$#J?玂M[-S\ ^)NtsV̿ =q΄l׳a];vn=DiME"CabF^yf[ts]S*~q)Qzg+yIXvӱ;uN{!Fj$aJH6g r'Ȃnf}.?JetSޭMDӸf]ֶVϺu_?'JE  DTb53Z8zf4C|7z_+jEڒ.]w9L…gJ?,'vЇN)XtZ",n I:P%;x丈[>`| 5Q[ͳKd-2y7_ǍLZ즥j5Bydj+D5;J$ X‘wš8[OҚWj 7Ӟ$&w1L#CGG?"8+0z`4f7vl[(M= j ޠ7wQẅ́J55,ACN\uh莎`g$}@Lk:FhyKg?eDp:B( q(W1BZ/=wxx]Ԋ{6}|?7PWCXmyw2c^]HPBUyәP[+TG^hfΪ Og0=N.A,-CƇJTT‹SbN%\T@蟗@W[gZʶ2nX7ٻs3;%Wdz2f-Yrj/I[CGЂUYuO&>0kLv!ϓj?g`觲AbYt&H'I@tb7ChMX=ûIĹK7./NE4,[iAm%9eio*?izp<de!GGTQy՝($yL[ǝtu#LmxOe|RT54 )-^qOݼdY觽D_}j 2nwWO~ hPwS7m1V]۷ӑ&N-XE/ R{ Idc ?||PLZ(m5ǞGsȓ.:(jKF\վ=m\j%rb3kXPhn ˲ % ,lĖa`Z ǎ4 I\x SBAl8m:+FCzhfUԑxukpKaqvej "ίp=M2c7oڮ:q~2# +_f5cgTLĒNz2x׹;X?Wp ^tPE( 9V};Kf?x*adY4:GYHIxBw5~2s6Pe6}kj!߳_ʇ˺o^'S=2Kw_-%ZϠE$|y+']%%ZuKGG˂Ǡ~sP2hAqI(䷆2DurڸPrcjb] ;Uuy{j  bL]sT@thiУ.ݏ>D'-Tw뀖&eރ%ܐߧq\ LyB( w:E)%'aod *CbS٠BqtSQY #.3' cscY0/-"//Pt z!+wn:3{%O_W#}#;?"zZh@4\ƓHOD(޺Q:.>) >FR؏>&I+zZ-xD7ze`wD,P{ϕ)AUWgUv>2.Έ~۶&}qvUT4'[8M}.(:ρ ߔL8R2E{H'C4mkB{8MYpMңnSѱ B,^Zh+j3DCq˫G8 z*vtCti' +>8ދe`*VpCEb7Ȳ "^/;F>2&Bu^^4v,A.n{P@6W{#ҳQJұvs:v^Wf4 Lñ/pw4 c[ENYEJZh:蚷m 26WqXPu:'8.f@>kdo/1 E`4 >> WHR3q &}4;DODq&'=}P7ŝfvLn ՕY8YuUv]$cj;qvV5ߛ9c7&j&O&"ɓM%m21գs;ȏ[Ap)Je#83VO`ҤiTmzێq M,?E(f[p-hks}\bj{aϙN䝗NWe Xjwr'L[}F  KbqF~?z~t^R( 20^2,20QX V&WDH-(XJ02= .uA8 HҦqD@{EQ 
(j%,QYsuLYz~5kv?\V}#Z@}q27mO|fׯc^gm%`} 8 ` ?%6k?0Hx9kⓏ–xhUѱgR"tf,P:P^VOTrɻ(8rI޴0>|\&(>?eH-"Ӂt9P>-?Q(었zhu}/l zE-;K(לAt!M;y>FB@=~ Ote l;;:v+*Rj.Ap9&&;ٱNsVNe5u4LXB.$i+>' 2e-Z8QWp QQ3kuA$6+[ QUH@cVjVl<˥Dk}_>*8~t vAt],[q»þ1. {~ 2HlSXNtUkBY\)_`1qe_9' *?=hK kY!W]ȍw܊?[vQm9e\`?/o?NaꛩQݯ56?-9bV.n'̄+Wb9]_Ipn2{ԍcZbAR.mjV&N, I+{^DnTvc>[9mRfi>>hB%zb jg^AͱuI.{;_|̛;S'*z?(6^UDqBVaQvr^Zѱ9^2@`J%k H$ 1gyeRީ",hyZmaw,9r=XI(2k MSεg2Og&8uVY2맭 S#n3+A_w3d;; '6Q&Ǯ죽.;%yZleԋ4ᄈKhhn&W\}+&.I":S) UOdҵ+a ;gM'IT8l϶iO',v[YחgP,<-S@I>AZKo j]hr@I4Z((z|g(CZªY r='jПː 'E#%f硇x [ʫa{oץY(+ ;q "FYE?'>|f3Icܺ^Ͷp=PUoa7{1U䟯"p]L3s (hw|EOVrC^}${qm Xps$T> ̥ fDOr ŽF %`+zBsO|d'Cg}9 [O7v؁ i5xџThĮ}xkW3.x3I'a1 o>|\ezR9>y!@DbLe&MT֧²IHiJRn/cE~+mr6G^d>F&ة{PV~)%^CFG@$͜1iTòp J!Bj Fϼ?>ШtJ ضI&#T U A۶iFf{=O[Y21Lsr?h% ɞ<](#Κ SA1-蘢UUN{{bcT 4P q"˼e z@c>βŵŐ20.2>|P([І1uiAS-AD?t xA8 Qw|>4(9 #Ô$r i@/gtP^0D5O/NK'>|\n(bLYkWS)2|\N(ʅ IDATNn.,q6mo;X|'}*yv,)Tor8cY'cԨ-EOs9Sɋ+X-'x9zsFɱcq~2>Fo?ͬX)[΁@aT<i>gNpձ8?SȇIEnEӻOt;6~zshӼrNnN;ixh~tu==ehbf 7mOs=,¦^#m3s)nN]Ue|8zzQ6 ׌&l2$Jlg ]so(s %12!]p?xOG"ej~t2O*ӲJ4E\&0OFL१;ҚH f씞t\E:>|\.(,D3ml @C- 2©6D(}$UL6om&`´pei>/02e`FͻO?^X "<\a~D*[8Oq 8[8x]`YDqöm c{^7 v0LP<&`OBhW0tLp5y"4sBQ"m.&ܷw2`03к hj=LB gܷi:%B\U\Q#uU˜/c d< };!mzrvT*kaO;@K;z/= fb~?/$:Nrd8"ôKuJ=Z%чQMI.=rVy^2cl@ BݶTL?{LC t=U(%N$c 29_u˜7NɠG?ćQNQ$; ռOc"Xб){N$b /TB ȸ؁c,@ -1aiyNw8 xlC8KN8ۻz - [{%Rb/lzZZ5c=;mOƠ^c#>Hgsk-ÇQL q2f?{(` Pv`d> ~1`=J X-o|05F\NiC8 [=mȝ‡/@^>ƾGd|%(@ǘ%҄\Oa$#d|?a Su0r]oQNC`62>|p߭ v2:~1 ,lx\> xLC u[-:)##/6([zՈЇ1@ Jq2_aO!'5",1y=]a7PQ*?'NeYg5'qX*B"e^b:`{X~=Q fxف)ciPMorgϴ+%#/=Y;'# 8LFC&p%S,*@#4hm=x_^&DŽ }E0r'?;ȻXJ:? 
uS?~F"A2geܷqବ!{Wdy|/ X ') p!@~ OD!QJZyȘYP@WCU/,S-wױvoT5D01vpBG+86N F'.m( NFtv@H¶M`)fxr~}E0 O(: c+;>#M\z/Hzzd"FMK4iXN[xJ02)̓J )H)R^DJPTi{՗aI >vEHE)u!@jMJ( ضU׃X{C<36R6 ~$ECE"qlc uEQg~BA>dYv Lpƻxdͤc1^?ޤD EK>nSYLv>;v`ξP¾>PrsWHdc12B7)mPTg,F88M=๢.3#|.)gCP2 @2@<^mRTq孟SKOG9FWCkye,mzhmhűM/nlqI0._]$d!d)+js➍`5I0yŝsz'ӛjqkH7IOF հbקX9g c r~)<͝=4,x0N!+h#L_tܰztqW _"f%;bj%〖e%PBqNI WEj߆VL\>;l\UN:;>IbrG98#rFMBr2Wnx;,sUN:!r7}6M,Y |\{R) pWmUHgq6ٟ V{G^''(+> &̢ n 8 y,PĠ3hۖD*j:{ɷ˜2,v-5)Q> :q7 ="UJx=VUT9UE2g=<HuMy9&M~sPپ cZЙw,Nui'1qrzJ05L8'mn3˰.ha~ r|DUa Mf>h7g%^z?QsUWpaV8Nʰ'[v{24Qq]5}OX:tK~ok[ԯy}SXr=qC^RVб[d(J|fΫ M1k*bMu>cf[ʁ/(@kI1xP sd15B~AQd!b>z}8a='$ iCݼ LbA :ns0LI7>ė3rÿCV²j6b5q7 BڠeSz R>F$EiiB8QaQؕqW!l Ѻl9b`CAJH\uR}BF+Wի趠~Ți&P:sӢȧX`W6ū(H}C@[8.Of݃jy."RM$ċZ2L3Xdž?E} "#3L^V_CHmmLuy=E=}M&дeݲ} essѺ'OeV[b\M6hFgOl\O*I4:9Tq};ʙ1*b}_ ۗ]ˮaڤ os&.ԉV*\Es`M4ܮ 9%qHɟ'&*f[ϜM˛ly (VUWG㵭OqfѺ?Qm}|[v6@Q|7K'IVǺbzm5S[yͭXUذa93u Ȟͼvdt֯ X$6mjǩ3]'8=;~]KxŀDMdd>6b5իXfQŎC,2f:h+w&K[sױYH8ȍ`B(DOqT#E5ܴrYiv ԱX4s y8ca$8y6{ r?t-gGj)7_+Mojܰ{,Z#?m{62\ ^{ Egqb6?.@>6? + .dq |_1s&dɴOqX5e}1LO_Q6,%=6e'b jJNxx^Yk ae6?eƏ1uB%G6-Up{Ave}+キ(+F{7==^Y%. M(B"]RBCFHcM;4F{SPޛ 0CQw=xߧ u^ݜCo3bA3 UD)s{SGJ<+0Kf6 z"Wv6a4gc6lAp8<(L"'mEK6amγ's'Pz!;%+wz+ּ <4UX1E |dU&=T[Yc' $5$.t-wYI.6<(_}Bo)~~jqTQIz,STm䔴`E@X|)@@" vmጚcTķATmV Z3 [$  [Ǟ;1F5.\+ WR#7bfܗ@Y$>L1dD3cT7젂. 7j4&Q!OïP[ɦMqDžхtXzcPK񬐢^ط|.6r+ZYnhDUIw=,$PGFi@@tT 8i9fӋJ+m<R*!QגO?T-47搝k# K^ ENvvBB0^UbJ  .c}l+0l;u/?EC6!?~4;_ 7АȠ @qS#ۨ.4sSIm&>/x)[_11EπfwkN~?ޥ~sh8Ѥ_P!¬مTaFs3;7>IIUSPG9}<Bda.yوg^iѧk /c=7/sN>]x"D-ȉNv}_%].5Ջ?aS_^HIo$-duPsF.‰ijk '3i@*@|>Jк= (IY~+'yW~3H(a%l!&]TNw3/R<'. UUFU ߾NtxdT9ƺYƺ!"S9PWwO>C׉LjŌeXx+*AοWx;ص)v '[.ï~ƚ563&j9RsOQMU\L^zcd ̶VΉ.Cf7SSP Po[+nE/~|7KQ4"s?ӄgOLGy-oOQSM28˔7"/f]m)y%k)$"&Od \yhm>GKy݄*Z6QSMe_[ǶrE~& 'Wp_iѩia ˥ir2\u:;f^, U{ qU6>`jEH妰<ɩs*ݏ(*L e~Q&:?11k3O64gAUQ~2sJ)0OQ~ lYvj@nF.=҉XJ(DYi%?Gy~ɿ n@VN A K% z)@D9gt8{-8g\<{Cl$x9RV2m-QNB_G{vV]2HBpD*y牻Hg" *j[7K8fƹ0`ʢiYϔ_BQA&$Z(lDo9xn|̍P}FN|`XK .j7=JԂa k27>J\SUwPIUk-yL,ϳ"$ ʷ{6HDL]1$5ڎ?UaOgl!6: (dUSҴjd b8ql㇘ Oޫ?' 
|I@AUhB^3ܷҶmH|fDU M]UE: ,:;{HZdwV6 #~;< 5a5Nvf|//|L1:GYBv^)Y!g}k/u= ,ϊ4@YlЂE 4(@E%yR1G<^Q@40A4bKFԟ$Ax;Mۡ̇)|}o'񐇑;qNo6!៹#m*K8ٹ ]`W -sL80\л*AиLMF4"m@jo?r 0=cCXٳDL| Zٲe!:!,88s/|kC\97 ޘ?4 r-_3lZAػB2l*R7MorQBGs[vy)~Ae:I^7-x 9(<ɇ xOw󪾥>>Qˆg~g nބd]~`hw7=3Fw kػ( SqF<6ʇϿO9xbs{زH"_`k18NTȩi%TM;~$e7_͠ w#mlAMoo@15/or,lϹZ3G?F* ф8JrmI{B|9"l"♦)Z Ilrix MJ5uvGy`c= )9k>ɱwd=ZB+ttPQȸ>.9*ŧbL]9ć! h_9KD 2ϱ_c:ɤ65ktI\`xXM#rAoI0} 9/58AyK ]o"&\˘#J<癉X0i^+7s-]xVJ_Gא(ٱs?#'t-.jˊ΅lN]Ge^&v0&S6s6spp8%^{D[Axx~۰ h.F]x80^®᫸&T5^GxMP/ xg8Z{%,G½|SW#I_Ԇ^2}xef< 7$C\\xQkLto%<+ ~FsY1dɁ ;ٸo?7@Yl87#щr #S9xWmwΐad pؔq&&y8ey]AL&f,k{3aQ"0>|Q{:뿙dalEX@H2C9Li  0_DiǦEbCU=+' gcOѫcKc5.F"U.jeek>yvŧmoX~ȀAz zCE$sv "%*RNf:roҋ&iMALj 4ԇxx5UEk|cU|9}6$Ij5 7z\[6O2/8'r u]z"bZ?kZ 8-ߋ2:SO]Ex98 KB<d׭yET#'|MkqDQ&z}nPo.)0"v]C0}@dy1P[||Ӷ,=Eﺍ\UzY$[, bɸ7GE 3r0앦ҝCe[45ԗQR]L$pOFljr>qo@^f@mb4X2)ss#٫bpHđO%A-+$LL&=ȁgsQf!3sP,MwYUVÿL~"sIݛM@Ctw{b-G:P[ =V" ?vz*'6Zpӻt7-󝮑 I{}㇆SJM[ Qn82-Jtrӧz.w.<o5*ot9NݍA4wYeM~jeinY,-tr h7@߸no]J@e;ZUM zL酕HL{a0JyS+ED3 ㍨de #s(9fx84 -*(CʶV2B$($WSU\$lÌ- j[($exrGmc-1 Kn#3nkPۼ2o nk*ɏq)fJq44<4:c.&@D519Ȍ+dΧ\r  fжr v~r@8sQ@TȚɌp3ojb{pvO,q]Ո(fI|M7ϴ'Ϧ,{;RFU9r*xꚩk08&]pnu 3s;9p2I}V4QLMeQ0ʂ'LNUEB!1HCF1dda֧#6ۉ:GS6Y@2@ȼ M:6nFb4Ff+bߎ*1 !֊" 1hLjFƯ0O`*bZJu)PR\ޏ=u47V1m@(dab9w͊9bՏ8;ŒE4ڏR[Q 15ygPbM?bverx_2󫨯(ÒS^v-zb#sW?\U`,c]g>6HVeÆvlK}oLmx!s Tghfr%̸3򩫭'dAj*)fb'(mΡ(Lqc-kZP_URcrwvlPsopyAqJ-j(-bd|AmK#@k~>Vsϑ Cf)7?IcIb*rp(iJp(/n)Q`սEm_ pęwymi!jD+" _Na Ӵ;:(ʰj S%Yme:Lg0O=뷬#憂|.~.2Dm'~mJEm};6y>?:S[_fmM. 
TrqZ(E2c28uǟ1'NBB9r,9E8)7hwYwXAGnv۲!1Mwȟ*<%e4M% )ײ~ Vf5dȉ=!,ڷ p?@Ķu9=e[h7 L 1 QcNm;xɫY`)Zˎ]PW$o,zݼ5:H,oe'F$4e!%b[y˗1#$>Ő- w+ṷ eo}-~W?h MS$MS$eUlioco;1WXٱ˳$ MEyZ#+$/LFUlZӀ ԯOb#>?rG0^V^Y# u'ɭ};(ʵ%BVfp򬛖g)K$0X3PdcvJ2SPjѰЈG}DMl.I YeV9Ա߲<{Egh- a`ڱ'  ;8wCv@Nv!z{Ίo1,QdC)ڪЅ%z7EX&Lr89n#Sr죟Ҳ4'9{ph܋$38cݖm2=C/5gΝdZhFtf|N:ֶuҨNEav c1؉^֯ia':\# ݹ߯elonX iˢR|j k,LEP<:g)#8 =X ;lb#g;AN]&))m/LiX?-E-b\-cCz Y9u(젥i܉\RUxI$%|ʂj;;],_5=O(ps_hH{k;;*wpỶy%msG?z)j*e#{6W Gt߆ydԟra`Om+0REMGxG@߅r:ؿc9y@BN/e$FEl*6]B[QBp{')im*?ծc$̍l""ID&cPlު56g浌1#ߝd?͙eVZjٵ d75+q l!|D}Kcv6GNvMF9Yϳ1"q :[wL( gf҃q>]=X^$Ѣvְ '@ԡ7Ӌ^Bo!$ŒK Q(W.ڦ4Ve;4A$+ N\ST3v3< *" z E;hRdG.YNI@5ea♻D+*G 093)+":chNCf@ܙ@c&-d6mvڧsrm2k $rDEMy:ִ7QQǯ$ы"1 KevTEdji4MA4UPPXR5fQg#yUp.F"Ɉ˛i'8+1!ngۮ]t5N]b lǢ7PJkOrX HwmZL$ACsL8cP+>ŽNRM2$e%֔8}|qWrRmGEӆ).I5Rb6QYD:lz=fu A $oG :9Uce,aϒr+7pIDATN?dp+g߼M۟Bo9=fy".=[l+k}-HH=a$Ԕ6h 3rzV"$D}5ݪ1i*ux ӷEEc1>)%w* "Vehkw^! `naߓO ٯri|!hN ظ J☽ȑT%0,4/KOu:ICئлDF쳣DxHi!wRUqyģ '=(ZX]=Jy+YF\N޲z^ )):>/.sQ7l%7 Ci& Bq;cYތI&&/2Ôӟ§45Ro *|9rSʼnţ\3t xD^yBIf{ygM ^&uU-gY @H|q#`.FO`T1꒸nr VLs"ud^U{6=f# aR$r洞R hl>ɨIN c +~2cr:zF\>ԨCA%6l؍i Ę4 hLM$̩QDMjƹM}=MEƘ0nhN{DM  ަ4w\YIU98I8Y\NIm dd"}ۘ"5ا&*DHF:swTT"j|=~rDU?^'t֋׻7"qϏbұu P\`@hb2eއ|dYꃥV:[eRd2J$ r`s+was%,3O_'j##H>(//j\!Qs V.BCQ"4Lr;@)P#(1LՉHnbW_*C1:: \;| 7Yz'P- ŒsJw+wY5M%=ïY]J$2Ѩ(&`^?- 0iLL$ɰ\ϺevE!%huuIENDB`zytrax-master/zytrax_logo.png000066400000000000000000000023221347722000700170000ustar00rootroot00000000000000PNG  IHDR> J˯bKGD%,;p>M pHYs.#.#x?vtIME,tEXtCommentCreated with GIMPW:IDATXiLe?-в:(t+ Ǹ{l cHܢD^#1h+CF,fSq7219Q!0 eN Jm`)S`ᅣ?o~8,K\tk~Rfb&yg?69V["VWĻykvhmqϰo_ښ&M9Uos{1-|}ChiJiEy)3zB!H||BXX0߲On"'2tu51<ǁLbbRՑqs~Λ!)houOϑHj]Jw  ry\w,n*O '{Y`ggos` \xrs(LOORZDQSR8C&SDKk妪慦 'OGeE9QSVSioai)pj.-)4(_=?@}} RN]R&b0.'=1]#(x/cc#F4Whh)7[=5uDG@pENOw%#چIP v ^盧Wx[g >>H%x?pmnr֪IDF&" -OY  wtC_hg8GNi.ݘE Kϭ;wmy25 @SsզL+T*?NzN}߿rdR*ډO7703[s;~2;RrF@Yy>ScxU׽F,elLP("55 d}r@&s@0Ш)h4]y˷,..)!6:7M]TQLOMO}{ ܙGַmLQ}Wz{4Yr)Fx{2?-' =MLA~R6D=H`@E TW_$=(~aF%\#'Pk<˸"m'foid`c?R>/[}`57+q #IENDB`