While the manual is informative and explains the concepts and related modules, there is in my opinion a serious need for a simple starter example of using TWO with some freely available Windows/Mac OSC and MIDI applications.
A sample walkthrough setting up, say, the OSC Pilot free trial to talk back and forth with TWO would be, for me at least, invaluable.
A written example walkthrough, or maybe a more “newbie” version of the YouTube video above, expanded to include an OSC-controlled content app – for example an OSC synth and MIDI player.
I definitely agree that having a simpler example app helps a lot for getting started, and for that the AVB example is perfect, though sadly Windows-only as it currently stands.
There’s a lot of other OSC-capable visuals software and platforms, but they are all much more heavyweight: TouchDesigner, Unreal Engine, and Notch all come to mind.
I did link synesthesia.live in my reply to your other post, which is much easier to get started with, but unfortunately it only supports OSC in its paid-for version last time I checked.
I see you’ve also found my YouTube channel with example videos, that’s great!
I’d love it if you told me: what software do you plan to use TWO with in your practice, i.e. which applications will be receiving OSC?
You did mention OSC Pilot, but I assume you’ll be using other tools too since that is a control app only.
It may well be that I could make an example for one of those applications, which would help you get started and would also serve the same purpose for future users.
Also, don’t hesitate to ask on this forum – I will answer any question I see here as soon as possible, and I’m sure we’ll be able to get you started with TWO quickly enough that way!
I am a software/hardware engineer launching an integrated “virtual reality theater” offering I call a Realto – “VR without the goggles,” using live-audience-viewable projection mapping, AI-assisted 3D human avatar generation, and much more.
So a master OSC/MIDI “orchestrator” like TWO, with the added benefit of overlaid interpolation, is an essential under-the-covers piece of the puzzle. For this purpose, until now, I have been using OSCulator. It is not as feature-rich or as sophisticated as TWO, but decent for message translation and re-routing.
Using the OSCHook YouTube video as my guide, I’ve been able to get basic recording and communication set up between TWO and Vezer, another control tool I use. (I’m quite familiar with Processing, btw, and one of the optional apps I’ve built for the above system employs it.)
So what I’m suggesting for a Simple Startup Tutorial would, ideally:
integrate some free/trial, cross-platform, easy-to-install OSC/MIDI tools
maybe include a “sensor” tool like OSCHook
maybe include an audio generation tool like the simple SoundScaper-OSC audio player – this basic app just receives OSC, and comes with sample sound files and some standalone Processing demo apps as well
I wasn’t aware of SoundScaper – it’s just the ticket though, and really a perfect pairing for TWO; thank you for that!
I will experiment with it together with TWO, and make an example of their use together.
Vezer is an application I respect a lot. When I started making TWO it didn’t exist, and I’m glad that I’m not alone in thinking that making an OSC sequencing tool is a good idea - of course also acknowledging that Vezer and TWO have different strengths and weaknesses.
I’m happy that you picked up on the “overlaid interpolation” feature of TWO, I was afraid it would fly over the heads of many OSC users, but having used similar features for character animation in Softimage XSI and Motion Builder, it felt like a must-have to me!
Do please share more on what you do with Realto when you feel ready, including also what visuals software you use. It’s exactly complex integrations of multiple tools, and their orchestration, that I was envisioning when I created TWO, and I love that your particular combination of tools is so very different to what I have used/seen this far myself.
I’ll be curious about your paired solution with SoundScaper. I got SoundScaper to receive and log OSC from TWO but was unable to get SoundScaper to respond to the OSC input commands accordingly. When I run the sample Processing scripts that came with SoundScaper and compare those logs to similar OSC inputs coming from TWO, the only difference I can detect is that SoundScaper sees TWO’s OSC commands as “bundled OSC,” whereas from Processing it does not. Wonder if that is the culprit?
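To illustrate, here’s roughly what the two cases look like from the sending side in a Processing sketch using the oscP5 library (the host, port, and address below are placeholders, not necessarily what the SoundScaper examples use):

```
// Rough comparison of the two cases from the sending side, in Processing
// with the oscP5 library. Host, port, and address are placeholders.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress soundscaper;

void setup() {
  oscP5 = new OscP5(this, 12000);                    // local listening port (unused here)
  soundscaper = new NetAddress("127.0.0.1", 8000);   // placeholder host/port

  OscMessage msg = new OscMessage("/example/level"); // placeholder address
  msg.add(0.5f);

  // Case A: a bare OSC message - what a plain Processing sender produces
  oscP5.send(msg, soundscaper);

  // Case B: the same message wrapped in an OSC bundle - closer to what TWO
  // sends with message bundling on. A receiver that never unpacks bundles
  // may log the packet but never dispatch the inner message.
  OscBundle bundle = new OscBundle();
  bundle.add(msg);
  oscP5.send(bundle, soundscaper);
}
```

If SoundScaper only dispatches bare messages, case B would show up in its log but never trigger anything, which would match what I’m seeing.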
I think another reason that SoundScaper would make an instructive example Startup Tutorial is that, as far as I can tell, you really do have to go through the steps of building the TTS parameters by hand. That taught me a lot in terms of working with TWO. Yes, this topic is covered in the TWO manual, but actually going through the process step-by-step in a complete example was quite helpful for me.
Here’s a long list of third-party software packages and protocols the Realto Virtual Reality Theater (VRT) currently uses (depending on various configs), most of them OSC-compatible, the preference always being open source where feasible:
live human motion, pose, and segmentation capture: Google MediaPipe (open source); outputs can drive OSC via plugins for Node.js and the browser – not directly OSC-compatible
3D Rendering
live 3D web rendering: Babylon.js (open source) – JavaScript API, no OSC interface to date
Indeed I can replicate this - I’ve come across this before: some OSC-capable software doesn’t really support OSC fully. That it cannot process bundles is not entirely uncommon, though it’s been a while since I last came across this.
(Edit: I forgot I had added support for this in TWO :D)
The solution is easy fortunately!
In the property panel for Location, there’s an option for switching bundles off.
Funny, I added this and completely forgot about it.
The project with the small recording is here: Recording.zip (383.7 KB)
Remember that you need to turn off “OSC Message bundling” in that project - as soon as you do that, it’ll work with SoundScaper!
A tip on namespaces:
If you have a source for messages, such as the Processing examples, TWO constructs namespaces automatically for all received messages, so you don’t need to create them manually in that case.
Just:
Create an “OSC location”.
Send any OSC to the In port of that Location (a minimal sender sketch is included after these steps).
You will see that under Namespaces, a namespace with the name of your location gets populated to match any and all messages received.
You can then create an “Address” in the Scene view which uses that Namespace and Location.
And add a lane for recording those in the Timeline. That’s how I created the above file.
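To make that concrete, a minimal Processing/oscP5 sender along these lines is enough to populate a namespace – the port and addresses are placeholders, so substitute the In port you set on the Location in TWO:

```
// Minimal Processing/oscP5 sketch for populating a TWO namespace automatically.
// The target port is a placeholder - use the In port configured on the
// OSC Location in TWO. The addresses are arbitrary examples; each distinct
// address pattern received should show up under the Location's namespace.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress two;

void setup() {
  oscP5 = new OscP5(this, 9000);              // local listening port (unused here)
  two = new NetAddress("127.0.0.1", 7000);    // placeholder: TWO Location In port
}

void draw() {
  // Two example addresses with different argument types, sent every frame
  OscMessage fader = new OscMessage("/demo/fader");
  fader.add(map(mouseX, 0, width, 0, 1));     // float argument
  oscP5.send(fader, two);

  OscMessage toggle = new OscMessage("/demo/toggle");
  toggle.add(mousePressed ? 1 : 0);           // int argument
  oscP5.send(toggle, two);
}
```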
I should add: the SoundScaper Processing examples generate an extremely high frequency of messages, resulting in a dense recording.
It’s generally recommended to keep the rate of OSC messages at 30-60 Hz and, if need be, to smooth/interpolate at the receiving end. I don’t know whether SoundScaper does this interpolation; if it doesn’t, it should be easy to add since it’s made with JUCE.
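For what it’s worth, here’s a rough sketch of throttling the sending side to that range in Processing/oscP5 (again, host, port, and address are placeholders):

```
// Throttling OSC output to roughly 30-60 Hz on the sending side,
// independent of the sketch's frame rate. Host, port, and address
// are placeholders.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress receiver;
int lastSendMillis = 0;
final int SEND_INTERVAL_MS = 25;   // ~40 Hz, within the 30-60 Hz range

void setup() {
  frameRate(240);                               // the sketch itself may run much faster
  oscP5 = new OscP5(this, 9001);                // local listening port (unused here)
  receiver = new NetAddress("127.0.0.1", 8000); // placeholder receiver host/port
}

void draw() {
  // Only send when enough time has passed since the last message
  if (millis() - lastSendMillis >= SEND_INTERVAL_MS) {
    OscMessage msg = new OscMessage("/demo/position");
    msg.add(map(mouseX, 0, width, 0, 1));
    oscP5.send(msg, receiver);
    lastSendMillis = millis();
  }
}
```

On the receiving end, a simple one-pole smoother (for example value += (target - value) * 0.2 per frame) is usually enough to turn a 30-60 Hz stream back into smooth motion.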