Rawdogging a Jetson

How to get the Orin doing AI without containers

The nVidia Jetson AGX Orin. An absolutely awesome bit of kit. And I am fortunate enough to have been able to get hold of one to experiment with.

And I hate it.

I also love it.

The hardware is absolutely gorgeous. Really well designed and built. Looks really good. Is really well spec'd. Second to none. The software, though... not so much. Or not for me.

nVidia have very much embraced the container, and I hate containers. I hate containers with a passion that is almost fanatical. Call me odd, if you like.

And if you aren't using their containers, then your life becomes instantly difficult, because the only OS that works with the Jetson is Ubuntu. And the only flavour of Ubuntu that works on the Jetson is the flavour that nVidia themselves provide. And there are two of those: one that is 4 years out of date, and (only just released...) one that is 2 years out of date.

At the time of writing, Ubuntu 24.04 has been out for 5 months. nVidia have just released version 6.0 of their Jetson environment, which is built around 22.04. Better than the 20.04 we had to suffer until a few weeks ago, but still ancient by AI software standards, a field moving so fast that the moment you check out a GitHub repo it's out of date...

Oh, and installing that "newest" version requires you to use their specific SDK Manager software, which itself will only run on specific versions of Ubuntu. So you first have to build a whole dedicated computer (or maybe a VM; I haven't tried that) with Ubuntu 22.04 or 20.04 installed, just to install the software on the Jetson. I mean... what?

Do what now?

I have three things that I want to run on this Jetson, none of which are available in the apt repositories for 22.04:

  • Whisper for Speech to Text
  • Piper for Text to Speech
  • Ollama for running a large language model AI

Whisper is not too bad to install and get going, since that is all in Python. It took a bit of juggling around to get the right versions of some of the libraries installed, but since it's Python most of it just installed with pip. 

Piper also wasn't too hard to install: just clone the git repository and build it. Manually downloading the voices from GitHub was a bit onerous, though, so instead I copied them over from another computer running Arch that I had already been using Piper on.

The really fun one to get working was Ollama. You see, Ollama needs Go to compile and run. Not only that, it needs Go version 1.23. Ubuntu 22.04 only has Go 1.18. So download and compile Go version 1.23. Except... Go 1.23 needs Go 1.22 to be able to compile. Ok? So download and compile Go 1.22? Nope. That needs Go 1.20 to compile....  Go 1.20 needs Go 1.18 to install! Huzzah! We have converged!  So....

  1. Download Go 1.20 and compile it using the Go 1.18 from the apt repositories.
  2. Download Go 1.22, set the PATH environment variable to prefer the newly compiled Go 1.20 over the system's 1.18, and compile it.
  3. Download Go 1.23 and compile it the same way, using the new 1.22.
  4. THEN download and compile Ollama using the newly compiled Go 1.23.
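The whole chain can be sketched as a short script. This one just prints the build steps rather than running them, and the download URLs and install paths are my assumptions rather than anything official; the key detail is that each stage's GOROOT_BOOTSTRAP points at the previous stage:

```shell
#!/bin/sh
# Sketch of the Go bootstrap chain. Paths and URLs are illustrative;
# each stage is compiled by pointing GOROOT_BOOTSTRAP at the previous one.
set -eu
BOOTSTRAP=/usr/lib/go-1.18          # where apt's golang-1.18 lands (assumed)
for v in 1.20 1.22 1.23; do
    # Printed rather than executed, so the chain is easy to see:
    echo "fetch https://go.dev/dl/go${v}.src.tar.gz and unpack to /opt/go-${v}"
    echo "cd /opt/go-${v}/src && GOROOT_BOOTSTRAP=${BOOTSTRAP} ./make.bash"
    BOOTSTRAP="/opt/go-${v}"
done
echo "then build Ollama with PATH=/opt/go-1.23/bin:\$PATH"
```

Each Go source tree builds by running make.bash from its src directory with GOROOT_BOOTSTRAP pointing at an already-working toolchain, which is what makes the whole ladder climbable from apt's 1.18.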

Whew. That was a chore.

But it works! It actually works! Piper, Whisper and Ollama all running WITHOUT any containers.

None of these come with systemd service files to get them running at boot, but that's not really a problem.

For Ollama, I copied the ollama.service file from an Arch installation and used "systemctl edit" on it to add an extra Environment entry setting OLLAMA_HOST=0.0.0.0, so that I can connect to it from outside the Jetson.

Piper doesn't need starting at boot; instead I have a small PHP script that acts as an API interface through Apache to make SSML requests (it's running as a replacement for Mimic3, which is deprecated).

Whisper is a little more tricky. I have my own Python script that listens for paths to audio files on an incoming FIFO, transcribes them, then returns the text through another FIFO. Since it has to run as the user that installed all the Python modules into ~/.local, that script runs from cron at reboot time (the FIFOs connect to another PHP script acting as an API interface).
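The FIFO round-trip itself is simple enough to sketch in shell. This is a minimal stand-in with a canned reply in place of the real Whisper script; the paths and the reply text are invented purely for illustration:

```shell
# Minimal sketch of the FIFO round-trip: one FIFO carries the audio path
# in, a second carries the transcribed text back out.
set -eu
REQ=$(mktemp -u) RES=$(mktemp -u)
mkfifo "$REQ" "$RES"
# Stand-in worker: read one audio path, write back a canned "transcript".
( read -r path < "$REQ"; echo "transcript of $path" > "$RES" ) &
echo /tmp/sample.wav > "$REQ"       # client: submit a path...
read -r reply < "$RES"              # ...and collect the text back
rm -f "$REQ" "$RES"
echo "$reply"
```

Opening a FIFO for writing blocks until a reader appears (and vice versa), so the two sides rendezvous without any polling; the real worker just does this in a loop.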

Conclusion

So after all this, what is the one thing that I would say to nVidia if I could? Simple: Give us a modern operating system! Instead of forcing us to use one specific installation of one specific operating system, make the tools and libraries available either as open source or, at a pinch, as better-packaged binaries that can work with other flavours of Linux. Personally I would much prefer to run Arch. Or if not Arch, then some version of Ubuntu that's not 2 years out of date. Something which is current, that is capable of easily installing and running current versions of software. Maybe tracking the latest versions of Ubuntu, not just the LTS-before-last.

BTW, LTS ("Long Term Support") does not mean "will be a current OS for the next 5 years", only that "it is going to get security fixes for the next 5 years". You're still going to be stuck running software that is years out of date. AI is fast moving. AI requires bleeding edge software. Bleeding edge software requires an up to date OS to run properly on - unless you're willing to use containers (ugh!) or manually compile.
