Option to use the server-side implementation for creating instant-mix playlists

Perfect, thanks for sharing!

Hi All,
I’m NeptuneHub from AudioMuse-AI, and I’ll be very happy if you want to integrate AudioMuse functionality into Symfonium.

At the moment there is a standalone container (which contains all the logic) that can be installed from here:

It has its own mini integrated front-end that exposes only the sonic functionality (so no music playback at all).

Because we all know it’s best to have everything usable in one front-end, I also developed the Jellyfin plugin (of course it works on top of the core container; installing the container is mandatory), which is this one:

For song InstantMix it overrides the current Jellyfin implementation, BUT it also provides an API that you can call from Symfonium and integrate into your logic:

  • /AudioMuseAI/similar_track

Today I also mapped in the plugin the API to create a song path (a call sketch for both endpoints follows the list):

  • /AudioMuseAI/find_path
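
To give an idea of how a client like Symfonium could consume these, here is a minimal Python sketch of both calls going through Jellyfin. The query parameter names are assumptions for illustration only; the exact parameters are documented in the plugin README.md.

```python
# Minimal sketch: calling the AudioMuse-AI plugin endpoints through Jellyfin.
# The query parameter names below are assumptions for illustration;
# check the plugin README.md for the real ones.
import requests

JELLYFIN_URL = "http://jellyfin.local:8096"  # placeholder server address
HEADERS = {"X-Emby-Token": "your-jellyfin-api-token"}

# Tracks sonically similar to a given item (hypothetical parameters).
similar = requests.get(
    f"{JELLYFIN_URL}/AudioMuseAI/similar_track",
    headers=HEADERS,
    params={"item_id": "SOME_ITEM_ID", "n": 25},
)
print(similar.json())

# A sonic "path" from one song to another (hypothetical parameters).
path = requests.get(
    f"{JELLYFIN_URL}/AudioMuseAI/find_path",
    headers=HEADERS,
    params={"start_id": "ITEM_A", "end_id": "ITEM_B"},
)
print(path.json())
```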

If you look at the README.md of the plugin repo you can find example calls. You can also just install the AudioMuse-AI core container and play around with the integrated front-end to see what the results look like (AudioMuse-AI doesn’t play music directly, it creates playlists, but it’s good for getting an idea of the final result).

My vision is to bring sonic analysis, open and free, to as many people as possible. And I really think that having AudioMuse integrated in the front-end, making its use direct and easy, is the way.

Let me know if you have any questions, issues, or suggestions.

Currently quite busy; do you by any luck have a public server where you could create a test account with everything already working, to save a lot of time?

Unluckily, no. For a decent demo server around 200 albums are needed, and I don’t know how to deal with copyright (the songs you usually buy are for private use, not for public streaming).

I was searching for Creative Commons songs to put in a demo server (which would also be nice for training new TensorFlow models), but I haven’t found anything good so far.

I can only say that the results are nice; the songs are really sonically analyzed with advanced libraries (Librosa, also used by Spotify) and TensorFlow. I followed university-grade publications, not just playing around with home-made scripts. There is for sure room for improvement, but at this point I think that having it integrated in the major music front-ends is the best way to reach more users and gain more feedback to improve.

The thing is: do you want sonic analysis open and free? This is the right call.

You can even contribute to the algorithm or to the plugin if you like, because it’s all open source.

This is the right call to have sonic analysis for everyone, with no account needed: privacy-first and self-hostable.

I can provide access to mine privately if you are interested

Yes, if it has the updated AudioMuse with the path stuff.

Still half on holidays, so I will need the account for a couple of weeks.

Sent the credentials through the app support email

Thanks, I can connect; will do some tests next week.

Please just remember to update to the latest version to have the path API.

AudioMuse-AI needs to be at 0.6.5-beta.

The latest AudioMuse-AI-Plugin versions are 0.1.15-beta on Jellyfin 10.10, or 0.1.16-beta if you have the Jellyfin 10.11 RC.

Thanks a lot for your help on this!

It’s fine, I made sure the new endpoints were working in both the Jellyfin and AudioMuse UIs before I shared the credentials with him.

By the way, AudioMuse-AI also supports Navidrome (so the Subsonic API). The only “problem” is that I don’t know if it’s possible to create a plugin for it too, so you could have a unique endpoint for Navidrome as well. Any suggestions are appreciated.

Also, if you support any additional media server with a “good amount of users” that has an API to talk to, I’ll be happy to integrate it into AudioMuse-AI too.

Navidrome now has a plugin system, but the solution is:

I’ve federated most of the OpenSubsonic servers and clients to improve the API, so that would be the place to propose an extension for sonic analysis, so that any server could expose the data.

Then any server can add support for AudioMuse. I’m personally using GitHub - epoupon/lms: Lightweight Music Server (access your self-hosted music using a web interface), which had something similar in the past and may be interested. Ping @itm

I’ll gladly test this with my 1M-track library if it ends up working with LMS. :slightly_smiling_face:

LMS, from what I understood, already supports the Subsonic API.

AudioMuse implemented the Subsonic API and it is tested with Navidrome. It actually uses these API endpoints (a call sketch follows the list):

  1. ${NAVIDROME_URL}/rest/stream.view

  2. ${NAVIDROME_URL}/rest/getAlbumList2.view

  3. ${NAVIDROME_URL}/rest/search3.view

  4. ${NAVIDROME_URL}/rest/updatePlaylist.view

  5. ${NAVIDROME_URL}/rest/createPlaylist.view

  6. ${NAVIDROME_URL}/rest/getPlaylists.view

  7. ${NAVIDROME_URL}/rest/deletePlaylist.view

  8. ${NAVIDROME_URL}/rest/getAlbum.view

  9. ${NAVIDROME_URL}/rest/getSong.view
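
As a concrete illustration, here is a minimal Python sketch of one of the calls above (search3.view) using standard Subsonic token authentication, where the token is md5(password + salt). The server address and credentials are placeholders.

```python
# Minimal sketch: a standard Subsonic API call (search3.view) against a
# Navidrome-style server, using Subsonic token authentication
# (t = md5(password + salt), s = salt).
import hashlib
import secrets

import requests

NAVIDROME_URL = "http://navidrome.local:4533"  # placeholder server address
USER = "demo"
PASSWORD = "demo-password"  # with LMS, put the Subsonic API token here instead

salt = secrets.token_hex(8)
token = hashlib.md5((PASSWORD + salt).encode()).hexdigest()

params = {
    "u": USER, "t": token, "s": salt,
    "v": "1.16.1",          # Subsonic API version
    "c": "audiomuse-test",  # arbitrary client identifier
    "f": "json",            # ask for a JSON response
    "query": "daft punk",   # search3 search term
}

resp = requests.get(f"{NAVIDROME_URL}/rest/search3.view", params=params)
print(resp.json()["subsonic-response"])
```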

If the implementation of this API is the same in Navidrome and LMS, you should be able to use the Navidrome deployment of AudioMuse-AI to connect to LMS. Maybe you can give it a try and report if you have issues?

I can also add it to the todo list of things to try!

EDIT: I deployed the latest available LMS Docker image (v3.69.0) on my K3S cluster, and I used it with AudioMuse-AI with no problem. I tested analysis, clustering, similar song, and song path, and they all worked. You just need to use the Navidrome deployment/docker-compose; the only caveat is that you need to provide the USER and a SUBSONIC API token (instead of a password).

Of course, for now you have to use the AudioMuse-AI front-end to interact with it. It would be nice if LMS/Symfonium integrated the AudioMuse-AI API to directly offer an InstantMix or Song Path button in the front-end.

What’s needed is the OpenAPI spec to expose the new endpoints, then have Navidrome and LMS implement them so that any client can use this solution.

Good idea, I just opened a discussion for the new API implementation where I provided all the useful information I have off the top of my head:

@neptunehub It seems the /Plugins endpoint for Jellyfin is not accessible to normal user accounts, so I’ll need another way to know the AudioMuse plugin version.

The health endpoint returns no data.

The config endpoint does not return the version.

And important: the config endpoint leaks the Gemini apiKey and the Jellyfin token to any normal user account. This is probably a very bad idea, and they should probably be redacted or removed.

Anyway, about the first issue: can you expose an endpoint to get the plugin version, to easily see that it’s up to date and know the available API endpoints?

Yes, I’ll work on fixing the config API (it was created for easy testing on the local network, but of course this now needs to be avoided) and I’ll work on a healthcheck API that shows the version and the available APIs.

For the healthcheck API, do you have any specific requirements/format, or even a “well done API” from another plugin that you want to point me to?

Thanks for highlighting all these points to me!

You can just add an info endpoint like Jellyfin’s.

Just need a way to know the plugin version with a proper versioning scheme.

Hi,
I just updated the AudioMuse-AI-Plugin for Jellyfin. Version 1.18-beta is for Jellyfin 10.10 and 1.19-beta for Jellyfin 10.11.

The /info API was added; it returns the version of the plugin and the list of entry points. I added an example in the plugin README.md documentation.
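
For reference, here is a minimal Python sketch of a client-side check against the new endpoint. The /AudioMuseAI/info path and the "version"/"endpoints" field names are assumptions (guessing it is mounted under the same prefix as the other plugin routes); the README.md example shows the real shape.

```python
# Minimal sketch: query the plugin's /info endpoint and read the version.
# The /AudioMuseAI/info path and the "version"/"endpoints" field names are
# assumptions for illustration; see the plugin README.md for the real shape.
import requests

JELLYFIN_URL = "http://jellyfin.local:8096"  # placeholder server address
HEADERS = {"X-Emby-Token": "your-jellyfin-api-token"}

info = requests.get(f"{JELLYFIN_URL}/AudioMuseAI/info", headers=HEADERS).json()

print("plugin version:", info.get("version"))
print("available endpoints:", info.get("endpoints"))

# A client could gate features on this, e.g. only show the Song Path
# button when the reported endpoint list includes find_path.
```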

About the /config and /chat/config_defaults APIs: for the moment I commented them out (disabled them), because I don’t think they are needed for the plugin right now. When I have more time I’ll also fix them in the AudioMuse-AI container itself, but for the moment we limit the issue this way.

Also, in general only Jellyfin should be exposed to the internet; the AudioMuse-AI container itself should only be reachable on the local LAN, ideally behind a reverse proxy/Authentik, because it doesn’t have an authentication system of its own.