@Neptunehub has just released the plugin for Navidrome. It is still in the early stages and I have not tested it myself yet.
This feature request is for the integration of the plugin features into Symfonium, similar to how the jellyfin plugin already works.
Problem solved:
Makes AudioMuse available for Navidrome in Symfonium. Something many of us have been waiting for.
Brought benefits:
Smart queue track playback for Navidrome, and potentially more if the plugin later opens up other API features from AudioMuse (getSimilarArtists, etc.).
Other application solutions:
I don't think there are any at the moment.
Additional description and context:
Please see this post for further details on the plugin:
** I'm just making the feature request and take no credit for the actual development of the plugin. Hopefully NeptuneHub can assist with any questions you might have.
The Navidrome plugin still has to implement the endpoints used for the smartflows and queues that the Jellyfin plugin provides, so that ball isn't in Symfonium's court yet. I, for one, am looking forward to the plugin achieving parity so that I can get away from Jellyfin's awkward music handling.
I would add: Jellyfin's inferior way of handling music.
I reverted to Navidrome after the plugin release. Mind you, I still have a Jellyfin / AudioMuse container parked on my server in case I want some smart queues.
The ball is actually in OpenSubsonic's court. Symfonium talks to Navidrome via the OpenSubsonic API. Until the API is defined, there's no point in implementing more AM integration, since clients won't be able to use it.
You have not answered there about that part: we can solve the getSimilarSong stuff, but there's no point in me building the specs if no server is willing to implement them and validate what it wants to support.
The API is relatively clear, as it already exists in Plex and Jellyfin. But I don't know how you intend to build that on your side.
On the Jellyfin side, there's actually no Jellyfin API for that; there's a plugin API where I can query whether AudioMuse is installed, and the plugin adds endpoints inside Jellyfin that I can call.
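The probe-then-call pattern described above can be sketched roughly like this. This is a minimal illustration, not the real Jellyfin or AudioMuse API: the plugin-list shape and the `name` field are assumptions for the example.

```python
# Hedged sketch: the client first asks the server which plugins are
# installed, then only calls plugin-provided endpoints when an
# AudioMuse-like plugin is actually present. The data shape is hypothetical.

def audiomuse_available(installed_plugins):
    """Return True if an AudioMuse-like plugin appears in the server's plugin list."""
    return any(
        p.get("name", "").lower().startswith("audiomuse")
        for p in installed_plugins
    )

# Example: a server reporting the plugin installed vs. a server without it.
plugins = [{"name": "AudioMuse-AI", "version": "0.1.0"}]
print(audiomuse_available(plugins))  # True
print(audiomuse_available([]))       # False
```

A client would gate every smart-queue call on a check like this and silently fall back to its normal behavior when the probe fails.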
I think a proper OpenSubsonic API is a better fit for all the other servers, but an eventual solution would be to just add an endpoint that exposes the plugins installed on a server, since you now have the concept of plugins (though I don't know the scope of what they can do).
Catch-22: there's no point in me planning the implementation if I don't know the spec.
Not sure what you mean by "how you intend to build". Once we have the spec, I'll implement it.
If you want to know the details, it is simple: By default the endpoints will return not_implemented or empty responses, as I don’t plan to add this functionality directly in Navidrome. I’ll introduce the plugin extension point for AudioMuse-AI (and others) to provide the service.
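The extension-point idea described above can be sketched as follows. This is a hedged illustration of the pattern, not Navidrome's actual implementation; the class and response fields are invented for the example.

```python
# Minimal sketch of an extension point: by default the server answers with a
# not_implemented / empty response, and an installed plugin (e.g. AudioMuse-AI)
# can register a handler that takes over. All names here are hypothetical.

class ExtensionPoint:
    def __init__(self):
        self._handler = None

    def register(self, handler):
        # Called by a plugin at startup to provide the service.
        self._handler = handler

    def handle(self, request):
        if self._handler is None:
            # No plugin installed: the server itself does not implement this.
            return {"status": "not_implemented", "items": []}
        return self._handler(request)

smart_queue = ExtensionPoint()
print(smart_queue.handle({"seed": "track-1"}))  # default: not_implemented, empty

smart_queue.register(lambda req: {"status": "ok", "items": ["t2", "t3"]})
print(smart_queue.handle({"seed": "track-1"}))  # now answered by the plugin
```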
I completely agree with that. I plan to allow plugins to add new endpoints as well, but I think this should be specified in the OpenSubsonic API.
Same as transcoding, where I waited for LMS to work on it and polish the proposal toward something that works best for servers.
I have the client-side API from other servers, and it's mostly described in the OP post, but that doesn't mean it's what servers prefer to implement.
This is even more true if you intend to delegate to plugins, which may require additional data or return less data in the answers, e.g. size constraints in the dialogue between Navidrome and the plugins, or whatever else might apply.
It would probably be better not to publish the OS extension if no plugin is installed to handle it. That avoids network calls for no reason.
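That suggestion could look something like this: the server only advertises the OpenSubsonic extensions that actually have a handler behind them, so clients never issue calls that would only get empty answers. Extension names and the handler map are illustrative assumptions, not Navidrome's real API.

```python
# Hedged sketch: filter the advertised extension list down to the entries
# that have a registered handler (e.g. one provided by an installed plugin).

handlers = {
    "songLyrics": lambda req: {"lyrics": "..."},  # has a handler
    "smartQueue": None,                           # no plugin installed for this
}

def advertised_extensions(handlers):
    """Return only the extension names that have a registered handler."""
    return sorted(name for name, h in handlers.items() if h is not None)

print(advertised_extensions(handlers))  # ['songLyrics']
```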