Testing SMB Source: crashes when encountering # in folder or filename

Issue description:

Thought I’d put the new SMB source to the test with my large collection. It crashes and restarts. I took a look at the logs, and whenever it encounters a folder or file name that contains a #, it spams error messages like the one below and eventually crashes. The path in the log always stops where the # would be.
Examples that lead to the error:

America's #1 Recording Artists
06 Song #1.flac
90% Blues 10% Funky 101% Pure Morblus
10 Resolution #1.flac
Cadillac Jack's #1 Son
06 Unfinished Song #57.flac
11 Down on the Moon #2.flac
06 Brazilian Suite #3.flac
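
For what it’s worth, the truncation pattern looks like what happens when a raw path goes through a URI parser: the # starts the fragment, so everything after it drops out of the path. This is just my guess, illustrated with plain java.net.URI, nothing from the app itself:

import java.net.URI

fun main() {
    // If the raw string is parsed as a URI, '#' is treated as the start of the
    // fragment, so the path is cut off exactly where the '#' sits.
    // (Spaces are pre-escaped here only so the single-string constructor
    // accepts the input.)
    val raw = URI("smb://192.168.0.5/Music/Artist/06%20Song%20#1.flac")
    println(raw.path)      // /Music/Artist/06 Song      <- "#1.flac" is gone
    println(raw.fragment)  // 1.flac

    // Building the URI from components percent-encodes the '#', so the
    // full file name survives as part of the path.
    val safe = URI("smb", "192.168.0.5", "/Music/Artist/06 Song #1.flac", null)
    println(safe.path)            // /Music/Artist/06 Song #1.flac
    println(safe.toASCIIString()) // smb://192.168.0.5/Music/Artist/06%20Song%20%231.flac
}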

Logs:

2023-10-18 04:48:10.831 Error/SmbStreamDataReader: Error opening file
yo.c0: STATUS_OBJECT_NAME_NOT_FOUND (0xc0000034): Create failed for \\192.168.0.5\Music\Johnson, Freedy\The Trouble Tree\11 Down on the Moon 
	at yp.l.b(Unknown Source:72)
	at yp.f.e(Unknown Source:46)
	at yp.d.a(Unknown Source:26)
	at yp.d.i(Unknown Source:5)
	at ho.j.f(Unknown Source:0)
	at nm.w3.f(Unknown Source:4)
	at yp.f.q(Unknown Source:33)
	at yp.f.g(Unknown Source:14)
	at cc.b.a(Unknown Source:9)
	at cc.b.seek(Unknown Source:0)
	at com.genimee.ktaglib.KTaglib.getMetadataFromStreamReader(Native Method)
	at com.genimee.ktaglib.KTaglib.c(Unknown Source:25)
	at ic.v.e(Unknown Source:1452)
	at ic.v.d(Unknown Source:242)
	at ic.v.c(Unknown Source:182)
	at ic.e.v(Unknown Source:363)
	at ar.a.o(Unknown Source:5)
	at xr.h0.run(Unknown Source:109)
	at qm.l.run(Unknown Source:26)
	at ds.i.run(Unknown Source:2)
	at ds.a.run(Unknown Source:91)

Additional information:

I assume you can easily repro the behaviour if you create a few files or folders containing #.
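
In case it helps, a throwaway snippet like this is enough to seed a test share with the problematic names (the paths are made up, any dummy files will do):

import java.nio.file.Files
import java.nio.file.Paths

fun main() {
    // Hypothetical repro helper: create a folder and a few empty files whose
    // names contain '#', somewhere the SMB share can see them.
    val root = Paths.get("/mnt/music-test/America's #1 Recording Artists")
    Files.createDirectories(root)
    listOf("06 Song #1.flac", "10 Resolution #1.flac", "06 Brazilian Suite #3.flac")
        .forEach { name -> Files.createFile(root.resolve(name)) }
}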

What SMB server?

And I need the “crashes eventually” part even more :slight_smile:

Oh, sorry. Forgot to mention.
SMB share is from a TrueNAS Core 13.0-U5.1 server, I think the highest SMB version they currently support is Samba 4.17.

These are the settings used for the particular share (screenshot attached).

And these are the global SMB settings on the server (screenshot attached).


I can try disabling SMB 1.0 support, as I don’t think I’m using that anymore (the stupid Nvidia Shield didn’t work without it), but I doubt that’s the culprit.

You’ll have to find it then. I started debug logging while the scan was running and stopped it after the program had crashed/closed and I had started it again (the scan also immediately restarted).
debug.log (1.3 MB)

Ok, so the crash is a native one, it’s not logged :frowning: And it’s not reported to Crashlytics either.

Maybe it’s a specific file that triggers it; can you try to find out in which folder it crashes?

Otherwise I’ll need a crash dump (https://support.getkeepsafe.com/hc/fr/articles/360033924731-Comment-obtenir-des-journaux-de-crash-sur-Android-), but don’t share it here, send it by mail as it might contain unwanted data.

I’ve recreated the source and started a scan from scratch with debug logging.
The error every time a folder or file contains a # persists, but it really starts to crash at the point I’ve extracted from the log. It doesn’t tell me which specific file, though, if I’m reading it correctly.

Here’s an excerpt from when it starts to go completely off the rails.
debug.log (245.3 KB)

I’ll see if I can create one.

Ok, thanks, that’s something to fix too, but it should not trigger a crash :frowning:

For the #, it’s fixed in B2. Now we need to find this one.

Re-testing with B2, and so far it’s not crashing (at 11,500 files right now). With B1 it usually crashed around the 4,000-5,000 mark, so at least a single file being the culprit seems unlikely. I’ll report back once (if) it finishes scanning in a few hours, or more likely tomorrow.

It’s possible that you OOM with your number of files :slight_smile:

But also, since the folders with # are now scanned, it’s possible a lot more media gets scanned before reaching the bad file.

If it crashes for something other than OOM, I’ll add logging of each processed file to find the issue.

Well, it crashed again after some hours without any indication as to why in the log.

I don’t think the few # folders and files account for the difference between crashing at 4k files and crashing at 130k-ish files (that was the last number I saw before the crash).

I’ll try again when the next beta version is around. Hopefully by then someone with a more reasonable library size has encountered and reported the reason for the crashing. Testing with my library takes too long.

BTW, beta 3 now keeps the tag cache even when it crashes. So if you still have some motivation you can try again; if it’s a file issue it should be easy to pinpoint now.
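
The idea, roughly (hypothetical names, not the app’s actual code): tags are persisted as soon as each file is parsed, so after a crash the scan skips everything already cached and the first file it touches again is the prime suspect.

// Rough sketch of a crash-surviving tag cache, with hypothetical types;
// the real scanner obviously looks different.
interface TagCache {
    fun contains(path: String): Boolean
    fun put(path: String, tags: Map<String, String>)   // persisted immediately
}

fun scan(paths: List<String>, cache: TagCache, parse: (String) -> Map<String, String>) {
    for (path in paths) {
        if (cache.contains(path)) continue   // already done before the crash
        // If the process dies here, 'path' is the first uncached entry on the
        // next run, i.e. the prime suspect.
        cache.put(path, parse(path))
    }
}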

Good to know, I’ll let it run when I go to bed. Let’s see how far it manages to go.

Forgot to mention it earlier, but now it really does not like my Samba share.
It got stuck at 963 files with loads of:

2023-10-24 19:34:18.266 Error/SmbStreamDataReader: Error reading file
xp.a: f has already been closed
	at fq.l.c(Unknown Source:40)
	at fq.g.b0(Unknown Source:30)
	at hc.b.read(Unknown Source:13)
	at com.genimee.ktaglib.KTaglib.getMetadataFromStreamReader(Native Method)
	at com.genimee.ktaglib.KTaglib.c(Unknown Source:23)
	at nc.v.d(Unknown Source:1598)
	at nc.v.c(Unknown Source:250)
	at nc.v.b(Unknown Source:181)
	at nc.e.v(Unknown Source:353)
	at ir.a.o(Unknown Source:5)
	at fs.i0.run(Unknown Source:109)
	at tm.u6.run(Unknown Source:27)
	at ls.i.run(Unknown Source:2)
	at ls.a.run(Unknown Source:91)

errors.

The only interesting errors were:

2023-10-24 19:34:06.015 Verbose/Logger: Previous exit reason: 2/300 - 9 [2023-10-24 05:52:05.569]
2023-10-24 19:34:18.276 Error/TagParserWorker: No extracted tags for smb://192.168.0.5/Music/M1/Bondage Fruit/IV/06 Old Blind Cat.flac (audio/flac - flac)

Which is weird since that file clearly does have tags. Both the dBpoweramp Edit ID tag editor and Mp3tag’s extended tags view show them (screenshots attached).

Edit:

It eventually got past 963 files; when I checked it was at 1033. However, at that speed (let’s say 2,000 files per day) it would literally take more than a year for the scan to finish, which is why I aborted it.

I need some more logs :slight_smile:

What happens after the “Error reading file”? It should recover.

If you have the “No extracted tags” without the “Error reading file” before it, then I’ll need one of those files.
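
By “recover” I mean roughly this (hypothetical helper names, not the real client code): when a read fails because the handle was closed, reopen the remote file and retry before giving up.

import java.io.IOException

// Rough sketch of the expected recovery with hypothetical helpers: if a read
// fails because the handle was closed (e.g. the connection dropped), reopen
// the remote file and retry once.
interface RemoteFile {
    fun read(buffer: ByteArray, offset: Long): Int
}

fun readWithRecovery(
    open: () -> RemoteFile,       // reopens the file on the share
    buffer: ByteArray,
    offset: Long,
): Int {
    var file = open()
    return try {
        file.read(buffer, offset)
    } catch (e: IOException) {    // "has already been closed" and friends
        file = open()             // reconnect / reopen the handle
        file.read(buffer, offset) // retry once; rethrow if it fails again
    }
}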

debug.log (30.8 KB)

That’s the entire log of half a minute of the scan being stuck.

Haven’t seen that happen yet.

Ok, so then it’s normal that there are no tags if it does not recover the connection.

Thanks for the details, I’ll try to repro to figure it out.

Edit: One last question, does the provider still show as connected in the filter bottom sheet?

It does. It still says scanning, with the file count it’s currently on beside it.
I can recreate the media source, let it scan for 5 minutes, and leave logging on the entire time if that helps. The log shouldn’t get super huge in that timeframe.

Why not, it would help if there was another error than “has already been closed”.

Interesting:

2023-10-24 20:28:40.002 Error/SMB: Error
fp.c0: STATUS_OBJECT_NAME_INVALID (0xc0000033): Create failed for \\192.168.0.5\Musik\M1\Mars, Chris\75� Less Fat
	at fq.l.b(Unknown Source:72)
	at fq.f.h(Unknown Source:46)
	at fq.d.a(Unknown Source:26)
	at fq.d.j(Unknown Source:5)
	at jo.a.h(Unknown Source:0)
	at com.google.android.gms.internal.measurement.z4.h(Unknown Source:4)
	at fq.f.p(Unknown Source:33)
	at fq.f.s(Unknown Source:31)
	at fq.f.n(Unknown Source:12)
	at lm.e.i1(Unknown Source:88)
	at lm.e.j1(Unknown Source:110)
	at lm.e.j1(Unknown Source:268)
	at lm.e.j1(Unknown Source:268)
	at lm.e.j1(Unknown Source:268)
	at cc.r.v(Unknown Source:77)
	at cc.r.k(Unknown Source:12)
	at nc.f.v(Unknown Source:88)
	at nc.f.k(Unknown Source:12)
	at cp.i.r0(Unknown Source:4)
	at cp.i.v(Unknown Source:9)
	at nc.g.b(Unknown Source:182)
	at nc.g.a(Unknown Source:8934)
	at cc.t.o(Unknown Source:139)
	at i9.h.v(Unknown Source:149)
	at ir.a.o(Unknown Source:5)
	at fs.i0.run(Unknown Source:109)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:644)
	at java.lang.Thread.run(Thread.java:1012)

Apparently it does not like % signs either. The folder is called: “75% Less Fat”
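
My guess (just a guess): the path gets percent-decoded somewhere, and a bare “%” that isn’t followed by two hex digits can’t be decoded, which would fit the mangled character in the log. A plain-JDK illustration, nothing app-specific:

import java.net.URLDecoder
import java.net.URLEncoder

fun main() {
    val name = "75% Less Fat"

    // Decoding a name that was never encoded chokes on the bare '%':
    // "% L" is not a valid percent escape.
    try {
        URLDecoder.decode(name, "UTF-8")
    } catch (e: IllegalArgumentException) {
        println("decode failed: ${e.message}")
    }

    // Encoding first makes the round trip safe ('%' becomes "%25").
    val encoded = URLEncoder.encode(name, "UTF-8")   // 75%25+Less+Fat
    println(URLDecoder.decode(encoded, "UTF-8"))     // 75% Less Fat
}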

Anyways here’s the complete log:
debug.log (670.9 KB)

Ok, thanks, should be fixed in the next release.

B4 is out, if you have time for one last test :slight_smile: