/t/ - Technology

Discussion of Technology




Hydrus Network General #4 Anonymous Board volunteer 04/16/2022 (Sat) 17:14:57 No. 8151
This is a thread for releases, bug reports, and other discussion for the hydrus network software. The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Advanced users can share tags and files anonymously through custom servers that any user may run. Everything is free, privacy is the first concern, and the source code is included with the release. Releases are available for Windows, Linux, and macOS. I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST. Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ . If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/
Is there a way to set the media viewer to use integer scaling (I think that's what it's called) rather than fitting the view to the window, so that hydrus chooses the highest zoom where all pixels are the same size and the whole image is still visible? My understanding is that nearest neighbor is a lossless scaling algorithm when the rendered view size is a multiple of the original; otherwise you get a bunch of jagged edges from the pixels being duplicated unevenly. It looks like Hydrus only has options to use "normal zooms" (what you set manually in the options? I'm confused by this), always choosing 100% zoom, or scaling to canvas size regardless of whether that's a weird zoom level (like 181.79%) that causes nearest-neighbor to create jagged edges.
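For illustration, the integer scaling described here just picks the largest whole-number zoom at which the image still fits the canvas. A minimal sketch (not Hydrus's actual code):

```python
def integer_zoom(img_w, img_h, canvas_w, canvas_h):
    """Largest whole-number zoom at which the image still fits the canvas.

    Falls back to 1 (100%) when the image is already bigger than the
    canvas; a real viewer would shrink instead at that point. This is an
    illustration, not Hydrus's code.
    """
    zoom = min(canvas_w // img_w, canvas_h // img_h)
    return max(zoom, 1)

# A 500x281 image on a 1920x1080 canvas: min(1920//500, 1080//281) = 3
print(integer_zoom(500, 281, 1920, 1080))  # → 3
```

At 300% every source pixel becomes an exact 3x3 block, so nearest-neighbor stays jaggy-free.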
When I delete a file in Hydrus, how sure can I be that it is COMPLETELY gone? Are there any remnants left behind?
>>8156 yeah all the metadata for the file (tags and urls and such) are still there. There isn't currently a way to remove that stuff.
>>8154 Yeah, under options->media, and the filetype handling list, on the edit dialog is 'only permit half and double zooms'. That locks you to 50%, 100%, 200%, 400% etc... It works ok for static gifs and some pngs, if you have a ton of pixel art, but I have never really liked it myself. Set the 'scale to the canvas size' options to 'scale to the largest regular zoom that fits', I think that'll work with the 50/100/200/400 too. Let me know if it doesn't.

>>8156 >>8157 Once the file is out of your trash, it will be sent to your OS's recycle bin, unless you have set in options->files and trash to permanently delete instead. Its thumbnail is permanently deleted. In terms of the file itself, it is completely gone from hydrus and you are then left with the normal issues of deleting files permanently from a disk. If you really need to remove traces of it from the drive, you'll need a special program that repeatedly shreds your empty disk sectors.

In terms of metadata, hydrus keeps all other metadata it knows about the file: the file's hash (basically its name), its resolution, filesize, a perceptual hash that summarises how it looked, tags it has, ratings you gave it, URLs it knows the file is at, and when it was deleted. It may have had some of this information before it was imported (e.g. its hash and tags on the PTR) if you sync with the public tag repository. Someone who accessed your database and knew how hydrus worked would probably be able to reconstruct that you once imported this file. There are no simple ways to tell the client 'forget everything you ever knew about this file' yet. Hydrus keeps metadata because that is useful in many situations. Deletion records, for instance, help the downloader know not to re-import something you previously deleted.
That said, I am working on a system that will be able to purge file knowledge on command, and other related database-wide cleanup of now-useless definition records, but it will take time to complete. There are hundreds of tables in the database that may refer to certain definitions. If you are concerned about your privacy (and everyone should be!), I strongly recommend putting your hydrus database inside an encrypted container, like with veracrypt or ciphershed or similar software. If you are new to the topic, do some searching around on how it works and try some experiments.

If you are very desperate to hide that you once had a file, I can show you a basic hack to obscure it using SQLite. Basically, if you know the file's hash, you go into your install_dir/db folder, run the sqlite3 executable, and then do this: (MAKE A BACKUP FIRST IN CASE THIS GOES WRONG)

.open client.master.db
update hashes set hash = x'0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef' where hash = x'06b7e099fde058f96e5575f2ecbcf53feeb036aeb0f86a99a6daf8f4ba70b799';
.exit

That first hash, "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef", should be 64 characters of random hex. The second should be the hash of the file you want to obscure. This isn't perfect, but it is a good method if you are desperate.
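For anyone more comfortable in Python, the same hack can be run through the standard sqlite3 module. This is a sketch of the SQL above (the old hash is the same hypothetical example), and as before, MAKE A BACKUP of client.master.db first:

```python
import os
import sqlite3

def obscure_hash(db_path, old_hash_hex):
    """Replace a file's hash in client.master.db with random bytes.

    Same hack as the sqlite3-shell version -- BACK UP THE DB FIRST.
    db_path should point at install_dir/db/client.master.db, and
    old_hash_hex is the 64-char hex hash of the file to obscure.
    """
    new_hash = os.urandom(32)  # 32 random bytes = 64 hex characters
    con = sqlite3.connect(db_path)
    con.execute('UPDATE hashes SET hash = ? WHERE hash = ?',
                (new_hash, bytes.fromhex(old_hash_hex)))
    con.commit()
    con.close()
    return new_hash.hex()
```

Run it only while the client is closed, since hydrus holds the database open.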
I just updated to the latest version, and there seems to be a serious (well, seriously annoying, but not dangerous) bug where frames/panels register mouse clicks as being higher up when you scroll down, as if you didn't scroll down. It's happening with the main tag search box drop down menu, and also in the tag edit window where tags are displayed and you can click on them to select them. I'm on Linux.
>>8159 Sorry, yeah, I messed something up last week doing some other code cleaning. I will fix it for next week and add a test to make sure it doesn't happen again. Sorry for the trouble. I guess I don't scroll and click much when I dev or use the client IRL.
>>8159 >on Linux I confirm that.
>>8159 I've got this problem on windows as well. Also, am I the only one experiencing extremely slow PTR uploads? Now instead of uploading 100 every 0.1 seconds, it is more like 1-4 every 0.1s
Apologies if the answer is already somewhere on the /hydrus/ board; I haven't been able to find it yet. I'm wondering how to make hydrus download pictures from 8chan (using hydrus companion) when direct access results in a 404? I assumed some fuckery with cookies, but sending the cookies from 8chan through hydrus companion to the hydrus client seemingly made no difference.
>>8166 afaik there's no way to import directly from urls of "protected" boards, but I'd love to be proven wrong.
>>>/hydrus/17585
>Is there a way to automatically add a file's filename to the "notes" of a Hydrus file when importing? Some of the files have date info or window information if they are screenshots and I'd like to store that information somehow. If not, is there some other way to store the filenames so that they can be easily accessible after importing?
>>>/hydrus/17586
>>notes
>I think notes are for when there's a region of an image that gets a label (think gelbooru translations), it's not the best thing for your usecase. The best way would be to have them under a "filename" namespace.
I'm not either of these people, but a filename namespace is useless if the filename cares about case. Hydrus will just turn it all into lowercase. In those scenarios I've had to manually add the filename to the notes for each one... painful. Also, somewhat related: hydrus strips the key from mega.nz urls, so I have to manually add those to notes as well. More pain.
>>8166 Have you tried giving hydrus your user-agent http header as well as the cookies?
>>8174 >Have you tried giving hydrus your user-agent http header as well as the cookies? No I haven't, however I'm still quite inexperienced when it comes to using hydrus so I don't really know how I'd be able to do that. Using the basic features of hydrus companion is pretty much as far as my skillset goes atm. Would you please kindly explain how I might do what you had described?
Trying to add page tags to my imported files is turning out to be an even bigger headache than I expected. The page namespace doesn't specify what it is a page of, so you can end up with multiple contradictory page tags. For example, an artist uploads sets of 1-3 images frequently to his preferred site, but posts larger bundles less frequently to another site. Or he posts a few pages at a time of a manga in progress, and when it's finished he aggregates all the pages in a single post for our convenience. Either way, you can end up with images that have two different page tags, both of which are technically correct for a given context, but the tags themselves don't contain enough information to tell which context they're correct in. If I wanted to be really thorough, I could make a separate namespace for each context a page can exist in, but then I'd be creating an even bigger headache for myself whenever I want to sort by pages. The best I can imagine would be some kind of nested tag system, so you can specify the tags "work:X" and "page:Y(of work:X)", and then sort by "work-page(of work)". As an added bonus, it would make navigation a lot smoother in a lot of contexts. For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter.
>>8183 Hydrus sucks at organizing files that are meant to be a sequential series. This has been a known problem for a long time unfortunately.
>>8183
>For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter.
You may use kinda nested namespaces:
1 - namespace:whatever soap opera you want (to identify the group)
2 - namespace:chapter 1 (to identify the sub-group)
3 - namespace:chapter 1 - page 01 (to identify the order)
So searching for "whatever soap opera you want" will bring you all related files, then add the chapter to narrow the files, and then sort those files by namespace number. Done.
>>8190 >So searching for "whatever soap opera you want" will bring you all related files, then add the chapter to narrow the files, and then sort those files by namespace number. At that point you're basically navigating folders in a file explorer, just more clumsy. That's exactly what I was trying to get away from when I installed hydrus.
I had a great week of simple work. I fixed some bugs--including the scrolled taglist selection issue--and improved some quality of life. The release should be as normal tomorrow.
>>8192
>At that point you're basically navigating folders in a file explorer
What are you talking about? In Hydrus all files are in a centralized directory and searched with a database. I understand the hassle of tagging manually, but no software is clairvoyant and reads your mind about what exactly you are searching for.
>>8183 if ordered sets are important to you, installing danbooru is an option; they do put their source up on github. Last I tried it, it was a pain in the ass to get working, but I did eventually get it. Though it did lack a number of hydrus features I've gotten used to.
>>8183 Hydrus works off of individual files. You can adapt it to multi-file works, but the more robust a solution you need, the more you'll butt up against Hydrus' core design. The current idiomatic solution of generic series, title, chapter, page, etc. namespaces works for 90% of things (with another 9% being workable by ignoring all but one context), but if you need a many-to-many relationship, the best you can do is probably use bespoke namespaces for each collection (e.g. "index of X:1", "index of Y:2") and then use the custom namespace sort to view the files in whatever context you've defined. I guess an ease-of-use improvement would be an entry in the tag context menu to sort by namespace. That way you wouldn't need to type it out every time.
>>8197 >That way you wouldn't need to type it out every time. In the future drag and drop tags may be the solution.
I want to remove the ptr from my database. Is there a way to use the tag migration feature to migrate tag relationships only for tags used in my files? You can do it with the actual tags, but I don't see an option to do something similar for relationships, and I'd rather not migrate over thousands of parents/children and siblings for tags I'll never see.
>>8166 Looks like you need to send the referral URL with your request. The 8chan.moe thread downloader that comes with hydrus already takes care of that, so I assume you're trying to download individual files or something? I think the proper thing here would be for the hydrus companion to attach the thread you found the image in as the referral URL, but I'm not sure if the hydrus API even supports that at the moment. So failing that, you can give 8chan.moe files an URL class and force hydrus to use https://8chan.moe/ as the referral URL for them when no other referral URL is provided. Hopefully this won't get you banned or anything.
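A quick sketch of what that referral URL does at the HTTP level. The file path below is made up for illustration; this just builds the request that a URL-class referral rule would effectively produce:

```python
import urllib.request

# The board 404s direct file requests that arrive without a Referer
# header. Hydrus's URL class machinery fills this in automatically;
# doing it by hand looks like this (hypothetical file path):
req = urllib.request.Request(
    'https://8chan.moe/t/src/0123456789abcdef.png',
    headers={'Referer': 'https://8chan.moe/'},
)
# urllib.request.urlopen(req) would then perform the actual download
```

The same idea applies to any site that checks where a file request came from.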
I hope collections will be expanded upon in the future. It's very nice to be able to group together images in a page, but often I want an overview of the individual images of a group. Right now I have to right click a group and pick open->in a new page, which is awkward. Here's a quick mock-up of how I'd like it to work. Basically, show all images, but visually group them together based on the selected namespaces.
>>8210 The png I posted contains the URL class. Just go to network > downloaders > import downloaders and drag and drop the image from >>8203
Any way to stop hydrus from running maintenance (in my case ptr processing) while it's downloading subscriptions? I think that should prevent maintenance mode from kicking in. It always happens when I start Hydrus and leave it to download subs, because I have idle set to 5 minutes. The downloads slow to a crawl because ptr processing is hogging the database. I could raise the time to idle, but I still want it that low once hydrus has finished downloading subs...
Is there any way to export notes, like files and tags? Something like:
File: test.jpg
Tags: test.jpg.txt
Notes: test.jpg.notes.txt
>>8219 I get the impression that notes are a WIP feature. Personally I'm hoping we'll get the option to make the content parser save stuff as notes soon.
>>8212 Bruh
Are there plans to add dns over https support to hydrus? Most browsers seem to have that feature now, so it'd be cool if hydrus did too.
How do I enable a web interface for my Hydrus installation, so others can use it via my external IP? I need something simple like hydrus.app, but unfortunately it refuses to work with my external IP and only accepts localhost, even though I enabled non-local access in the API, and entering my external IP in the browser opens the same API welcome page as localhost. Who runs that app, anyway? Where do I find support for it?
>>8209 Thanks. Yeah, this is exactly what I want to do too. I am in the midst of a long rewrite to clean up some bad decisions I made when first making the thumbnail grid, and as I go I am adding more selection and display tools. Once things are less tangled behind the scenes, I will be able to write a 'group by' system like this, both the data structure behind it and the new display code needed. Unfortunately it will take time, but I agree totally.

>>8216 There's no explicit way at the moment. I have generally been comfortable with both operations working at the same time, since I'm generally ok if subs run at, say, 50% speed. I designed subs to be a roughly background activity and don't mean for them to run as fast as possible. If your machine really struggles to do both at once though, maybe I can figure out a new option. I think your best shot in the meantime, since PTR processing only works in idle time but subs can run any time, is to tweak the other idle mode options. The mouse one might work, if you often watch your subs come in live, or the 'consider the system busy if CPU above' might work, as that stops PTR work from starting if x cores are busy. If you are tight on CPU time anyway, that could be a good test for other situations too. You can also just turn off idle PTR processing and control it manually with 'process now' in services->review services. I don't like suggesting this solution as it is a bit of a sledgehammer, but you might like to play with it.

>>8219 >>8220 Yeah, not yet, but more import/export options will come. If you know scripting, the Client API can grab them now:
https://hydrusnetwork.github.io/hydrus/client_api.html
https://hydrusnetwork.github.io/hydrus/developer_api.html
>>8223 For advanced technical stuff like that, I am limited by the libraries I use. My main 'go get stuff' network library is called 'requests', a very popular python library https://docs.python-requests.org/en/latest/ although for actual work I think it uses the core urllib3 python library https://pypi.org/project/urllib3/ . So my guess is when python supports it and we upgrade to that new version of python, this will happen naturally, or it will be a flag I can set. I searched a bit, and there might be a way to hack it in using an external library, but I am not sure how well that would work. I am not a super expert in this area. Is there a way of hacking this in at the system level? Can you tell your whole OS to do DNS lookups on https with the new protocol in the same way you can override which IP to use for DNS? If this is important to you, that might be a way to get all your software to work that way. If you discover a solution, please let me know, I would be interested. Otherwise, I think your best simple solution for now is to use a decent VPN. It isn't perfect, but it'll obscure your DNS lookups to smellyfeetbooru.org and similar from your ISP.
>>8232 The various web interfaces are all under active development right now. All are in testing phases, and I am still building out the Client API, so I can't promise there are any 'nice' solutions available right now. All the Client API tools are made by users. Many hang out on the discord, if you are comfortable going there. https://discord.gg/wPHPCUZ The best place to get support otherwise is probably on the gitlab/github/whatever sites the actual projects are hosted on, if they have issue trackers and so on. For Hydrus.app I think that's here: https://github.com/floogulinc/hydrus-web I'm not sure why your external IP access isn't working. If your friend can see the lady welcome page across the internet, they should be able to see the whole Client API and do anything else. Sometimes http vs https can be a problem here.
>>8233
>If you make a nice URL Class for Mega, I'd be interested in seeing it--it would probably be a good thing to add to the defaults, just on its own as an URL the client recognises out of the box.
Is it even possible to download mega links through hydrus? I've been using mega.py for automating mega downloads, and looking at the code for that, it seems quite a bit more complicated than just sending the right http request. https://github.com/odwyersoftware/mega.py/blob/master/src/mega/mega.py#L695 I'd love to be proven wrong, but it looks to me like this is a job for an external downloader. Speaking of which, any plans to let us configure a fallback option for URLs that hydrus can't be configured to handle directly? At the very least, I want to be able to save URLs for later processing.
>>8238 My problem is that some of the galleries I subscribe to might occasionally contain external links. For example, some artists uploading censored images, but also attaching a mega or google drive link containing the uncensored versions. I can easily set up the parser to look for these URLs in the message body and pursue them, but if hydrus itself doesn't know how to handle them, they get thrown out. Would be nice if these URLs could be stored in my inbox in some way, so I can check if I want to download them manually or paste them into some other program. Even after you implement a way to send the URL to an external program (which sounds great), it would be useful to see what URLs hydrus found but didn't know what to do with, so the user can know what URL classes they need to add.
>>8233 >For Mega URLs, try checking the 'keep fragment when normalising' checkbox in the URL Class dialog. That should keep the #blah bit. Oh wow, I never knew what that option did. Thanks! I made url classes. Note: one of the mega url formats (which I think is an older format) has no parameters at all, it's just "https://mega.nz/#blah". So if you just give it the url "https://mega.nz/" it will match that url. Kind of weird, but not really a huge issue. >>8184 I mean, that's not really particular to hydrus. It's true for almost any booru.
Hey, after exiting the duplicate filter I was greeted with two identical 'NoneType' object has no attribute 'GetHash' errors:

v482, linux, source
AttributeError
'NoneType' object has no attribute 'GetHash'
Traceback (most recent call last):
File "/opt/hydrus/hydrus/core/HydrusPubSub.py", line 138, in Process
callable( *args, **kwargs )
File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3555, in ProcessApplicationCommand
self._GoBack()
File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3120, in _GoBack
for hash in ( first_media.GetHash(), second_media.GetHash() ):
AttributeError: 'NoneType' object has no attribute 'GetHash'

I'm running the AUR version, if you need any more info let me know.
Could the downloading black/whitelist be adjusted to work on matching a search, rather than just specific tags? There are a lot of kinds of posts I'd rather not download, but most of the time they aren't simple enough to be accurately described with a single tag.
I was ill for the start of the week and am short on work time. Rather than put out a slim release, I will spend tomorrow doing some more normal work and put the release off a week. 483 should be on the 4th of May. Thanks everyone! >>8246 Sorry, I messed up some duplicate logic that will trigger on certain cases where it wants to back up a pair! This is fixed in 483 along with more duplicate filter code cleanup, please hang in there.
>>8260 Get well anon.
Is there an (easy) way to extract the data used to make the file history chart into a CSV? I'd like to play around with that data myself.
Minor bug report: hovering over tags while in the viewer and scrolling with the mouse wheel causes the viewer to move through files as if you were scrolling on the image itself. May be related to the bug from a few weeks ago.
I had a good couple of weeks. There are a variety of small fixes and quality of life improvements and the first version of 'multiple local file services' is ready for advanced users to test. The release should be as normal tomorrow.
>>8326 hello mr dev I just found out about this software and from reading the docs I have only this to say: based software based dev long live power users
Hey h dev, moving to a new OS soon. Whatever happened recently in hydrus made video more stable, so I can scrub through it now. I know I asked about this a while ago: having a progress bar permanently under the video as an option. I'm wondering if that ever got implemented, or if it's something you haven't gotten to yet? I run into quite a few 5 second gifs next to 3 minute long webms, and hovering the mouse over them covers a not insignificant amount of the video, at least enough that I have to move the mouse off just to move it back to scrub. Thanks in advance for any response.
Just want to confirm the solution for broken mpv from my half-sloppy debian install, as in this issue: https://github.com/hydrusnetwork/hydrus/issues/1130 As suggested, copying just the system libgmodule-2.0.so to the Hydrus directory helps, although the path may be different; I have such files at /usr/lib/x86_64-linux-gnu/
>>8333 sounds great, with this I will be able to have:
Inbox
Seen to parse
Parse nsfw
Parse sfw
Archive nsfw
Archive sfw
If I'm able to search across everything I get unfiltered results, but being able to refine down to specific groups beyond just a rating filter would be great.
>>8333 Does copying between local file services duplicate the file in the database?
Is it just me or is there a bug preventing files from being deleted in v483? I can send them to trash but trying to "physically delete" them doesn't work. Hitting delete with files selected does nothing, neither does right clicking and hitting "delete physically now".
>>8317 Not an easy way, but attached is the original code that a user made to draw something very similar in matplotlib. If you adjust this, you could pipe it to another format, or look through the SQL to see how to extract what you want manually. My code is a bit too complicated and interconnected to extract easily. The main call is here-- https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/db/ClientDB.py#L3098 --but there's a ton of advanced bullshit there that isn't easy to understand. If you have python experience, I'd recommend you run the program from source and then pipe the result of the help->show file history call to another location, here: https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/gui/ClientGUI.py#L2305 I am also expecting to expand this system. It is all hacked atm, but as it gets some polish, I expect it could go on the Client API like Mr Bones recently did. Would you be ok pulling things from the Client API, like this?: https://hydrusnetwork.github.io/hydrus/developer_api.html#manage_database_mr_bones
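In the meantime, once the raw points are in hand (however they get extracted), dumping them to CSV is only a few lines. The data and column names below are invented for the example:

```python
import csv

# Hypothetical (timestamp, file count) points, standing in for whatever
# gets piped out of the file history call when running from source.
history = [(1640995200, 10000), (1643673600, 11500), (1646092800, 13200)]

with open('file_history.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['timestamp', 'current_count'])
    writer.writerows(history)
```

The resulting file opens directly in any spreadsheet for playing with the data.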
>>8330 Awesome, thank you. I will update the help to reference this specifically.

>>8335 Yeah, I think my next step here is to make these sorts of operations easier. You can set up a 'search everything' right now by clicking 'multiple locations' in the file domain selector and then hitting every checkmark, but it should be simpler than that. ~Maybe~ even favourite domain unions, although that seems a bit over-engineered, so I'll only do it if people actually want it. Like I have 'all local files', which is all the files on your hard disk, I need one that is all your media domains in a nice union. Also want some shortcuts so people like you will be able to hit shift+n or whatever and send a file from inbox to your parse-nsfw domain super easy. As you get into this, please let me know what works well and badly for you. All the code seems generally good, just some stupid things like a logic problem when trying to open 'delete files' on trash, so now I just need to make the UI and workflow work well.

>>8340 No, it only needs one copy of the file in storage. But internally, in the database, it now has two file lists.

>>8356 Yes, sorry! Thank you for the report. This is just an accidental logic bug that is stopping some people from opening the dialog on trash--sorry for the trouble! I can reproduce it and will fix it. If you really want to delete from trash, the global 'clear trash' button on review services still works, and if you have the advanced file deletion dialog turned on, you can also short-circuit by hitting shift+delete to undelete and then deleting again and choosing 'permanently delete'.
First of all, thank you for all your hard work HydrusDev. I have a small feature request, now that we have multiple local services. For the Archive/Delete filter, there should be keyboard shortcuts for "Move/Copy to Service X" as well as "Move to Trash with reason X" and "Delete Permanently with reason X". The latter two would be nice because having to bring up the delete dialog every time is kind of clunky.
>>8361 >Is this feature to chase up links after SauceNao something on Hydrus Companion or similar? Yes, it is from Hydrus Companion, I forgot that it was a separate program since I started using it at the same time that I started using Hydrus. Now that I think about it though, just avoiding Pixiv probably isn't the best solution either, since there's plenty of content that can only be found on Pixiv. If there is a way to download the English translations of the tags, then that would mostly solve the issue, since I could then use parent/sibling tagging to align them with the other tags. I don't know how doable that would be though, so for now the best solution is probably to import a sibling tag file that changes all the Japanese pixiv tags to their English tags, assuming that someone has already made this.
>>8330 I was able to get it working by copying libmpv.so.1 and libcdio.so.18 from my old installation (still available on my old drive) to the hydrus installation folder.
I entered the duplicate filter, and after a certain point it wouldn't let me make decisions any more. I'd press the "same quality duplicate" button and it just did nothing. I exited the filter, then the client popped up a bunch of "list index out of range" errors. Here's the traceback for one of them:

v483, linux, frozen
IndexError
list index out of range
File "hydrus/client/gui/ClientGUIShortcuts.py", line 1223, in eventFilter
shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus/client/gui/ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3548, in ProcessApplicationCommand
self._MediaAreTheSame()
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3149, in _MediaAreTheSame
self._ProcessPair( HC.DUPLICATE_SAME_QUALITY )
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3259, in _ProcessPair
self._ShowNextPair()
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3454, in _ShowNextPair
self._ShowNextPair() # there are no useful decisions left in the queue, so let's reset
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3432, in _ShowNextPair
while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):

I reentered the duplicate filter, and I got through a few more pairs before it stopped letting me continue again. It seems like it was on the same file as last time too. Could this bug have corrupted my file relationships?
>>8359 >Python script That'll help a lot, thanks! >Would you be ok pulling things from the Client API, like this? Yeah, definitely.
>>8361 A 3 pixel tall scan bar... that honestly wouldn't be a bad option. My only concern would be its immediate visibility, and I'm not sure there is a good way to handle that. Would it be possible to have custom colors for it, both when it's small and when it's large? When it's large, that light grey with dark grey isn't a bad option, but when small it would kind of be a constantly moving needle in a haystack. If, for instance, I had the background of the smaller bar be black with a thin red strip, I would only see that red strip move. This may not be a great option for everyone, but I could see various different colors for higher contrast being a good thing, especially when it's 3 pixels big. Yeah, I think it's a great idea: it would make the video's length readily visible from the preview, and it would be so out of the way that nothing is massively covered up. If it's an option, would the size be user-settable? It's currently 60 pixels if my counting is right, but I could see something around 15 being something I could leave permanently visible. If it can't, it doesn't matter, but if it's possible to make it an option, I think this would be a fantastic middle ground till you give it a serious pass. Anyway, whatever you decide on will help no matter what path it is.
API "file_metadata" searches seem to be giving the wrong timestamp for the "time_modified" of some files, conflating it with the time imported. The client itself displays the correct modified time, regardless of whether the file was imported straight from the disk or whether the metadata of a previously imported file had its source time updated after being downloaded again from a booru. Querying the API's search_files method by "modified time" does give the correct file order (presumably because the list of IDs from the client is correct), but the timestamp in the metadata is still equal to the import time. For some reason, this doesn't always happen, but unfortunately I haven't been able to determine why.
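For anyone wanting to reproduce this, here is a rough sketch of how you might pull metadata from the Client API and flag the suspect entries. The endpoint path is from the Client API docs; the flat "time_modified"/"time_imported" fields are a simplifying assumption (in the real API, import times are nested per file service), so treat this as illustration of the check, not exact response shapes.

```python
import json
from urllib import parse, request

API_URL = "http://127.0.0.1:45869"   # default Client API address
ACCESS_KEY = "YOUR_ACCESS_KEY_HERE"  # placeholder

def fetch_metadata(file_ids):
    """Fetch metadata entries for the given file ids from the Client API."""
    qs = parse.urlencode({
        "file_ids": json.dumps(file_ids),
        "Hydrus-Client-API-Access-Key": ACCESS_KEY,
    })
    with request.urlopen(f"{API_URL}/get_files/file_metadata?{qs}") as resp:
        return json.load(resp)["metadata"]

def suspicious_entries(metadata):
    """Entries whose modified time exactly equals import time -- the
    symptom described above (a genuine match is possible but unlikely)."""
    return [m for m in metadata
            if m.get("time_modified") is not None
            and m.get("time_modified") == m.get("time_imported")]
```

Running `suspicious_entries` over a batch of files that you know have older source times should surface exactly the files the client and API disagree on.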
>>8367 This issue isn't just with the one pair now. It's happened with multiple pairs when trying to go through the filter. And it's not just happening when I mark them as same quality; it also happens when I mark them as alternates. I also noticed that when this bug happens, the number in the duplicate filter (the one that's like "13/56") jumps up a bunch.
I had an ok week. I fixed some bugs (including non-working trash delete, and an issue with the new duplicate filter queue auto-skipping badly), improved some quality of life, and integrated the new multi-service 'add/move file' commands into the shortcuts system and the media viewer. The release should be as normal tomorrow. >>8367 >>8396 Thank you for this report, and sorry for the trouble! Should be fixed tomorrow, please let me know if you still have any problems with it.
Are sorting/collection improvements on the to-do list? I sometimes have to manually sort video duplicates out and being able to collect by duration/resolution ratio and sort by duration and then by resolution ratio would be extremely helpful. Sorting pages by total filesize or by smallest/largest subpage could have some uses as well, but that might be too autistic for other users.
>>8409 >>8377 Nice, the scan bar is far more visible than I thought it would be. I think other colors might help legibility further, but for me it's just fine as is.
ok h dev, probably my last question for a while. I have so far parsed through about 5000-10000 "must be pixel dupes" and I have yet to find one where I decided 'let's keep the one with the larger file size'. I have decided, at least for exact dupes, I'm willing to trust the program's judgement. Is there any automation in the program for these yet? From what I can see, a few of my subscriptions are generating a hell of a lot of these, and even then, I had another 50000 to go through. If there were a way to just keep the smaller file and yeet the larger, with the same settings I have assigned to 'this is better', that would be amazing. I don't recall if anything like this has been added to hydrus yet. I would never trust it for any speculative match, as I constantly get dupes there that require hand parsing, but holy shit is it mind numbing to go through pixel dupes... scratch that, when I include all files, I have 325k must-be-pixel dupes (2 million something potential dupes, so this isn't a case of the program lagging behind on options).
Can't seem to do anything with these files. I can't delete them, and setting a job to remove missing or invalid files doesn't touch them. They don't have URLs so I can't easily redownload them either. What do?
>>8418 Note, they do have tags, sha256sums, and file IDs, but nothing else as far as I can tell. If I manage to redownload one by searching for each file manually based off the tags, it appears and can be deleted. Maybe I could do some sqlite magic and remove the records via the file IDs using the command line, but I don't know how.

The weird thing is how they appear in searches. They don't show up when I search only system:everything, but they do show up when searching for tags that the missing file is tagged with. I tried adding a dummy tag to all of my working files and searching with -dummy, and the missing files didn't show up. If I search some tag that matches a missing file and use -dummy, the missing files that are tagged with whatever other tag I used to search do show up. Luckily all of these files had a tag in common, so I can easily make a page with all of the missing files, 498 total.

I can open the tag editor for these, and adding tags works, but I cannot search for tags that only exist on missing files (I tried adding a 'missing file' tag, and can't search it). Nothing interesting in the logs, unless I try to access one, which either gives KeyError 101 or a generic missing file popup.

Hydev, if you're interested in a copy of my database folder, I could remove most of the large working files and upload a copy somewhere if you want to mess with it. I'm open to trying whatever you want me to if that's more convenient though.
Got this error after updating. (I definitely jumped multiple versions, not sure how many.) Manually checking my files, it seems that all of them are fine. It's just that hydrus can't seem to make sense of it for some reason...? FYI my files are on a separate HDD and my hydrus installation is on an SSD. Neither is on the same drive as my OS.
>>8363 Thanks. I agree. I figured out the move/add internal application commands for 484, so they are ready to be integrated. 'Delete with reason x' will need a bit of extra work, but I can figure it out, and then I will have a think about how to integrate it into archive/delete and what the edit UI of this sort of thing looks like. Ideally, although I doubt I will have time, it would be really nice to have multiple archive/delete filters. >>8364 Yeah, this sounds tricky. Although it is complex, I think your best bet might be to personally duplicate and then edit the redirection scripts or tag parsers involved here. You may be able to edit the hydrus pixiv parser to grab the english tags (I know we used to have this option as an alternate parser, but I guess it isn't available any more? maybe pixiv changed how this worked?), or change whatever is parsing SauceNao, although I guess that is part of Hydrus Companion. EDIT: Actually, if your only solid problem with pixiv is you don't want its japanese tags, hit up network->downloaders->manage default tag import options, scroll down to 'pixiv file page api' and 'pixiv manga_big page' and set specific defaults there that grab no tags. Any hydrus import page that has its tag import options set to 'use the current defaults' will then default to those, and not grab any tags. >>8366 Thank you! >>8376 Thanks. I'll make a job to expose this data on the Client API.
>>8377 >>8413 I'm glad. I am enjoying it too in my IRL use. I thought it would be super annoying, but after a bit of use, it just blends into my view and is almost unconsciously useful. Just FYI: The options are an ugly debug/prototype, but you can edit the scanbar colours now. Hit up install_dir/static/qss and duplicate 'default_hydrus.qss'. Then edit your duplicate so the 'qproperty' values under HydrusAnimationBar have different hex colour values. Load up the client, switch your style to your duplicated qss file, and the scanbar should change colour! If you already use a QSS style, then you'll want to copy the custom HydrusAnimationBar section to a duplicate of the QSS style file you use and edit that.

>>8379 Thank you, I will investigate this. I was actually going to try exposing all the modified timestamps on the Client API and the client, not just the aggregate value, so I will do this too, and that will help to figure out what is going on here.

>>8408 I would like to do this. It can sometimes be tricky, but that's ok--the main problem is I have a lot of really ugly UI code behind the scenes that I need to clean up before I can sanely extend these systems, and then when I extend them I will also have to update the UI to support more view types. It will come, but it will have to wait for several rounds of code cleaning all across the program before I dive properly back in here. Please keep reminding me. Sorting pages themselves should be easier. You can already do a-z name and num_files, so adding total_filesize should be ok to do. I'll make a job.

>>8417 Thanks. There is no automation yet, but this will be the first (optional) automated module I add to the duplicate filter, and I strongly expect to have it done this year. I will make sure it is configurable so you can choose to always get rid of the larger.
Ideally, this will process duplicates immediately upon detection, so the client will negotiate it and actually delete the 'worse' file as soon as file imports happen.
>>8446 Missing files anon here, it said "my files". I should have mentioned this in my first post, but I had to restore my database from a backup a while back and these first appeared then. I'm assuming they were in the database when I backed it up, but had been deleted in between making the backup and restoring it. I fucked around with file maintenance jobs and managed to fix it. It didn't work the first time because "all media files" and/or "system:everything" wasn't matching the missing files. The files did all have a tag in common that I didn't care to remove from my working files, and for some reason this tag would match the missing files when searched for. I ran the maintenance search on that tag and did the job, and now they're gone.
>>8446 >>8447 Actually, scratch that. The job was able to match the files and reported them as missing, put their sha256sums into a file in the database folder, and made them vanish from the page that had the tag searched, but refreshing it shows that they weren't actually removed and I still encounter them when searching for other tags. Not sure what to do now.
Hello. Is there a way to make sure that, when scraping tags, the images that were already deleted aren't going to be downloaded again?
Can someone help me? Since the last 3 releases Hydrus has been pretty much unusable for me. After being open for a while it ends up "(not responding)", and it can stay that way for hours or until I force close it. I asked on the discord but no one has replied to me (I can't complain tho, they have helped me a lot in the past). I have a pretty decent PC: an R7 1700, 32GB of RAM, the main files on an NVMe drive, and the rest on a 4TB HDD. Please help, I haven't been able to use Hydrus for almost a month.
Trying to download my Pixiv bookmarks, but every time I enter the url "https://www.pixiv.net/en/users/numbers/bookmarks/artworks" I get an error saying "The parser found nothing in the document". I'm only trying to grab public bookmarks, and I've got Hydrus Companion set up with the API key. Not sure what I'm doing wrong, unless there's some alternate URL I'm supposed to use for bookmarks.
Could you change the behavior of importing siblings from a text file so that, if a pair would create a loop with siblings you already have, it asks if you want to replace the existing pairs that would be part of the loop with the ones from the file? The way it works now, there's no way to replace those siblings with the ones from the file except manually going through each one yourself, but that defeats the purpose of importing from a file. This would be an exception to clicking "only add pairs, don't remove", but that's okay because the dialog window would ask you first. As it is right now, the feature is unfortunately useless for my purposes, which is a shame because I thought I'd finally found a solution for a sibling issue I've been having for a while. A real bummer.
I had a good simple week. I cleaned some code, improved some quality of life, and made multiple local file services ready for all users. The release should be as normal tomorrow.
I'm pretty new to using this but, is there a way to tag a media with a gang of niggers tag without including its parent tags?
I'm looking to use an android app (or equivalent) that lets me manage (archive/delete) my collection hosted on a computer within a local network, so say if I had no internet I could still use it. Is this a thing? Is there a program that will do this? The available apps out there are a bit confusing as to what their limitations or features are.
Is it possible to download pics from Yandex Images with Hydrus, or can someone suggest a good program that can? Thanks.
is there a setting to make it so hydrus adds filenames as tags by default, such as when importing local files?
>>8453 Isn't that the default behavior of downloaders? Make sure "exclude previously deleted files" is checked. Or are you trying to add tags to files you've already deleted without redownloading them? I don't know if you can do that. >>8468 If you want to give something a tag without including its parent tags, it sounds like that tag shouldn't have those parent tags in the first place. >>8487 Import folders can do that. You can just have a folder somewhere that you can dump files in, and you can set hydrus to periodically check it and do things like add the filename or directory as tags.
>>8446 The cloning process seems to have worked in the sense that the integrity checks now pass. However now I get this message when I boot up hydrus. Is it safe to proceed or am I in deeper shit?
>>8491 It seems I already had "all local files" on, but changing it back to just "my files" seems to have no effect. I tried "clear orphan file records" and it nearly instantly completed without finding any.
>>8493 >For now, your best bet is the Client API Managed to figure it out, thanks. I used gallery-dl to download the metadata for all the files, gathered the md5 and tags from the metadata, searched up the md5 in the API and got the sha256, then added the tags to the sha256.
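That md5-to-sha256 workflow can be sketched roughly as below. The endpoint paths are from the Client API docs of this era; the `hash_type` parameter and the `service_names_to_tags` body field are assumptions you should check against your client's version (newer versions moved to service keys). Only request construction is shown, no live calls.

```python
import json
from urllib import parse

API_URL = "http://127.0.0.1:45869"  # default Client API address

def md5_metadata_url(md5_hex):
    """URL for looking up a file by md5 via /get_files/file_metadata,
    assuming the documented hash_type parameter is available."""
    qs = parse.urlencode({
        "hashes": json.dumps([md5_hex]),
        "hash_type": "md5",
    })
    return f"{API_URL}/get_files/file_metadata?{qs}"

def add_tags_payload(sha256_hex, tags, service_name="my tags"):
    """JSON body for POST /add_tags/add_tags, keyed by the sha256 the
    metadata lookup returned. Deduplicates and sorts the tag list."""
    return {
        "hashes": [sha256_hex],
        "service_names_to_tags": {service_name: sorted(set(tags))},
    }
```

From there it is a loop over gallery-dl's metadata files: parse out md5 and tags, GET the metadata URL, read the sha256 out of the response, and POST the tag payload.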
Hi, I didn't use Hydrus (Linux version) for three months, and after updating to the latest version I noticed the following: when you start a selection in the file manager (e.g. press shift and repeatedly press → to select multiple files), the image preview freezes at the start of the selection, but the tag list reflects your movements. The old behavior was that both the preview and the tag list changed synchronously.
>>8475 >>8493 Okay, thanks for the response. When development is finished, I assume there will be an announcement. I had considered the VNC option. I'm not sure who's developing the app, whether it's you or someone else, but do you know if it will be a remote control for hydrus on a host computer, a kind of port of existing hydrus, or if it'll have the functionality of both? I'm also curious about an approximate timeframe.
>>8455 I got it to work through a Hydrus url import page via a URL like this: https://www.pixiv.net/ajax/user/YOURPIXIVID/illusts/bookmarks?lang=en&limit=48&offset=96&rest=show&tag= I didn't try to change the limit key (was afraid of a ban), so the whole process was page by page, increasing the offset by 48 for every URL entered.
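The page-by-page URL entry above can be generated in one go. This is just string construction using the URL shape from that post (`YOURPIXIVID` stays a placeholder); whether pixiv tolerates a different limit is untested, so it keeps the default 48.

```python
def bookmark_urls(pixiv_id, pages, limit=48):
    """Generate the ajax bookmark URLs, one per page of public
    bookmarks, stepping the offset by `limit` each time."""
    base = ("https://www.pixiv.net/ajax/user/{uid}/illusts/bookmarks"
            "?lang=en&limit={limit}&offset={offset}&rest=show&tag=")
    return [base.format(uid=pixiv_id, limit=limit, offset=page * limit)
            for page in range(pages)]
```

Paste the resulting list into a url import page instead of incrementing the offset by hand.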
>>8505 update: Hydrus finally booted, thank god, however it's completely empty. All the files are still on my HDD, I can check; hydrus just seems to have forgotten about them. I suspect it might have forgotten pretty much all my other settings as well, such as my thumbnail and file drive locations (thumbnails on ssd, files on hdd, originally, as suggested).
>>8515 Would I be able to do a "restore from a database backup", select my old, now seemingly "unlinked"/"forgotten" db, and proceed?
>>8518 Alright, lemme give just a little more context on the current state of things then. This is how my setup [b]used[/b] to be set up:

client.exe in (SSD) E:\Hydrus Network\
thumbnails in (SSD) E:\Hydrus Network\thumbnails\
files in (HDD) F:\Hydrus Network Files\files\ (from f00 to fff)

After this whole fuckery happened, I manually checked, and all the files remain in their place and continue to be fully intact, viewable from the file explorer, and able to be opened without a fuss. Coming home from work I checked, and it seems my suspicions were right: all my settings were reset to default, including the default file locations, so for example were I to save a picture from 8chan it would by default put it in E:\Hydrus Network\db\client_files\. There are currently no files actually saved in this location; it's empty. To clarify, I didn't "create a backup" before this, but since my previous files in (F:) still remain there completely fine and viewable, I was wondering if I could simply instruct hydrus to "look here for pictures", basically. At this point I don't care about tags, watches, and all that stuff; I'm just glad my files are safe, and I want to get hydrus back into a shape where it's usable for me.
>>8520 PS.: It's as if hydrus had uninstalled then reinstalled itself. Quite bizarre...
Can Hydrus have support for WavPack (.wv) audio files, even just for storing, not playback? That would be a good addition to the already available .flac and .tta.
Down the line this will probably be obsolete, but before then it will help quite a bit. With duplicates, when they are pixel matches, is there a way to set the lower file size one to be green and the bigger one to be red? It's already this way with jpeg vs png pairs, but same-format pairs just have both in blue, and with pixel duplicates there would never be a reason to choose the larger file size. For me, I want the duplicate-deciding process to be as speedy as possible, at least with these exact duplicates, and I have been watching things while doing this. However, and this may be my monitor, unless I'm staring straight at the numbers they kind of blend, making 56890 all kind of look alike, requiring me to sit up and look at them straight on. I think if the lower number was green on exact dupes it would speed the process up significantly, at least until an auto discard for exact dupes (hopefully one that takes the smaller file size as the better of the pair) gets implemented and we no longer have to deal with exacts. I don't know if this would be simple to implement, but if it is, it would be much appreciated.
I'm trying to download a thread from archived.moe and archiveofsins.com, but it keeps giving errors with a watcher and keeps failing with a simple downloader. It seems like manually clicking on the page somehow redirects to a different link than when hydrus does it.
>>8158 >In terms of metadata, hydrus keeps all other metadata it knows about the file. If there is no URL data, (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK? Also, what does telling Hydrus to forget previously deleted files actually remove if it still keeps the files' hashes? I don't feel comfortable (or desperate) enough to use the method you gave, but I also don't want to go through the trouble of exporting all my files, deleting the database, reinstalling Hydrus, and then importing and tagging the files all over again.
My autocompleted tag list displays proper tag counts, but when I search them I get dramatically fewer images. I can still find these images in the database through system:* searches, and they're still properly tagged. My tag siblings and parents aren't working for some tags either. But all the database integrity checks say everything is okay. What's my next step?
Still getting some errors in the duplicate filter. I think it has something to do with when I'm choosing to delete images.

v485, win32, frozen
IndexError
list index out of range
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1223, in eventFilter
  shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
  command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3598, in ProcessApplicationCommand
  command_processed = CanvasWithHovers.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2776, in ProcessApplicationCommand
  command_processed = CanvasWithDetails.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1581, in ProcessApplicationCommand
  self._Delete()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2928, in _Delete
  self._SkipPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3488, in _SkipPair
  self._ShowNextPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3442, in _ShowNextPair
  while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):
>>8494 I have had a report from another user about a situation a bit similar to yours related to the file service that holds repository update files. I am going to investigate it this week, please check the changelog for 487. I can't promise anything, but I may discover a bug where some files aren't being cleanly removed from services at times and have a fix. >>8496 Yes, hit up options->gui pages and check the new preview-click focus options. Note that shift-click is a bit more clever now, too--if you go backwards, you can 'rewind' the selection. >>8499 Yeah, I like to highlight neat new apps in the release posts or changelogs. I do not make any of the apps, but I am thinking of integrating 'do stuff with this other client' tech into the client itself, so you'll be able to browse a rich central client with a dumb thin local client. Timeframe I can't promise. For me, it'll always be long. I'm expecting my 'big' jobs for the next 12-18 months to be a mix of server improvements, smart file relationships, and probably a downloader object overhaul. I'll keep working on Client API improvements in that time in my small work, and I know the App guys are still working, so I just expect the current betas to get better and better over time, a bit like Hydrus, with no real official launch. Check in again on the links in the Client API help page in 4-6 months, is probably a good strategy.
>>8547 >If there is no URL data, (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK? It depends on what 'OK' means, I think. If you want to remove the hash record, sure, you can delete it if you like, but you might give yourself an error in two years when some maintenance routine scans all your stuff for integrity or something. Renaming the hash to a random value would be better. Unfortunately I just don't have a scanning routine in place yet to categorise every possible reference to every hash_id in your database to automatically determine when it is ok to remove a hash, and then to extend that to enable a complete 'ok now delete every possible connection so we can wipe the hash' command. Telling hydrus to remove a deletion record only refers to the particular file domain where the file was deleted from. It might still be present in other places, and other services, like the PTR, may still have tags for it. It basically goes to the place in the database where it says 'this file was deleted from my files ten days ago' and removes that row. If you really really need this record removed, please don't rebuild your whole client. Make a backup (which means making a copy of your database), then copy/paste my routine into the sqlite terminal exactly, then try booting the client. If all your files are fucked, revert to the backup, but if everything seems good, then it all went correct. Having a backup means you can try something weird and not worry so much about it going wrong. More info here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up
>>8553 The nuclear way to fix this sort of problem, if it is a miscounting situation, is database->regenerate->tag storage mappings cache (all, deferred...). If the bad tag counts here are on the PTR, this operation could take several hours unfortunately. If the tags are just your 'my tags' or similar, it should only be a couple of minutes. Once done, you'll have to wait for some period for your siblings and parents to recalculate in idle time. But even if that fixes it, it does not explain why you got the miscount in the first place. I think my recommendation is see if you can find a miscounted tag which is on your 'my tags' and not on the PTR in any significant amount. A 'my favourites' kind of tag, if you have one. Then regen the storage cache for that service quickly and see if the count is fixed after a restart. If it is, it is worth putting the time into the PTR too. If it doesn't fix the count, let me know and we can drill more into what is actually wrong here. >>8555 Damn, thank you, I will look into this.
>>8565 This seems to have fixed it, thank you! However, it's left quite a few unknown tags. I guess those tags were broken, which was the problem in both my counts and parent/siblings. Is there any way to restore those "unknown tag" namespaced tags, or is it better to just try to replace them one by one?
>>8563 Here are some samples of WavPack from the web: https://telparia.com/fileFormatSamples/audio/wavPack/ But just in case, I attached a short random laugh compressed with a recent release of the encoder on Linux. The format seems to have the magic number "wvpk", as stated on wikipedia and in the format doc in the github repo: https://github.com/dbry/WavPack/blob/master/doc/WavPack5FileFormat.pdf
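For what it's worth, the magic-number sniff is trivial; a minimal sketch, assuming the "wvpk" signature sits at byte 0 of the file as the linked format doc describes:

```python
def looks_like_wavpack(path):
    """True if the file starts with the 4-byte WavPack block magic."""
    with open(path, "rb") as f:
        return f.read(4) == b"wvpk"
```

This is the same kind of header check hydrus's own mime detection does for its other supported formats.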
Will it be possible at some point to edit hydrus images without needing to import it as a brand new image? It's annoying opening images in an external editor, making the edit, saving the image, importing said image, transferring all the tags back onto it, and then deleting the old version when all I'm doing usually is cropping part of it.
I had an ok week. I didn't have time to get to the big things I wanted, but I cleared a variety of small bug fixes and quality of life improvements. The release should be as normal tomorrow.
>>8555 Happens to me when I choose to delete one or both pictures of the last pair presented. The picture that is assumed to be deleted stays on screen, and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index out of range" or "DataMissing". I believe cloning the database with the sqlite program removes the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work.
How long until duplicates are shown properly? Also, are transitive duplicates sorting (as in files which aren't possible duplicates but have duplicates in common) in the to do list?
>>8563 Nice, hopefully the rules come soonish; that would make going through them a bit easier. I definitely want to check out some things in 487, as they are things I made workarounds for, like pushing the images to a page. I currently have a rating that does something similar for when I want to check a file a bit closer, be it a comic page I want to reverse search or something I want to see the source of; this may be a better option.
>switch to arch linux from windows >get hydrus running >use retarded samba share on nas for the media folder >permission error from the subscription downloader >can view and search my images fine otherwise, in both hydrus and file manager Any idea which permissions would be best to change? I'm retarded when it comes to fstab and perms, but I know not to just run everything as root. I just can't figure out if its something like the executable's permissions/owner, the files permissions/owner, or something retarded in how I mount it. Pictured are the error, fstab entry, the hydrus client's permissions, and what the permissions for everything in the samba share are. The credentials variable in fstab is a file that only root can read, for slight obfuscation of credentials according to the internet. The rest to the right was stuff I added to allow myself to manipulate files in the samba share, again just pulled from random support threads.
>>8618 >Happens to me when I choose to delete one or both pictures of the last pair presented. The assumed to be deleted picture stays on screen and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index our of range" or "DataMissing". I believe cloning the database with the sqlite program deletes the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work. Appears fixed for me with v487 - Thanks.
Perhaps another bug?: >file>options>files and trash>Remove files from view when they are sent to trash. Checking/unchecking has the desired result with watchers and regular files, but it does not seem to work anymore with newly downloaded files published to their respective pages. Here, the files are merely marked with the trash icon but not removed from view, as had been the case (for me) until version 484.
>>8627 It seems like I can manipulate files within the samba drive but it spits out an error when moving from the OS drive to there. So I guess it's some kind of samba caching problem.
I have noticed some odd non-responsiveness with the program. It is hosted on an SSD. While in full-screen preview browsing through files to archive or delete, sometimes the program will stop responding for approximately 10 seconds when browsing to the next file (usually a GIF but not always). The next file isn't large or long or anything. I'm not sure what's causing this issue. Is it just the program generating a new list of thumbnails?
>>8641 I also wanted to note this issue is not unique to this most recent update. It has been there for a while.
>>8641 >>8642 I guess I should also reiterate that the program AND the database are both hosted on the same drive (default db location)
Well, this is a first: in a pixel-for-pixel match against a jpeg, the png was the smaller one... I'm guessing that jpeg is hiding something.
>>8618 >>8630 Great, thanks for letting me know.

>>8619 I expect to do a big push on duplicates in Q4 this year or Q1 2023. I really want to have better presentation, basically an analogue to how danbooru shows 'hey, this file has a couple of related files here (quicklink) (quicklink)'. Estimating timeframes is always a nightmare, so I'll not do it, but I would like this, and duplicates are a popular feature for the next 'big job'. At the moment, there is a decent amount of transitive logic in the duplicates system. If A-dup-B, and B-dup-C, then A-dup-C is assumed. Basically, duplicates in the hydrus database are really a single blob of n files with a single 'best' king, so when you say 'this is better' you are actually merging two blobs and choosing the new king. I have some charts at the bottom of this document if you want to dive into the logic some more. https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced But to really get a human feel for this, I agree, we need more UI to show duplicate relationships. It is still super technical, opaque, and not fun to use.

>>8627 >>8636 I'm afraid I am no expert on this stuff. The 'utime' bit in that first traceback is hydrus trying to copy the original file's modified time from a file in your temp directory to the freshly imported file in the hydrus file system, so if the samba share has special requirements for that sort of metadata modification, that's your most likely culprit, I think.
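To illustrate the utime step that trips over the samba share: the pattern is roughly the one below (a sketch of the general technique, not hydrus's actual code). On mounts that reject timestamp writes, the `os.utime` call is what raises PermissionError even though the content copy succeeded.

```python
import os
import shutil

def copy_with_mtime(src, dst):
    """Copy file content, then try to apply the source's modified time.
    On shares that reject metadata changes, tolerate the failure so the
    copy itself still goes through (you just lose the timestamp)."""
    shutil.copyfile(src, dst)
    try:
        st = os.stat(src)
        os.utime(dst, (st.st_atime, st.st_mtime))
    except PermissionError:
        pass  # mount refused the timestamp write; content copied fine
```

Testing a bare `os.utime` against a file on the share is a quick way to confirm whether the mount options are the problem.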
>>8643 >>8647 There is no downloading or synching being done. Client is basically running stock, with no tags or anything (not even allowed to access the internet yet). Think it might be AV? Running Kaspersky on Low (uses very little resources for automated scanning).
>>8648 >>8647 Also, no active running imports. Just an open import window with about 60k files for me to sift through.
>>8649 >>8647 I tried it with an exclusion for the entire Hydrus folder for automated scanning, but the problem persists, so I don't think it's AV-related.
Would it be possible to add a sort of sanity check to modified times to prevent obviously wrong ones from being displayed? I've noticed a few files downloaded from certain sites since modified times were added to Hydrus show a modified time of over 52 years ago, which makes me think that files from sites which don't supply a time are given a 0 epoch-second timestamp. In this case I think it would be better to show a string like "Unknown modification time" or nothing at all.
>>8652 Also, if I try to download the same file from a site that does have modified times, the URL of the new site is added but the modified time stays the incorrect 52 years. Maybe there could be an option to replace modified times for this query/always if new one found/only if none is already known (or set to 1970). I also couldn't find a way to manually change modified time, but maybe I didn't look hard enough.
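The sanity check being requested here could look something like the following. This is just a sketch of the idea; the function name and the one-day cutoff are made up, not hydrus's real code.

```python
import datetime

# Illustrative sanity check: timestamps at or suspiciously near the Unix
# epoch (1970-01-01) are reported as unknown instead of being rendered as
# a '52 years ago' modified time.

EPOCH_SLOP_SECONDS = 86400  # anything within a day of the epoch is suspect

def describe_modified_time(timestamp):
    if timestamp is None or timestamp <= EPOCH_SLOP_SECONDS:
        return 'unknown modification time'
    dt = datetime.datetime.fromtimestamp(timestamp, tz=datetime.timezone.utc)
    return dt.strftime('%Y-%m-%d %H:%M:%S')

describe_modified_time(1)           # 'unknown modification time'
describe_modified_time(1645920000)  # '2022-02-27 00:00:00'
```

A bogus near-epoch value from one download source could then also be skipped when aggregating the per-site modified times into the displayed minimum.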
>>8647 I would send it to ya, but I dumped the trash before I saw your response. So far I have seen a few of these; if I find another, I'll send it to ya.
>>8656 Update on this issue: I tried exporting all my parent tags, then deleting all the parent tag configurations and using the database > regenerate > tag storage mapping cache (all), which caused the "maintenance" window to indicate there's no work to do. I then added back in one parent tag from my original set (that only applied to 5 files in the repository) and the "maintenance" window says there's now one parent to sync, but isn't actually processing that one parent.
>>8648 >>8649 >>8650 Hmm, if you have a pretty barebones client, no tags and no clever options, then I am less confident what might be doing this. I've seen some weird SSD driver situations cause superlag. I recommend you run the profile so we can learn more. >>8652 >>8655 Thanks, can you point me to some example URLs for these? I do have a sanity check that is supposed to catch 1970-01-01, but it sounds like it is failing here. The good news is I store a separate modified time for every site you download from, so correcting this retroactively should be doable and not too destructive. I want to add more UI to show the different stored modified times and let you edit them individually in future. At the moment you just get an aggregated min( all_modified_times ) value.
>>8656 >>8662 Damn, this is not good. I'm sorry for the trouble and annoyance. Have you seen very slow boots during this? That thumbnail cache is instantiated during an early stage of boot, so it looks like the sibling/parent sync manager is going bananas as soon as it starts. I have fixed the bug, I think, for tomorrow's release. That may help your other issue, which is the refusal to finish outstanding work, but we'll see. Give tomorrow's release a go, and if it gets to a '95% done' mode again and won't do the last work, please try database->regenerate->tag parents lookup cache. While the 'storage mappings cache' reset will cause the siblings and parents to sync again, the 'lookup' regen actually rebuilds the mass structure that holds all the current relationships. It sounds like I have a logical bug there when you switch certain parents around. You don't have to say the exact tags if you don't want, but can you describe the exact structure of the revisions you made here? Was it simply flipped parent-child relationships, so you had 'evangelion->ayanami rei', and it should have been 'ayanami rei->evangelion'? Were there any siblings involved with the tags, and did the parent tags that were edited have any other parent relationships? I'm wondering if there is some weird cousin loop I am not detecting here, or perhaps detecting but not recognising as creating outstanding sync work. Whatever the case, let me know how you get on with this!
I had a good week. I did some simple work to make a clean release before my vacation. The release should be as normal tomorrow.
>>8665 Yes, I did have a few very slow startups: a few times it took like two hours for the UI to show, though I could see the process was indeed started in task manager. Thanks; I'll try tomorrow's release and see if that helps anything. Parent-tag-wise, the process I think I was doing right before it failed was I had a bunch of things tagged with something generic, which had one level of namespacing (e.g. "location:outdoor"), and I decided to make a few more-specific tags (e.g. "location:forest", "location:driving", and "location:beach"; all of which should also get "location:outdoor" as a "parent"). But I first created the parent relationship the wrong way and didn't notice it (so everything that was "outdoor" would now get three additional tags added to it). I saved the parent config and started manually re-tagging (e.g. remove "outdoor" and add "beach" for those that were in that subgroup), and after doing a few I noticed the F3 tagging window wasn't showing the "parent" tag yet (wasn't showing "outdoor" nested under "beach"), and so I went back to the tag manager and realized they were wrong, so deleted the relationship and re-added them the right way and continued re-tagging. After a while I noticed it still hadn't synced, and realized it didn't seem to be progressing any more, and started triaging to see if it was a bug. None of them had siblings defined.
>>8664 >Thanks, can you point me to some example URLs for these? It looks like this is only affecting permanent booru. I'm using pic related posted in one of these threads. Here's a SFW example URL: http://owmvhpxyisu6fgd7r2fcswgavs7jly4znldaey33utadwmgbbp4pysad.onion/post/3742726/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa It may be of note that the "direct file URL" is from IPFS, and the following onion gateway URL is added to the file's URLs as well: http://xbzszf4a4z46wjac7pgbheizjgvwaf3aydtjxg7vsn3onhlot6sppfad.onion/ipfs/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa The same file is available here with a correct modification time (2022-02-27): https://e621.net/posts/3197238 The modified time in the client shows 52 years 5 months, which is in January 1970. Not sure if there's an easy way to see the exact time.
>>8645 >but I hope we'll have smooth and effective 'copy all metadata from this file to this file' tech Couldn't you just make a temporary "import these files and use _ as _ to find alternates, then do _ if _" for now? Like "import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller"? I mean it sounds like too much when you write it out like that, but the underlying logic should be pretty simple.
Trying to use Hydrus for the first time; is there a way to add a subscription for videos specifically, so that it leaves out photos?
>>8675 Have a nice vacation OP and watch out for fucking normies.
id:6549088 from gelbooru (nsfw), with the download decompression bomb check deactivated. When downloading this specific picture, before it finishes downloading, it makes the program jump to 3 GB of RAM until I close it. It opens normally in a browser, but spikes to 3 GB in hydrus, and since I only have 4 GB it makes the PC freeze. Just wanted to report that. Also, not a native English speaker here.
>>8679 forgot, using version 474
>>8668 Reporting in that v488 seems to have fixed both these bugs. There's no longer the thumbnail exception being logged, the startup time to get to a UI window is quicker, and the parent-sync status un-stuck itself. Hooray!
>>8645 This is about what I figured. I pulled the database from a dying hard drive a few months ago. Every integrity scan between now and then ran clean, but I had a suspicion something had gotten fucked up somewhere along the line. Since it's been a minute, any backups are either also corrupted, or too old to be useful. Luckily, re-constructing them hasn't been too painful. I made an "unknown tag:*anything*" search page, then right-click->search individual tags to see what's in them. Most have enough files in to give context to what it used to be, so I'll just replace it. It's been a good excuse to go through old files, clean up inconsistent tags, set new and better parent/sibling relationships, etc., so it's actually been quite pleasing to my autisms. I had 80k files in with an unknown tag back when I started cleaning up, and now I'm down to just under 40k. I'm sure I've lost some artist/title tags from images with deleted sources, or old filenames, but all in all, it could be much worse.
Thanks man! Have a good vacation!
>>8676 if you're just subscribing to a booru, they will generally have a "video" tag. you can add "video" to the tag search.
>>8703 nope, not a booru. So there isn't a way to filter that. awh.
Is there any way to get Hydrus to automatically tag images with the tags present in the metadata? Specifically the tags metadata field; my whole collection was downloaded using Grabber.
>>8709 What website is it? You might be able to add to/alter the parser to spit out the file type by reading the json or file ending, then use a whitelist to only get certain file endings (i.e. videos)
I've been using hydrus for a while now and am in the process of importing all my files. Is there any downside to checking the "add filename? [namespace]" button while importing? I think I've got over 300k images, so it would create a lot of unique tags, if that would be a problem.
About how long do you estimate it might take before hydrus will be able to support arbitrary files? I specifically need plaintext files and html files (odd, I know) if that makes a difference. The main thing is just that it'd be nice for me to have all my files together in hydrus instead of needing to keep my html and (especially) my text files separate from the pics and vids. Also, I'm curious: why can't hydrus simply "support" all filetypes, by just having an "open externally" button for files that it doesn't have a viewer for? It already does that for things like flash files, after all.
>>8627 >>8636 >>8646 It seems to be working now. Not sure what changed, but somehow arch doesn't always mount the samba directory anymore and needs a manual command on boot now, which it didn't before. Maybe it was some hiccup, maybe some package I happened to install as I installed more crap, maybe it was a samba bug that got patched.
Is there a way to reset the file history graph, under Help?
>>8668 >>8681 Great, thanks for letting me know! >>8671 Thank you. The modified date for that direct file was this: Last-Modified: Thu, 01 Jan 1970 00:00:01 GMT I thought my 'this is a stupid date m8' check would catch this, but obviously not, so I will check it! Sorry for the trouble. I'll have ways to inspect and fix these numbers better in future. >>8674 I'm sorry to say I don't understand this: >"import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller" But if you mean broadly that you want some better metadata algebra for mass actions, I do hope to have more of this in future. In terms of copying metadata from one thing to another, I just need to clean up and unify and update the code. It is all a hellish mess from my original write of the duplicates system years ago, and it needs work to both function better and be easier to use.
>>8676 >>8703 >>8709 >>8716 In the nearish future, I will add a filetype filter to 'file import options', just like Import Folders have, so you'll be able to do this. Sorry for the trouble here, this will be better in a bit! >>8679 >>8680 I'm sorry, are you sure you have the right id there? gif of the frog girl from boku no hero academia? I don't have any trouble importing or viewing this file, and by the looks of it, it doesn't seem too bloated, although it is a 30MB gif, so I think your memory spike was something else that happened at the same time as (and probably blocked) the import. Normally, decompression bombs are png files, stuff like 12,000x18,000 patreon rewards and similar. I have had several reports of users with gigantic memory spikes recently, particularly related to looking at images in the media viewer. I am investigating this. Can you try importing/opening that file again in your client and let me know if the memory spike is repeatable? If not, please let me know if you still get memory spikes at other times, and more broadly, if future updates help the situation. Actually, now I think of it, if you were on 474, I may have fixed your gigantic memory issue in a recent update. I did some work on more cleanly flushing some database journal data, which was causing memory bloat a bit like you saw here, so please update and then let me know if you still get the problem. >>8688 Good luck!
>>8723 Great, let me know how things go in future! >>8725 What part would you like to 'reset'? All the data it presents is built on real-world stuff in your client, like actual import and archive times. Do you want to change your import times, or maybe clear out your deleted file record?
I had a good week. I did a mix of cleanup and improvements to UI and an important bug fix for users who have had trouble syncing to the PTR. The release should be as normal tomorrow.
when trying to do a file relationship search, is there a way to search for same quality duplicates. I don't see any way to do that, and every time I look at the relationships of a file manually, it's always a better/worse pair. Does Hydrus just randomly assign one of the files as being better when you say that they're the same quality?
>>8743 Yes, 'same quality' actually chooses the current file to be the better, just as if you clicked 'this is better', but with a different set of merge options. The first version of the duplicate system supported multiple true 'these are the same' relationships, but it was incredibly complicated to maintain and didn't lend itself to real world workflows, so in the end I reinvented the system to have a single 'king' that stands atop a blob of duplicates. I have some diagrams here: https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced I don't really like having the 'this is the same' ending up being a soft 'this is better', but I think it is an ok compromise for what we actually want, which is broadly to figure out the best of a group of files. If they are the same quality, then it doesn't ultimately matter much which is promoted to king, since they are the same. I may revisit this topic in a future iteration of duplicates, but I'm not sure what I really want beyond much better relationship visibility, so you can see how files are related to each other and navigate those relationships quickly. Can you say more why you wanted to see the same quality duplicate in this situation? Hearing that user story can help me plan workflows in future.
>>8151 What do I do for this? I'm just trying to have my folder of 9,215 images tagged.
What installer does Hydrus use? I'm trying to set up an easy updating script with Chocolatey (since whoever maintains the winget repo is retarded).
>>8755 Figured it out, Github artifacts shows InnoSetup. Too bad Chocolatey's docs are half fucking fake and they don't do shit unless you give them money. This command might work, but choco's --install-arguments command doesn't work like the fuckwads claim it does. choco upgrade hydrus-network --ia='"/DIR=C:\x\Hydrus Network"'
>>8756 No, actually, that command doesn't work, because the people behind chocolatey are lying fucking hoebags. Seeing this horseshit, after THEY THEMSELVES purposefully obfuscated this bullshit, is FUCKING INFURIATING.
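For what it's worth, Inno Setup installers accept standard silent-install switches directly, so one way to sidestep Chocolatey's argument quoting entirely is to run the downloaded installer yourself. The filename below is a placeholder, and the target directory matches the example from the choco command above:

```shell
# Example only: invoke the Inno Setup installer directly with its standard
# silent flags (the installer filename here is a placeholder).
./hydrus-installer.exe /VERYSILENT /SUPPRESSMSGBOXES /NORESTART "/DIR=C:\x\Hydrus Network"
```

`/VERYSILENT`, `/SUPPRESSMSGBOXES`, `/NORESTART`, and `/DIR=` are documented Inno Setup command-line parameters, so they should work for any installer built with it.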
>>8745 The main thing I wanted to do is compare the number of files that were marked as lower-quality duplicates across files from different url domains with files that aren't lower-quality duplicates (either kings, or alts, or no relationships) to see which domains tend to give me the highest ratio of files that end up being deleted later as bad dupes, and which ones give me the lowest, so I know which ones I should be more adamant about downloading from, and which ones I should be more hesitant about. This doesn't really work that well if same-quality duplicates can also be considered "bad dupes" by hydrus, because that means I'm getting a bunch of files in the search that shouldn't be there, since they're not actually worse duplicates, but same-quality duplicates that hydrus just treats as worse arbitrarily. Basically, I was trying to create a ranking of sites that tend to give me the highest percentage of low-quality dupes and ones that give me the lowest. I can't do that if the information that hydrus has about file relationships is inaccurate, though. It's also a bit confusing when I manually look at a file's relationships, because I always delete worse duplicates, but then I saw many files that are considered worse duplicates and I thought to myself "did I forget to delete it that time?". Now this makes sense, but it still feels wrong to me somehow.
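The ranking described here is straightforward once the per-file data is in hand. A hypothetical sketch, assuming you have somehow exported a list of (domain, was-deleted-as-worse-dupe) records; the function name and the sample data are made up:

```python
from collections import Counter

# Rank download domains by their worse-duplicate ratio, highest first.
# 'records' is assumed to be exported per-file data: (domain, is_worse_dupe).

def rank_domains(records):
    total, worse = Counter(), Counter()
    for domain, is_worse_dupe in records:
        total[domain] += 1
        if is_worse_dupe:
            worse[domain] += 1
    return sorted(
        ((domain, worse[domain] / total[domain]) for domain in total),
        key=lambda pair: pair[1],
        reverse=True,
    )

records = [
    ('siteA.net', True), ('siteA.net', True), ('siteA.net', False),
    ('siteB.net', False), ('siteB.net', True),
]
rank_domains(records)  # siteA.net first (2/3), then siteB.net (1/2)
```

The hard part, as the posts above note, is getting accurate is-worse-dupe labels out of hydrus in the first place, since 'same quality' decisions are stored the same way as 'this is better'.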
>>8757 >2022 and still using windoze Time to dump the enemy's backdoor.
>>8753 The good catch-all solution here is to hit up services->review services and click 'refresh account' on the repository page. That forces all current errors to clear out and tries to do a basic network resync immediately. Assuming your internet connection and the server are ok again, it'll fix itself and you can upload again. >>8755 >>8756 >>8757 Yeah, Inno. There's some /silent or something commands I know you can give the installer to do it quietly, and in fact that's one reason the installer now defaults to not checking the 'open client' box on the last page, so some automatic installer a guy was making can work in the background. I'm afraid I am no expert in it though. If I can help you here, let me know what I can do. >>8758 Ah, yeah, sorry--there's no real detailed log kept or data structure made of your precise decisions. If you do always delete worse duplicates though, then I think you can get an analogue for this data you want. Any time you have a duplicate that is still in 'my files', you know that was set as 'same quality', since it wasn't deleted. Any time a duplicate is deleted, you know you set it as 'worse'. If you did something like: 'sort by modified time' (maybe a creator tag to reduce the number of results) system:file relationships: > 0 dupe relationships then you switch between 'my files' and 'all known files' (you need help->advanced mode on to see this), you'll see the local 'worse' (you set same quality) vs also the non-local worse (you set worse-and-delete), and see the difference. In future, btw, I'd like to have thumbnails know more about their duplicates so we can finally have 'sort files by duplicate status' and group them together a bit better in large file count pages. If you are trying to do this using manual database access in SQLite and want en masse statistical results, let me know. 
The database structure for this is a pain in the ass, and figuring out how to join it to my files vs all known files would be difficult going in blind.
>>8759 >Unironically being that guy Buddy, you just replied to a reply about easier updating with something that would make it ten times harder. Not to mention that hilariously dated meme. >>8760 Yeah, Choco passes /verysilent IIRC, and /DIR would work, but Powershell's quote parsing is fucking indecipherable, Choco's documentation on the matter is outright wrong, and I can't 'sudo' in cmd. I'm considering writing a script to just produce update PRs for the Winget repo myself, since it's starting to seem like that would be easier, but I don't want to go through all of Github's API shit.
Pyside is nearly PyPy compatible (see https://bugreports.qt.io/browse/PYSIDE-535). What work would need to be done in Hydrus to support running under PyPy?
