/t/ - Technology

Discussion of Technology


(4.11 KB 300x100 simplebanner.png)

Hydrus Network General #4 Anonymous Board volunteer 04/16/2022 (Sat) 17:14:57 No. 8151
This is a thread for releases, bug reports, and other discussion for the hydrus network software.

The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Advanced users can share tags and files anonymously through custom servers that any user may run. Everything is free, privacy is the first concern, and the source code is included with the release. Releases are available for Windows, Linux, and macOS.

I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST.

Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ .

If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/
https://www.youtube.com/watch?v=nShSEUBKe3o

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.Linux.-.Executable.tar.gz

I had a great week. Lots of different small jobs done.

notes and hover windows

I'm happy with last week's work making notes show in media viewers, but I introduced some little bugs while rewriting hover windows. I have now fixed the bad text colour behind the top hover, the problem where clicking on tags or greyspace was propagating up to the archive/delete and duplicate filters, the bad hover panel colour on non-default stylesheets, and some note window position and size issues.

Also, for notes, you can now right-click them to collapse them in the hover window. Right-click again on the name to expand again. This is a test, really, just to see if it helps navigating files with many long notes. Double-clicking on the note tab in the edit dialog lets you rename, and a checkbox under the new options->notes now lets you choose whether the text caret starts at the beginning or end of the document when editing.

Furthermore, I have updated all the icon buttons in all the hovers to no longer take focus when you click on them. They were previously stealing arrow key and space input after a click (to do button-to-button form navigation), which meant you couldn't click on, say, a duplicate filter action button and then go back to arrow keys to navigate. Now you should be able to mix clicks and arrow keys without trickery. If this affects you, let me know how it goes!
other highlights

If you didn't like the recent 'ctrl- and shift-clicks no longer show files in the preview viewer' change, check out the new checkboxes under options->gui pages. You can make either click type focus for all files again, or just files with no duration--if you don't want noisy videos being annoying while you ctrl-click.

The 'advanced mode' autocomplete dropdown now has two 'OR' buttons. The left one opens a new empty OR edit dialog, the right one opens the advanced text parsing input as before.

full list

- fixes and improvements after last week's hover and note work:
- fixed the text colour behind the top middle hover window
- stopped clicks on the taglist and hover greyspace being duplicated up to the main canvas (this affected the archive/delete and duplicate filter shortcuts)
- fixed the background colour of the hover windows when using non-default stylesheets
- fixed the notes hover window--after having shown some notes--could then lurk in the top-left corner when it should have been hidden completely
- cleaned up some old focus test logic that was used when hovers were separate windows
- rewrote how each note panel in the new hover is stored. a bunch of sizing and event handling code is less hacked
- significantly improved the accuracy of the 'how high should the note window be?' calculation, so notes shouldn't spill over so much or have a bunch of greyspace below
- right- or middle-clicking a note now hides its text. repeat on its name to restore. this should persist through an edit, although it won't be reflected in the background atm. let's see how it works as a simple way to quickly browse a whole stack of big notes
- a new 'notes' option panel lets you choose if you want the text caret to start at the beginning or end of the document when editing
- you can now double-click a note tab in 'edit notes' to rename the note. some styles may let you double-click in note greyspace to create a new note, but not all will handle this (yet)
- as an experiment, all the buttons on the media viewer hover windows now do not take focus when you click them. this should let you, for instance, click a duplicate filter processing button and then use the arrow keys and space to continue to navigate. previously, clicking a button would focus it, and navigation keys would be intercepted to navigate the 'form' of the buttons on the hover window. you can still focus buttons with tab. if this affects you, let me know how this goes!
- .
- misc:
- added checkboxes to _options->gui pages_ to control whether ctrl- and shift- selects will highlight media in the preview viewer. you can choose to only do it for files with no duration if you prefer
- the 'advanced mode' tag autocomplete dropdown now has 'OR' and 'OR*' buttons. the former opens a new empty OR search predicate in the edit dialog, the latter opens the advanced text parser as before
- the edit OR predicate panel now starts wider and with the text box having focus
- hydrus is now more careful about deciding whether to make a png or a jpeg thumbnail. now, only thumbnails that have an alpha channel with interesting data in it are saved to png. everything else is jpeg
- when uploading to a repository, the client will now slow down or speed up depending on how fast things are going. previously it would work on 100 mappings at a time with a forced 0.1s wait, now it can vary between 1-1,000 weight
- just to be clean, the current files line on the file history chart now initialises at 0 at your first file import time
- fixed a bug in the 'if file is missing, remove record' file maintenance job. if none of the files yet scanned had any urls, it could error out since the 'missing and invalid files' directory was yet to be created
- linux users who seem to have mpv support yet are set to use the native viewer will get a one-time popup note on update this week, just to let them know that mpv is stable on linux now and how to give it a go
- the macOS App now spits out any mpv import errors when you hit _help->about_, albeit with some different text around it
- I maybe fixed the 'hold shift to not follow a dragged page' tech for some users for whom it did not work, but maybe not
- thanks to a user, the new website now has a darkmode-compatible hydrus favicon
- all file import options now expose their new 'destination locations' object in a new button in the UI. you can only set one destination for now ('my files', obviously), but when we have multiple local file services, you will be able to set other/multiple destinations here. if you set 'nothing', the dialog will moan at you and stop you from ok-ing it
- I have updated all import queues and other importing objects in the program to pause their file work with appropriate error messages if their file import options ever have a 'nothing' destination (this could potentially happen in future after a service deletion). there are multiple layers of checks here, including at the final database level
- misc code cleanup
- .
- client api:
- added a 'create_new_file_ids' parameter to the 'file_metadata' call. this governs whether the client should make a new database entry and file_id when you ask about hashes it has never seen before. it defaults to false, which is a change from previous behaviour
- added help talking about this
- added a unit test to test this
- added archive timestamp and hash hex sort enum definitions to the 'search_files' client api help
- client api version is now 31

next week

Next week is cleanup. Nothing too exciting, but I'd like to break the database code up a bit more.
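The changelog's "only thumbnails that have an alpha channel with interesting data" heuristic can be sketched in plain Python. This is a hypothetical illustration, not hydrus's actual code; the function name and the flat list of alpha values are assumptions for the sake of a runnable example:

```python
def thumbnail_should_be_png(mode: str, alpha_values) -> bool:
    """Rough sketch of the changelog's heuristic: only keep a PNG
    thumbnail when the image has an alpha channel that actually
    carries information (i.e. is not fully opaque). Everything else
    compresses smaller as JPEG.

    mode: a PIL-style mode string, e.g. "RGB", "RGBA", "LA"
    alpha_values: iterable of 0-255 alpha samples for the image
    """
    if mode not in ("RGBA", "LA"):
        # no alpha channel at all -> JPEG is fine
        return False
    # a fully opaque alpha channel (all 255) is not "interesting"
    return any(a < 255 for a in alpha_values)
```

The point of the check is purely file size: an all-opaque alpha channel adds nothing visually, so saving such thumbnails as JPEG avoids PNG's larger output for photographic content.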
Is there a way to set the media viewer to use integer scaling (I think that's what it's called) rather than fitting the view to the window, so that hydrus chooses the highest zoom at which all pixels are the same size and the whole image is still visible? My understanding is that nearest neighbor is a lossless scaling algorithm when the rendered view size is a multiple of the original; otherwise you get a bunch of jagged edges from the pixels being duplicated unevenly. It looks like Hydrus only has options to use "normal zooms" (what you set manually in the options? I'm confused by this), always choosing 100% zoom, or scaling to canvas size regardless of whether that means a weird zoom level (like 181.79%) that causes nearest-neighbor to create jagged edges.
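The "highest zoom where all pixels are the same size and the whole image is still visible" behaviour described above boils down to simple integer arithmetic. A hedged sketch (not hydrus code, just the math the question is asking for):

```python
def largest_integer_zoom(img_w: int, img_h: int,
                         canvas_w: int, canvas_h: int) -> int:
    """Largest whole-number zoom at which the image still fits the
    canvas, so nearest-neighbor scaling duplicates every pixel the
    same number of times and produces no jagged edges.

    Returns 1 if even 100% does not fit; a real viewer would then
    fall back to a fractional fit-to-canvas zoom instead.
    """
    zoom = min(canvas_w // img_w, canvas_h // img_h)
    return max(1, zoom)
```

For example, a 300x200 sprite on a 1920x1080 canvas gets a 5x zoom (1500x1000), since 6x would be too tall.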
When I delete a file in Hydrus, how sure can I be that it is COMPLETELY gone? Are there any remnants left behind?
>>8156
yeah, all the metadata for the file (tags and urls and such) is still there. There isn't currently a way to remove that stuff.
>>8154
Yeah, under options->media, in the filetype handling list, the edit dialog has 'only permit half and double zooms'. That locks you to 50%, 100%, 200%, 400%, etc. It works ok for static gifs and some pngs, if you have a ton of pixel art, but I have never really liked it myself. Set the 'scale to the canvas size' option to 'scale to the largest regular zoom that fits'; I think that'll work with the 50/100/200/400 too. Let me know if it doesn't.

>>8156 >>8157
Once the file is out of your trash, it will be sent to your OS's recycle bin, unless you have set in options->files and trash to permanently delete instead. Its thumbnail is permanently deleted. In terms of the file itself, it is completely gone from hydrus, and you are then left with the normal issues of deleting files permanently from a disk. If you really need to remove traces of it from the drive, you'll need a special program that repeatedly shreds your empty disk sectors.

In terms of metadata, hydrus keeps all the other metadata it knows about the file: the file's hash (basically its name), its resolution, filesize, a perceptual hash that summarises how it looked, tags it has, ratings you gave it, URLs it knows the file is at, and when it was deleted. It may have had some of this information before it was imported (e.g. its hash and tags on the PTR) if you sync with the public tag repository. Someone who accessed your database and knew how hydrus worked would probably be able to reconstruct that you once imported this file. There are no simple ways to tell the client 'forget everything you ever knew about this file' yet. Hydrus keeps metadata because that is useful in many situations. Deletion records, for instance, help the downloader know not to re-import something you previously deleted.
That said, I am working on a system that will be able to purge file knowledge on command, along with other related database-wide cleanup of now-useless definition records, but it will take time to complete. There are hundreds of tables in the database that may refer to certain definitions.

If you are concerned about your privacy (and everyone should be!), I strongly recommend putting your hydrus database inside an encrypted container, like with veracrypt or ciphershed or similar software. If you are new to the topic, do some searching around on how it works and try some experiments.

If you are very desperate to hide that you once had a file, I can show you a basic hack to obscure it using SQLite. Basically, if you know the file's hash, you go into your install_dir/db folder, run the sqlite3 executable, and then do this:

(MAKE A BACKUP FIRST IN CASE THIS GOES WRONG)

.open client.master.db
update hashes set hash = x'0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef' where hash = x'06b7e099fde058f96e5575f2ecbcf53feeb036aeb0f86a99a6daf8f4ba70b799';
.exit

That first hash, "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef", should be 64 characters of random hex. The second should be the hash of the file you want to obscure. This isn't perfect, but it is a good method if you are desperate.
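For the 64 characters of random hex mentioned above, any source of 32 cryptographically random bytes works; Python's stdlib `secrets` module is one easy way to generate them (this is just a convenience sketch, not part of the hydrus procedure itself):

```python
import secrets

# 32 random bytes rendered as 64 lowercase hex characters, suitable
# as the replacement hash in the UPDATE statement above
random_hash_hex = secrets.token_hex(32)
print(random_hash_hex)
```

Paste the output in place of the `0123...cdef` literal, keeping the `x'...'` SQLite blob syntax around it.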
I just updated to the latest version, and there seems to be a serious (well, seriously annoying, but not dangerous) bug where frames/panels register mouse clicks as being higher up when you scroll down, as if you hadn't scrolled down. It's happening with the main tag search box drop-down menu, and also in the tag edit window where tags are displayed and you can click on them to select them. I'm on Linux.
>>8159 Sorry, yeah, I messed something up last week doing some other code cleaning. I will fix it for next week and add a test to make sure it doesn't happen again. Sorry for the trouble. I guess I don't scroll and click much when I dev or use the client IRL.
>>8159 >on Linux I confirm that.
>>8159
I've got this problem on windows as well. Also, am I the only one experiencing extremely slow PTR uploads? Instead of uploading 100 mappings every 0.1 seconds, it is now more like 1-4 every 0.1s.
>>8164
i'm also getting this error when uploading to the PTR

v481, win32, frozen
StreamTimeoutException
Connection successful, but reading response timed out!
Traceback (most recent call last):
  File "urllib3\connectionpool.py", line 426, in _make_request
  File "<string>", line 3, in raise_from
  File "urllib3\connectionpool.py", line 421, in _make_request
  File "http\client.py", line 1344, in getresponse
  File "http\client.py", line 307, in begin
  File "http\client.py", line 268, in _read_status
  File "socket.py", line 669, in readinto
  File "urllib3\contrib\pyopenssl.py", line 326, in recv_into
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "requests\adapters.py", line 439, in send
  File "urllib3\connectionpool.py", line 726, in urlopen
  File "urllib3\util\retry.py", line 410, in increment
  File "urllib3\packages\six.py", line 735, in reraise
  File "urllib3\connectionpool.py", line 670, in urlopen
  File "urllib3\connectionpool.py", line 428, in _make_request
  File "urllib3\connectionpool.py", line 335, in _raise_timeout
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='ptr.hydrus.network', port=45871): Read timed out. (read timeout=60)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1460, in Start
    response = self._SendRequestAndGetResponse()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 2036, in _SendRequestAndGetResponse
    response = NetworkJob._SendRequestAndGetResponse( self )
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 710, in _SendRequestAndGetResponse
    response = session.request( method, url, data = data, files = files, headers = headers, stream = True, timeout = ( connect_timeout, read_timeout ) )
  File "requests\sessions.py", line 530, in request
  File "requests\sessions.py", line 643, in send
  File "requests\adapters.py", line 529, in send
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='ptr.hydrus.network', port=45871): Read timed out. (read timeout=60)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\core\HydrusThreading.py", line 401, in run
    callable( *args, **kwargs )
  File "hydrus\client\gui\ClientGUI.py", line 318, in THREADUploadPending
    service.Request( HC.POST, 'update', { 'client_to_server_update' : client_to_server_update } )
  File "hydrus\client\ClientServices.py", line 1206, in Request
    network_job.WaitUntilDone()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1872, in WaitUntilDone
    raise self._error_exception
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1643, in Start
    raise HydrusExceptions.StreamTimeoutException( 'Connection successful, but reading response timed out!' )
hydrus.core.HydrusExceptions.StreamTimeoutException: Connection successful, but reading response timed out!
(27.33 KB 835x522 2022-04-17 150515.png)

(714.79 KB 1457x934 2022-04-17 150704.png)

(4.31 KB 1231x34 2022-04-17 150838.png)

Apologies if the answer is already somewhere on the /hydrus/ board; I haven't been able to find it yet. I'm wondering how to make hydrus able to download pictures from 8chan (using hydrus companion) when direct access results in a 404? I was assuming some fuckery with cookies, but sending the cookies from 8chan through hydrus companion to the hydrus client seemingly made no difference.
>>8166 afaik there's no way to import directly from urls of "protected" boards, but I'd love to be proven wrong.
>>>/hydrus/17585
>Is there a way to automatically add a file's filename to the "notes" of a Hydrus file when importing? Some of the files have date info or window information if they are screenshots and I'd like to store that information somehow. If not, is there some other way to store the filenames so that they can be easily accessible after importing?
>>>/hydrus/17586
>>notes
>I think notes are for when there's a region of an image that gets a label (think gelbooru translations), it's not the best thing for your usecase. The best way would be to have them under a "filename" namespace.
I'm not either of these people, but a filename namespace is useless if the filename cares about case. Hydrus will just turn it all into lowercase. In those scenarios I've had to manually add the filename to the notes for each one... painful. Also, somewhat related: hydrus strips the key from mega.nz urls, so I have to manually add those to notes as well. More pain.
>>8166
Have you tried giving hydrus your user-agent http header as well as the cookies?
>>8174
>Have you tried giving hydrus your user-agent http header as well as the cookies?
No I haven't; I'm still quite inexperienced when it comes to using hydrus, so I don't really know how I'd do that. Using the basic features of hydrus companion is pretty much as far as my skillset goes atm. Would you kindly explain how I might do what you described?
Trying to add page tags to my imported files is turning out to be an even bigger headache than I expected. The page namespace doesn't specify what it is a page of, so you can end up with multiple contradictory page tags. For example, an artist uploads sets of 1-3 images frequently to his preferred site, but posts larger bundles less frequently to another site. Or he posts a few pages at a time of a manga in progress, and when it's finished he aggregates all the pages in a single post for our convenience. Either way, you can end up with images that have two different page tags, both of which are technically correct for a given context, but the tags themselves don't contain enough information to tell which context they're correct in.

If I wanted to be really thorough, I could make a separate namespace for each context a page can exist in, but then I'd be creating an even bigger headache for myself whenever I want to sort by pages. The best I can imagine would be some kind of nested tag system, so you can specify the tags "work:X" and "page:Y(of work:X)", and then sort by "work-page(of work)". As an added bonus, it would make navigation a lot smoother in a lot of contexts. For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter.
>>8183 Hydrus sucks at organizing files that are meant to be a sequential series. This has been a known problem for a long time unfortunately.
>>8183
>For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter.
You may use kinda-nested namespaces:
1 - namespace:whatever soap opera you want (to identify the group)
2 - namespace:chapter 1 (to identify the sub-group)
3 - namespace:chapter 1 - page 01 (to identify the order)
So searching for "whatever soap opera you want" will bring you all related files; then add the chapter to narrow the files, and then sort those files by namespace number. Done.
>>8190 >So searching for "whatever soap opera you want" will bring you all related files, then add the chapter to narrow the files, and then sort those files by namespace number. At that point you're basically navigating folders in a file explorer, just more clumsy. That's exactly what I was trying to get away from when I installed hydrus.
I had a great week of simple work. I fixed some bugs--including the scrolled taglist selection issue--and improved some quality of life. The release should be as normal tomorrow.
>>8192
>At that point you're basically navigating folders in a file explorer
What are you talking about? In Hydrus all files are in a centralized directory and searched with a database. I understand the hassle of tagging manually, but no software is clairvoyant and reads your mind about what exactly you are searching for.
>>8183
if ordered sets are important to you, installing danbooru is an option; they do put their source up on github. Last time I tried it, it was a pain in the ass to get working, but I did eventually manage it. Though it does lack a number of hydrus features I've gotten used to.
>>8183
Hydrus works off of individual files. You can adapt it to multi-file works, but the more robust a solution you need, the more you'll butt up against Hydrus' core design. The current idiomatic solution of generic series, title, chapter, page, etc. namespaces works for 90% of things (with another 9% of things being workable by ignoring all but one context), but if you need a many-to-many relationship, the best you can do is probably use bespoke namespaces for each collection (e.g. "index of X:1", "index of Y:2") and then use the custom namespace sort to view the files in whatever context you've defined. I guess an ease-of-use feature that could be added would be an entry in the tag context menu to sort by that namespace. That way you wouldn't need to type it out every time.
>>8197 >That way you wouldn't need to type it out every time. In the future drag and drop tags may be the solution.
I want to remove the ptr from my database. Is there a way to use the tag migration feature to migrate tag relationships only for tags used in my files? You can do it with the actual tags, but I don't see an option to do something similar for relationships, and I'd rather not migrate over thousands of parents/children and siblings for tags I'll never see.
>>8195
You have to add multiple search terms to narrow it down to something useful, similar to how a file explorer requires you to navigate through several subdirectories to get to what you want. And for moving from chapter 1 to chapter 2, you need to remove one search term and add another. I like how hydrus allows me to pick exactly the search term I want, no matter how broad or narrow, and, with the right tags and the right namespace sorting rules, sorts everything in view into logical sets and logical sequences within those sets.

Maybe I should give a more concrete example of how I manage my stuff. Say an artist uploads both to pixiv and pixiv fanbox. For both services, a post often contains several images in a specific sequence. So I subscribe to both and set the downloader to tag images with the numerical id of the post the image was pulled from (namespace "post id:"), the image's index within all the images in the post (namespace "page:"), and the service it was pulled from (namespace "site:"). Then I just have to search for the artist and set namespace sorting to "site-post id-page", and everything works great.

But then the artist uploads the same image to both services, and suddenly I have an image with two post id tags and two page tags. The quickest solution would be to have one version of each namespace for each site, so my sorting rule would look like "site-fanbox post id-pixiv post id-fanbox page-pixiv page". Looks ugly, but it does the job. If I only ever downloaded from those two services, I could deal with it, but with all the different sites I download from, my sorting rules become a huge fucking mess. I would probably be fine with any quick hack that allows me to define unique namespaces that get treated as the same namespace for the purpose of sorting (for example, "post id(site:pixiv)" and "post id(site:fanbox)" are treated as if they're just "post id"). It wouldn't sort reliably in every context, but it would be good enough for my purposes.
However, the dream would be if (assuming the sorting rule is "site-post id") it first sorted by site and then looked for a "post id(*):" tag, where * is the site it sorted by. Unfortunately I don't know enough about databases or sorting to tell how feasible something like this would be.
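Outside hydrus, the "pick the post id(*) tag that matches the site it sorted by" idea can be prototyped as an ordinary sort key. Everything here is a hypothetical sketch for illustration: the tag-dict representation, the namespace names, and the function are all assumptions, not hydrus behaviour:

```python
def sort_key(tags: dict) -> tuple:
    """Collapse per-site 'post id(site:X)' and 'page(site:X)'
    namespaces into one sort axis by looking up the variant that
    matches the file's own 'site:' tag. Falls back to the plain
    namespace if no per-site variant exists.

    Note: values sort as strings here; real ids would want numeric
    comparison (e.g. zero-padding or int conversion)."""
    site = tags.get("site", "")
    post_id = tags.get(f"post id(site:{site})", tags.get("post id", ""))
    page = tags.get(f"page(site:{site})", tags.get("page", ""))
    return (site, post_id, page)

# toy files, each as a namespace->value dict (an assumption; hydrus
# stores tags differently)
files = [
    {"site": "fanbox", "post id(site:fanbox)": "200", "page(site:fanbox)": "02"},
    {"site": "pixiv", "post id(site:pixiv)": "100", "page(site:pixiv)": "01"},
    {"site": "fanbox", "post id(site:fanbox)": "200", "page(site:fanbox)": "01"},
]
files.sort(key=sort_key)
```

After sorting, the two fanbox pages sit together in page order, followed by the pixiv file, which is exactly the "first sort by site, then look up the matching per-site tag" behaviour described above.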
>>8166
Looks like you need to send the referral URL with your request. The 8chan.moe thread downloader that comes with hydrus already takes care of that, so I assume you're trying to download individual files or something? I think the proper thing here would be for hydrus companion to attach the thread you found the image in as the referral URL, but I'm not sure if the hydrus API even supports that at the moment. So failing that, you can give 8chan.moe files a URL class and force hydrus to use https://8chan.moe/ as the referral URL for them when no other referral URL is provided. Hopefully this won't get you banned or anything.
https://www.youtube.com/watch?v=PGEZutQ-tCM

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.Linux.-.Executable.tar.gz

I had a great week doing cleanup and other simple work.

highlights

I fixed the problem where clicks on a scrolled taglist were going to the wrong location. I was cleaning up some ancient wx->Qt code hacks, and it seems I rarely scroll and click when working, so I never noticed the problem. I have a new test to make sure this does not happen again. Sorry for the trouble!

The URLs in the top-right hover menu are now styled better. No longer underlined, and now colourable by QSS. I have updated all the default stylesheets that come with the client (you can set these under options->style) to have some decent colours. If you have your own custom QSS, check my default to see how to set it yourself.

You can now set duplicate action options to 'always archive both files', if you want to play with making the duplicate filter do some of the work of the archive/delete filter. Also, the duplicate filter now has improved image prefetch. There should be less flickering when you switch from A to B the first time and when you action a pair and move to the next. Please note that if you still get flicker for 4k images, try boosting the image cache size under options->speed and memory (I boosted the default up to 384MB this week, so you might like to give it some more too).

full list

- misc:
- fixed the stupid taglist scrolled-click position problem--sorry! I have a new specific weekly test for this, so it shouldn't happen again (issue #1120)
- I made it so middle-clicking on a tag list does a select event again
- the duplicate action options now let you say to archive both files regardless of their current archive status (issue #472)
- the duplicate filter is now hooked into the media prefetch system. as soon as 'A' is displayed, the 'B' file will now be queued to be loaded, so with luck you will see very little flicker on the first transition from A->B
- I updated the duplicate filter's queue to store more information and added the next pair to the new prefetch queue, so when you action a pair, the A of the next pair should also load up quickly
- boosted the default sizes of the thumbnail and image caches up to 32MB and 384MB (from 25/150) and gave them nicer 'bytes quantity' widgets in the options panel
- when popup windows show network jobs, they now have delayed hide. with luck, this will make subscriptions more stable in height, less flickering as jobs are loaded and unloaded
- reduced the extremes of the new auto-throttled pending upload. it will now change speed slower, on less strict of a schedule, and won't go as fast or as slow at the extremes
- the text colour of hyperlinks across the program, most significantly in the top-right media hover window, can now be customised in QSS. I have set some ok defaults for all the QSS styles that come with the client; if you have a custom QSS, check out my default to see what you need to do. also, hyperlinks are no longer underlined, and you can't 'select' their text with the mouse any more (this was a weird rich-text flag)
- the client api and local booru now have a checkbox in their manage services panel for 'normie-friendly welcome page', which switches the default ascii art for an alternate
- fixed an issue with the hydrus server not explicitly saying it is utf-8 when rendering html
- may have fixed some issues with autocomplete dropdowns getting hung up in the wrong position and not fixing themselves until a parent resize event or similar
- .
- code cleanup:
- about 80KB of code moved out of the main ClientDB.py file:
- refactored all combined files display mappings cache code from the core database to a new database module
- refactored all combined files storage mappings cache code from the core database to a new database module
- refactored all specific storage mappings cache code from the core database to a new database module
- more misc refactoring of tag count estimate, tag search, and other code down to modules
- hooked up the specific display mappings cache to the repair system correctly--it had been left unregistered by accident
- some misc duplicate action options code cleanup
- migrated some ancient pause states--repository, subscriptions, import&export folders--to the newer options structure
- migrated the image and thumbnail cache sizes to the newer options structure
- removed some ancient db and dialog code from the retired dumper system

next week

I want to catch up on some github issues and do a little more multiple local file services work.
(18.35 KB 871x737 meme collection.png)

I hope collections will be expanded upon in the future. It's very nice to be able to group images together in a page, but often I want an overview of the individual images in a group. Right now I have to right-click a group and pick open->in a new page, which is awkward. Here's a quick mock-up of how I'd like it to work. Basically, show all images, but visually group them together based on the selected namespaces.
>>8203
>I assume you're trying to download individual files or something?
Yes, kinda... I'm using hydrus companion's right-click -> hydrus companion -> send to hydrus. I'm browsing threads which I don't want to watch but which contain a few select pictures I'd still like to save. I tried looking into your suggested solution, but I'm still very inexperienced with hydrus and have so far had no luck setting up an url class for 8chan.moe files. I'll keep trying in the meantime, just wanted to give you an update on what I was trying to do.

On an unrelated note, I did some digging and probably found what exactly the problem is. Please do not be fooled, I am no expert. Far from it. I was just lucky enough to know about inspect element, compared the direct and indirect links, and used some googling. I must reiterate that despite what it may seem, I am a complete noob at this and anything related to it. I do not possess the knowledge or skill necessary to understand probably 90% of instructions you might throw at me if they're not in a step-by-step format. That's not a demand btw, just a cautionary word. I appreciate all the support that I can receive.

Anyway, with that disclaimer out of the way, here's what I found. Comparing the "request headers" under the network section of inspect element for the 404 against those for the 304, I found 2 things of note:

Referrer Policy: strict-origin-when-cross-origin
and
sec-fetch-site: same-origin (or sec-fetch-site: none)

Googling this allowed me some insight into what the 8chan administration did to achieve this frustrating but unfortunately necessary situation. As far as I can tell, this "sec-fetch-site" is filled out by the application (in this case chrome) as it sees fit.
So all hydrus would need to do when requesting 8chan.moe files is send "sec-fetch-site: same-origin". No idea if anything I just explained is of any use to you, or if you already knew all of this, but I thought it better to share what info I have than withhold it. The bane of all customer support, amirite? (No pictures this time because of login cookies and other identifiable info being vehemently present)
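To illustrate the idea (this is not something hydrus supports out of the box, just a sketch): in python you can attach that fetch-metadata header to a request yourself. The file URL and referer below are placeholders, and whether the server accepts a spoofed header from a non-browser client is not guaranteed--browsers normally set these themselves, and the server may check more than this one header.

```python
import urllib.request

def same_origin_request(url, referer):
    # Pretend to be a same-origin browser fetch by supplying the
    # fetch-metadata header discussed above, plus a referer.
    req = urllib.request.Request(url)
    req.add_header("Referer", referer)
    req.add_header("Sec-Fetch-Site", "same-origin")
    return req

req = same_origin_request(
    "https://8chan.moe/.media/example.png",  # placeholder file URL
    "https://8chan.moe/t/res/8151.html",     # placeholder thread URL
)
# urllib.request.urlopen(req)  # the actual network call, left commented out
```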
>>8210 The png I posted contains the URL class. Just go to network > downloaders > import downloaders and drag and drop the image from >>8203
Any way to stop hydrus from running maintenance (in my case PTR processing) while it's downloading subscriptions? I think that should prevent maintenance mode from kicking in. It always happens when I start hydrus and leave it to download subs, because I have idle set at 5 minutes. The downloads slow to a crawl because PTR processing is hogging the database. I could raise the time to idle, but I still want it that low once hydrus has finished downloading subs...
Is there any way to export the notes, like the file and tags? Something like:
File: test.jpg
Tags: test.jpg.txt
Notes: test.jpg.notes.txt
>>8219 I get the impression that notes are a WIP feature. Personally I'm hoping we'll get the option to make the content parser save stuff as notes soon.
(5.19 KB 402x62 ClipboardImage.png)

>>8212 Bruh
>>8221 seems like you're not on the latest version
Are there plans to add dns over https support to hydrus? Most browsers seem to have that feature now, so it'd be cool if hydrus did too.
How do I enable a web interface for my Hydrus installation, so others can use it via my external IP? I need something simple like hydrus.app, but unfortunately it refuses to work with my external IP and only accepts localhost, even though I enabled non-local access in the API, and entering my external IP in a browser opens the same API welcome page as localhost does. Who runs that app, anyway? Where do I see support for it?
>>8164 >>8165
Thank you for these reports. I added some pending-commit auto-throttling in 481, so instead of always going for 100 rows, it could go 1-1,000 depending on how fast your machine and the PTR were doing. It seems to have backfired for some people. For 482, I capped the limits at 25-500, increased the tolerance of the test, and reduced the acceleration. It should be less spiky while still responding to a slow database or busy PTR, but I'll be interested to know what you get.

As for the read timeout on the PTR, that's more odd. Maybe the PTR was super super busy when you were talking to it, but 60 seconds without a response seems extreme. This error is essentially harmless, so don't worry too much, please just try again later. Let me know if you still get it this week and in future. It may be the result of my auto-throttling, it may just have been the PTR being super busy one day, or it might be something else. If it keeps happening, I'll write a hook for 'the PTR is busy atm, try again later' or similar.

>>8174
Your thoughts on filenames have a similar parallel with the 'title' tag, which I was very keen on when I started hydrus but which I now generally think has been a failure. Tags are good for searching, not describing. I'd like more notes import/export support, along with the recently added Client API support, so we can play with notes more for richer descriptive metadata.

For Mega URLs, try checking the 'keep fragment when normalising' checkbox in the URL Class dialog. That should keep the #blah bit. I originally added that checkbox for a Mega-supporting experiment, although I don't see anything on the github here https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders so I am not sure how well that ended up going. If you make a nice URL Class for Mega, I'd be interested in seeing it--it would probably be a good thing to add to the defaults, just on its own as an URL the client recognises out of the box.
>>8200 Ah, yeah, sorry, I don't have a nice way to filter siblings or parents by files you have yet. This has come up before, I remember now, and I'd like to add it. I recommend you migrate all the siblings and parents now, and in future when a filtering operation becomes available you can do it then. Some things will still be slow, like the edit sibs/parents dialog, but actually applying siblings and parents will be super fast since you won't have all the PTR mappings to work on.
>>8209
Thanks. Yeah, this is exactly what I want to do too. I am in the midst of a long rewrite to clean up some bad decisions I made when first making the thumbnail grid, and as I go I am adding more selection and display tools. Once things are less tangled behind the scenes, I will be able to write a 'group by' system like this, both the data structure behind it and the new display code needed. Unfortunately it will take time, but I agree totally.

>>8216
There's no explicit way at the moment. I have generally been comfortable with both operations working at the same time, since I'm generally ok if subs run at, say, 50% speed. I designed subs to be a roughly background activity and don't mean for them to run as fast as possible. If your machine really struggles to do both at once though, maybe I can figure out a new option. I think your best shot in the meantime, since PTR processing only works in idle time but subs can run any time, is to tweak the other idle mode options. The mouse one might work, if you often watch your subs come in live, or the 'consider the system busy if CPU above' might work, as that stops PTR work from starting if x cores are busy. If you are tight on CPU time anyway, that could be a good test for other situations too. You can also just turn off idle PTR processing and control it manually with 'process now' in services->review services. I don't like suggesting this solution as it is a bit of a sledgehammer, but you might like to play with it.

>>8219 >>8220
Yeah, not yet, but more import/export options will come. If you know scripting, the Client API can grab them now:
https://hydrusnetwork.github.io/hydrus/client_api.html
https://hydrusnetwork.github.io/hydrus/developer_api.html
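As a sketch of that scripting route: the Client API's documented /get_files/search_files and /get_files/file_metadata endpoints can fetch file info, and a script can write the sidecar files itself. The exact shape of the notes data in the metadata response is an assumption here (check the API docs for your version); the access key is a placeholder.

```python
import json
import urllib.parse
import urllib.request

API = "http://127.0.0.1:45869"           # default Client API address
KEY = "replace-with-your-access-key"     # placeholder

def api_get(path):
    # Minimal GET helper using the documented access-key header.
    req = urllib.request.Request(API + path)
    req.add_header("Hydrus-Client-API-Access-Key", KEY)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def notes_to_sidecar(notes):
    # Flatten an assumed {name: text} notes dict into one .notes.txt blob.
    return "\n\n".join(f"[{name}]\n{text}" for name, text in sorted(notes.items()))

# Sketch of the flow (not run here):
# tags = urllib.parse.quote(json.dumps(["system:everything"]))
# ids = api_get("/get_files/search_files?tags=" + tags)["file_ids"]
# ...then fetch metadata per batch of ids with /get_files/file_metadata and
# write notes_to_sidecar(...) next to each exported file.
```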
>>8223
For advanced technical stuff like that, I am limited by the libraries I use. My main 'go get stuff' network library is called 'requests', a very popular python library https://docs.python-requests.org/en/latest/ although for actual work I think it uses the core urllib3 python library https://pypi.org/project/urllib3/ . So my guess is that when python supports it and we upgrade to that new version of python, this will happen naturally, or it will be a flag I can set. I searched a bit, and there might be a way to hack it in using an external library, but I am not sure how well that would work. I am not a super expert in this area.

Is there a way of hacking this in at the system level? Can you tell your whole OS to do DNS lookups over https, in the same way you can override which IP to use for DNS? If this is important to you, that might be a way to get all your software to work that way. If you discover a solution, please let me know, I would be interested. Otherwise, I think your best simple solution for now is to use a decent VPN. It isn't perfect, but it'll obscure your DNS lookups to smellyfeetbooru.org and similar from your ISP.
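On the system-level question: one common approach (my suggestion, not something hydrus or this thread confirms) is to run a local DoH forwarder such as dnscrypt-proxy and point the OS resolver at 127.0.0.1, so every application's lookups, hydrus's included, go out encrypted. A minimal config sketch, with option names taken from dnscrypt-proxy's example file (file location varies by distro):

```toml
# /etc/dnscrypt-proxy/dnscrypt-proxy.toml (abridged sketch)
listen_addresses = ['127.0.0.1:53']
server_names = ['cloudflare']   # any DoH-capable resolver from the public list
require_dnssec = true
```

You would then set 127.0.0.1 as your system DNS server in the usual way.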
>>8232
The various web interfaces are all under active development right now. All are in testing phases, and I am still building out the Client API, so I can't promise there are any 'nice' solutions available right now. All the Client API tools are made by users. Many hang out on the discord, if you are comfortable going there: https://discord.gg/wPHPCUZ

The best place to get support otherwise is probably on the gitlab/github/whatever sites where the actual projects are hosted, if they have issue trackers and etc. For Hydrus.app I think that's here: https://github.com/floogulinc/hydrus-web

I'm not sure why your external IP access isn't working. If your friend can see the lady welcome page across the internet, they should be able to see the whole Client API and do anything else. Sometimes http vs https can be a problem here.
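If you want to sanity-check remote reachability yourself, the Client API's /api_version endpoint needs no access key, so a quick script can confirm the port is visible from outside. The external IP below is a documentation placeholder:

```python
import urllib.request

# Placeholder external IP; 45869 is the Client API's default port.
HOST = "http://203.0.113.5:45869"

req = urllib.request.Request(HOST + "/api_version")
print(req.full_url)
# urllib.request.urlopen(req)  # uncomment to actually hit the server; if the
# service is configured for https, the scheme above must be https:// instead
```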
>>8233
>If you make a nice URL Class for Mega, I'd be interested in seeing it--it would probably be a good thing to add to the defaults, just on its own as an URL the client recognises out of the box.
Is it even possible to download mega links through hydrus? I've been using mega.py for automating mega downloads, and looking at the code for that, it seems quite a bit more complicated than just sending the right http request. https://github.com/odwyersoftware/mega.py/blob/master/src/mega/mega.py#L695 I'd love to be proven wrong, but it looks to me like this is a job for an external downloader.

Speaking of which, any plans to let us configure a fallback option for URLs that hydrus can't be configured to handle directly? At the very least, I want to be able to save URLs for later processing.
>>8237
>Is it even possible to download mega links through hydrus?
No. #fragment text is never sent to a server, so it won't work as a traditional URL. Mega use clientside javascript or their add-on to read the fragment text and convert it into navigation commands in their client. Eventually that gets converted into whatever clever streaming download system they actually have.

If you want to download Mega links, I recommend megatools or jdownloader. Just copy/paste from hydrus. Or, if you want to browse, click on the link in the top-right hover of hydrus's media viewer to open it up in your browser, but bear in mind that #fragment text will often not survive a normal OS call, so you'll need to set an explicit browser executable path under options->external programs. To save a mega link in hydrus, you'll basically have to set it manually with 'manage urls', although I know some users are working on downloaders and Client API tools that will associate these URLs automatically.

For native hydrus support, in the future I'd like to have an 'exe manager' that says like 'this exe is called ffmpeg, it is here, and with these commands it will convert a webm to an mp4', for all sorts of external exes--waifu2x or youtube-dl, or indeed jdownloader. Then I can write a hook for that into URL Classes or whatever and automatically send a mega URL to an external downloader and pick up the downloaded files later for import, all natively in the client. This will be some time off though, so you'll have to do it manually for now.
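The point about fragments is easy to see with python's own URL parser; the mega-style link below is made up:

```python
from urllib.parse import urlparse

url = "https://mega.nz/file/AbCdEfGh#fake_key_material"  # made-up link
parts = urlparse(url)

# Only the scheme/host/path/query are used to build the HTTP request;
# the fragment stays on the client side and never reaches the server.
print(parts.path)      # -> /file/AbCdEfGh
print(parts.fragment)  # -> fake_key_material
```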
>>8238 My problem is that some of the galleries I subscribe to might occasionally contain external links. For example, some artists uploading censored images, but also attaching a mega or google drive link containing the uncensored versions. I can easily set up the parser to look for these URLs in the message body and pursue them, but if hydrus itself doesn't know how to handle them, they get thrown out. Would be nice if these URLs could be stored in my inbox in some way, so I can check if I want to download them manually or paste them into some other program. Even after you implement a way to send the URL to an external program (which sounds great), it would be useful to see what URLs hydrus found but didn't know what to do with, so the user can know what URL classes they need to add.
>>8233 >For Mega URLs, try checking the 'keep fragment when normalising' checkbox in the URL Class dialog. That should keep the #blah bit. Oh wow, I never knew what that option did. Thanks! I made url classes. Note: one of the mega url formats (which I think is an older format) has no parameters at all, it's just "https://mega.nz/#blah". So if you just give it the url "https://mega.nz/" it will match that url. Kind of weird, but not really a huge issue. >>8184 I mean, that's not really particular to hydrus. It's true for almost any booru.
Hey, after exiting the duplicate filter I was greeted with two identical 'NoneType' object has no attribute 'GetHash' errors:

v482, linux, source
AttributeError
'NoneType' object has no attribute 'GetHash'
Traceback (most recent call last):
  File "/opt/hydrus/hydrus/core/HydrusPubSub.py", line 138, in Process
    callable( *args, **kwargs )
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3555, in ProcessApplicationCommand
    self._GoBack()
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3120, in _GoBack
    for hash in ( first_media.GetHash(), second_media.GetHash() ):
AttributeError: 'NoneType' object has no attribute 'GetHash'

(The second traceback was identical.) I'm running the AUR version, if you need any more info let me know.
Is it just me or are URL classes needlessly restrictive? Forcing every URL to be either a gallery, a post, or a file seems to create more issues than it solves.

A post on kemono.party contains a link to a google drive folder, so all I need to do is parse it as a pursuable URL and let the google drive downloader handle the rest, right? Except google drive folder URLs count as gallery URLs, and you can only pursue post and file URLs. Okay, I'll parse it as a next gallery page instead. Except you can only do that from a gallery parser, not from a post parser.

That leaves two solutions. One, change kemono.party posts to count as galleries so that they're allowed to direct to other gallery URLs. That fucks with URL association, since you're only allowed to set associated URLs from post URL parsers. Two, change the URL class of google drive folders so that they count as post URLs (with multiple files), so that post URL parsers are allowed to pursue them. This breaks the google drive folder parser, because it's no longer allowed to go to the next gallery page. Hold on, what if I also change next gallery pages to be pursuable URLs? Not intuitive at all, but it actually does seem to work so far.

As far as I can tell, the only reason to set something as a gallery URL is if you want it to be able to direct to other gallery URLs, or if you want to make use of URL parameters to find the next page. But jesus christ what a headache it was to figure all of this out while navigating between the URL class manager, the parser manager, and the download page file log. I'm guessing some of these restrictions are there to prevent people from accidentally configuring a parser that requests the next page ad infinitum, but there has to be a better way.

I also have a sneaking suspicion that the dev only really downloads off boorus and designed the system around that, and that features like sub-gallery pages and the "post page can produce multiple files" option had to be tacked on later to support other use cases.
Could the downloading black/whitelist be adjusted to match a search, rather than just specific tags? There are a lot of kinds of posts I'd rather not download, but most of the time they aren't simple enough to be accurately described with a single tag.
I was ill for the start of the week and am short on work time. Rather than put out a slim release, I will spend tomorrow doing some more normal work and put the release off a week. 483 should be on the 4th of May. Thanks everyone! >>8246 Sorry, I messed up some duplicate logic that will trigger on certain cases where it wants to back up a pair! This is fixed in 483 along with more duplicate filter code cleanup, please hang in there.
>>8260 Get well anon.
>>8239
For now, I think your best bet is to tell the parser to add these URLs as 'url to associate (source url)' rather than 'url to download/pursue'. It will attach these google drive or mega or whatever links to the file as a known url, and if you have a matching URL Class like in >>8240 you'll see them nicely named in the media viewer top-right hover, but it won't download them yet. In future, when we get support (or there's a Client API solution, whatever), we'll scan the database for all the URLs of the URL classes we now support and do them retroactively.

>>8240
Thank you, I will add these!

>>8248
I am sorry for the trouble. When I next do a network overhaul, I would like to add more tools here. You are correct that my main fear here is loops and crazy big searches. I don't want a google folder that parses ten google folders that parse a pixiv artist link that then grabs 3,000 files that grab several other external links that splay out into a handful of deviant art tag searches by accident, and so on.

You are also right that I built the system for boorus originally (and some gallery sites like hentai foundry or deviant art), hence the gallery/post system. Since the downloader engine is locked into this atm, everything we have done since has been working with these fundamental objects, so the more a site deviates from that model, the shakier hydrus is with it.

Maybe I can define a 'folder tree' downloader object in a big future update, something more akin to jdownloader or a torrent client resolving a magnet link, where rather than downloading automatically, it instead parses the tree and presents you a summary in some new UI so you can choose what to download. I am not totally sure yet though, since that would be a ton of work, and meaty, usually human-triggered actions like 'download 3.2GB from this Mega' are already well handled by other software. I would also, in the next overhaul, like to unify the edit UI in general.
Jumping between the different dialogs, and the general nightmare of nested dialogs when editing parsers--I'd like to clean most of that up. Also, a highly requested feature in downloaders is downloader versioning. The update system is a complete nightmare. Just a lot of work. I am not sure when it will happen. I want to finish the multiple local file services system and then do some tag repository admin/janny workflow improvements, which will probably take me into Q3 of this year. Then I'll be free to do some other 'big' work. Most likely something to do with file relationships, since that is most popular, and then I think downloader versioning is not far behind. So, while not trying to be too optimistic or pessimistic, I hope I may be seriously planning at least some of this early/mid 2023.

>>8257
Not yet, but perhaps in the future. I am planning more metadata filtering tools, and it would be nice to unify that with the other hardcoded rules we have at the moment, like 'do not download a gif > 32MB'. What sort of searches are you thinking of--something with a lot of OR clauses? Or something like 'nothing of this character by this artist'? Bear in mind that while I can expand post-download filtering too, I usually only know the tags of a file when I run the tag filters. I sometimes know the filesize and filetype right as I start a download, but I can't do something like 'veto files less than 5 seconds long' and stop the download early to save you bandwidth.

>>8262
Thanks m8, doing great now. Keep on pushing.
Is there an (easy) way to extract the data used to make the file history chart into a CSV? I'd like to play around with that data myself.
Is there a way to exclude downloading files from a specific Booru/Gallery site? I want to make it so that I don't download my files from Pixiv when I use the feature that looks up a file on SauceNao and IQDB and sends the link to Hydrus. For Pixiv, I don't want to download my files from there since the tags are in Japanese, and are few in number compared to other sites like Gelbooru. This should be the easiest solution to this issue, though another solution would be to have another downloader option that specifically only searches IQDB, rather than having to use Saucenao and IQDB together, since that option always prioritizes downloading from Pixiv.
Minor bug report: hovering over tags while in the viewer and scrolling with the mouse wheel causes the viewer to move through files as if you were scrolling on the image itself. May be related to the bug from a few weeks ago.
I had a good couple of weeks. There are a variety of small fixes and quality of life improvements and the first version of 'multiple local file services' is ready for advanced users to test. The release should be as normal tomorrow.
>>8326 hello mr dev I just found out about this software and from reading the docs I have only this to say: based software based dev long live power users
Hey hydev, I'm moving to a new OS soon. Whatever happened recently in hydrus made video more stable, so I can parse it now. I know I asked about this a while ago: having a progress bar permanently under the video as an option. I'm wondering if that ever got implemented, or if it's something you haven't gotten to yet? I run into quite a few 5-second gifs next to 3-minute-long webms, and hovering the mouse over them takes up a not-insignificant amount of the video, at least enough that I have to move the mouse off just to move it back to scrub. Thanks in advance for any response.
Just want to confirm the solution for broken mpv on my half-sloppy debian install, as in this issue: https://github.com/hydrusnetwork/hydrus/issues/1130 As suggested, copying just the system libgmodule-2.0.so into the Hydrus directory helps, although the path may be different; I have those files at /usr/lib/x86_64-linux-gnu/.
https://www.youtube.com/watch?v=ymI1g2VjyCY

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.Linux.-.Executable.tar.gz

I had a good couple of weeks doing some regular work and getting 'multiple local file services' ready for testing.

multiple local file services

This is not ready for everyone yet! Advanced users only for now, please. I turned multiple local file services on in debug mode last week, just to see how things were looking, and it turned out surprisingly great--no big problems. For several months now I have been doing prep work for it, and that seems to have paid off. I decided to finish the last important things and get a v1.0 out.

So, it is now possible to have multiple 'my files' services in your client, and to search, import to, and migrate files between them. These services are completely blind to each other, so searching for autocomplete tags in one will not return suggestions from another. The hope is this will allow fairly good sfw/nsfw-style separations in clients and open up interesting new contained workflows.

I am recommending this only for advanced users for now, and moreso only those who have been following this feature. I have not yet written up nice help for this, and some of the UI/workflow is still not user friendly, so what I would like is for people who are enthusiastic to try it out and let me know what they think. I really haven't run into any massive errors, but I won't encourage you to go crazy on a real client yet. Go nuts on a new empty test client, or experiment carefully on a real client, just in case something goes wrong, and I will keep polishing the experience.

The basics are: you can now make a new 'local file domain' in manage services. File import options now lets you import to different or multiple local file domains, and thumbnail right-click lets you copy or move files between them too. The normal search page dropdown lets you jump between local services just like searching trash, and of course it now supports multiple domains if you want to do a union. The delete and undelete commands are similarly a little more powerful when you start adding new services. Check out the changelog for more specific details.

Next step, I think, is to make it more obvious when thumbnails/files are in certain services, since at the moment you have to scan the text on the status bar, top media hover, or thumbnail menu. Maybe custom icon rules (e.g. 'when the file is in "sfw" domain, give it a flower icon'). Then general polish like shortcut integration, maybe some more search tech, and then I really want to write a nice help document for it all to introduce normal experienced users to the idea. Some 'merge these clients' tech would also be great, so users who have been using two or more clients for years can finally combine them into one.

the rest

This is a two-week release because I was ill earlier on and it cut into my work time. So, there is a mix of different small work: updated downloaders, reworked sibling&parent help with some neat new charts, fixes and improvements to the duplicate filter, and some quality of life in UI labelling and texts. Nothing super important, but some things should be a bit smoother!
full list

- multiple local file services:
- the multiple local file services feature is ready for advanced users to test out! it lets you have more than one 'my files' service to store things, which will give us some neat privacy and management tools in future. there is no nice help for this feature yet, and the UI is still a little non-user-friendly, so please do not try it out unless you have been following it. and, while this has worked great in all testing, I generally do not recommend it for heavy use on a real client either, just in case something does go wrong. with those caveats, hit up _manage services_ in advanced mode, and you can now add new 'local file domain' services. it is possible to search, import to, and migrate files between these and everything basically works. I need to do more UI work to make it clear what is going on (for instance, I think we'll figure out custom icons or similar to show where files are), and some more search tech, and write up proper help, and figure out easy client merging so users can combine legacy clients, but please feel free to experiment wildly on a fresh client or carefully on your existing one
- if you have more than one local file service, a new 'files' or 'local services' menu on thumbnail right-click handles duplicating and moving across local services. these actions will preserve original import times (e.g. if you move from A to B and then back to A), so they should be generally non-destructive, but we may want to add some advanced tools in future. let me know how this part goes--I think we'll probably want a different status than 'deleted from A' when you just move A->B, so as not to interfere with some advanced queries, but only IRL testing will show it
- if you have a 'file import options' that imports files to multiple local services but the file import is 'already in db', the file import job will now examine if and where the file is still needed and send content update calls to fill in the gaps
- the advanced delete files dialog now gives a new 'delete from all and send to trash' option if the file is in multiple local file domains
- the advanced delete files dialog now fully supports file repositories
- cleaned up some logic on the 'remember action' option of the advanced file deletion dialog. it also supports remembering specific file domains, not just the clever commands like 'delete and leave no record'. also this dialog no longer places the 'suggested' file service at the top of the radio button list--instead it selects that 'suggested' if there is no 'remember action' initial selection applicable. the suggested file service is now also set by the underlying thumbnail grid or media canvas if it has a simple one-service location context
- the normal 'non-advanced' delete files dialog now supports files that are in multiple local file services. it will show a part of the advanced dialog to let you choose where to delete from
- .
- misc:
- thanks to user submissions, there is a bit more help docs work--for file search, and for some neat new 'mermaid' svg diagrams in siblings/parents, which are automatically generated from a markup and easy to edit
- with the new easy-to-edit mermaid diagrams, I updated the unhelpful and honestly cringe examples in the siblings and parents help to reflect real world PTR data and brushed up all the text in the top sections
- just a small thing--the 'pages' menu and the page picker dialog now both say 'file search' to refer to a page that searches files. previously, 'search' or 'files' was used in different places
- completely rewrote the queue code behind the duplicate filter. an ancient bad idea is now replaced with something that will be easier to work with in future
- you can now go 'back' in the duplicate filter even when you have only done skips so far
- the 'index string' of duplicate filters, where it says 53/100, now also says the number of decisions made
- fixed some small edge case bugs in duplicate filter forward/backward move logic, and fixed the recent problem with going back after certain decisions
- updated the default nijie.info parser to grab video (issue #1113)
- added in a user fix to the deviant art parser
- added user-made Mega URL Classes. hydrus won't support Mega for a long while, but it can recognise and categorise these URLs now, presenting them in the media viewer if you want to open them externally
- fixed Exif image rotation for images that also have ICC Profiles. thanks to the user who provided great test images here (issue #1124)
- hitting F5 or otherwise saying 'refresh' explicitly will now turn a search page that is currently in 'searching paused' to 'searching immediately'. previously it silently did nothing
- the 'current file info' in the media window's top hover and the status bar of the main window now ignores deletion reason, and also file modified date if it is not substantially different from another timestamp already stated. this data can still be seen on the file's right-click menu's expanded info lines off the top entry. also, as a small cleanup, it now says 'modified' and 'archived' instead of 'file modified/archived', just to save some more space
- like the above 'show if interesting' check for modified date, that list of file info texts now includes the actual import time if it is different than other timestamps (for instance, if you migrate it from one service to another some time after import)
- fixed a sort error notification in the edit parser dialog when you have two duplicate subsidiary parsers that both have vetoes
- fixed the new media viewer note display for PyQt5
- fixed a rare frame-duration-lookup problem when loading certain gifs into the media viewer
- .
- boring code cleanup:
- cleaned up search signalling UI code; a couple of minor bugs with 'searching immediately' sometimes not saving right should be fixed
- the 'repository updates' domain now has a different service type. it is now a 'local update file domain' rather than a 'local file domain', which is just an enum change but marks it as different to the regular media domains. some code is cleaned up as a result
- renamed the terms in some old media filtering code to make it more compatible with multiple local file services
- brushed up some delete code to handle multiple local file services better
- cleaned up more behind the scenes of the delete files dialog
- refactored ClientGUIApplicationCommand to the widgets module
- wrote a new ApplicationCommandProcessor Mixin class for all UI elements that process commands. it is now used across the program and will grow in responsibility in future to unify some things here
- the media viewer hover windows now send their application commands through Qt signals rather than the old pubsub system
- in a bunch of places across the program, renamed 'remote' to 'not local' in file status contexts--this tends to make more sense to people out of the gate
- misc little syntax cleanup

next week

Some small misc jobs and user-friendly-isation of multiple local file services.
>>8333
sounds great. with this I will be able to have:
Inbox
Seen to parse
Parse nsfw
Parse sfw
Archive nsfw
Archive sfw
if I'm able to search across everything I get unfiltered results, but being able to refine down to specific groups beyond just a rating filter would be great
>>8333 Does copying between local file services duplicate the file in the database?
Is it just me or is there a bug preventing files from being deleted in v483? I can send them to trash but trying to "physically delete" them doesn't work. Hitting delete with files selected does nothing, neither does right clicking and hitting "delete physically now".
(3.66 KB HydrusGraph.zip)

>>8317
Not an easy way, but attached is the original code that a user made to draw something very similar in matplotlib. If you adjust this, you could pipe it to another format, or look through the SQL to see how to extract what you want manually.

My code is a bit complicated and too interconnected to extract easily. The main call is here--

https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/db/ClientDB.py#L3098

--but there's a ton of advanced bullshit there that isn't easy to understand. If you have python experience, I'd recommend you run the program from source and then pipe the result of the help->show file history call to another location, here:

https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/gui/ClientGUI.py#L2305

I am also expecting to expand this system. It is all hacked atm, but as it gets some polish, I expect it could go on the Client API like Mr Bones recently did. Would you be ok pulling things from the Client API, like this?:

https://hydrusnetwork.github.io/hydrus/developer_api.html#manage_database_mr_bones
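If the Client API route works, pulling Mr Bones is just one authenticated GET. A minimal sketch--the port and endpoint are the documented defaults, but the access key is a placeholder you would generate in the client under services->review services, and the exact stat key names in the response may vary by version:

```python
import json
from urllib.request import Request, urlopen

API_URL = "http://127.0.0.1:45869"  # default Client API address
ACCESS_KEY = "replace-with-your-access-key"  # placeholder; make one in the client

def fetch_mr_bones(api_url: str = API_URL, access_key: str = ACCESS_KEY) -> dict:
    """GET /manage_database/mr_bones with the access key header."""
    req = Request(
        api_url + "/manage_database/mr_bones",
        headers={"Hydrus-Client-API-Access-Key": access_key},
    )
    with urlopen(req) as resp:
        return json.load(resp)

def summarise(stats: dict) -> str:
    """Flatten a stats dict into sorted 'key: value' lines for eyeballing."""
    return "\n".join(f"{k}: {v}" for k, v in sorted(stats.items()))
```

From there you could dump `summarise()` of the returned stats on a schedule and plot the numbers however you like (I believe the payload nests under a 'boned_stats' key, but check the API docs rather than trusting this sketch).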
>>8321
Is this feature to chase up links after SauceNao something on Hydrus Companion or similar? I don't work on that, so I'm afraid I can't help there, but I have been thinking of adding a feature on the hydrus side to say 'never download this'. A bit like a tag blacklist, but for URL Classes, so in your case you'd say 'never download from pixiv'. I was mostly thinking of it in terms of 'this domain is broken currently' tech, but I'd expose it to the user too. However, if you want to download from pixiv on other occasions, this might not be helpful.

>>8325
Thank you for this report! I think the scroll is ok as long as there is a scrollbar on the taglist that can move in that direction, but if the scrollbar is at the end, or there aren't enough tags to make a scrollbar, the scroll is being promoted up to the parent panel. I'll silence this. Let me know if you have any more trouble.

>>8327
I'm glad you like it! Let me know if you run into any trouble, and once you have figured things out, I'd be interested to know what you found most easy and difficult to learn. The help docs and general onboarding are always out of date, and feedback from new users on that front is always helpful.

>>8328
I haven't got to it yet, I'm afraid. There is a shortcut on the 'global' set that forces the scanbar to show, but this will always cover up the bottom part of the video. I have the same problem with short gifs, moving my mouse over only to see it was 1.1s long anyway. For some stupid layout code reasons, it is actually a pain atm for me to support both the current hide/show and the animation bar hanging beneath the video. I was thinking, as a compromise, how about an option that says 'instead of hiding the scanbar when the mouse isn't near it, just make it 3 pixels tall'. How does that sound? Then you'd always see it if you wanted, but it wouldn't take up much space. That'd hopefully solve the problem in the meantime and give me time to fix some hellish layout code here in the background.
>>8330
Awesome, thank you. I will update the help to reference this specifically.

>>8335
Yeah, I think my next step here is to make these sorts of operations easier. You can set up a 'search everything' right now by clicking 'multiple locations' in the file domain selector and then hitting every checkmark, but it should be simpler than that. ~Maybe~ even favourite domain unions, although that seems a bit over-engineered, so I'll only do it if people actually want it. Like I have 'all local files', which is all the files on your hard disk, I need one that is all your media domains in a nice union. I also want some shortcuts so people like you will be able to hit shift+n or whatever and send a file from inbox to your parse-nsfw domain super easy. As you get into this, please let me know what works well and badly for you. All the code seems generally good, just some stupid things like a logic problem when trying to open 'delete files' on trash, so now I just need to make the UI and workflow work well.

>>8340
No, it only needs one copy of the file in storage. But internally, in the database, it now has two file lists.

>>8356
Yes, sorry! Thank you for the report. This is just an accidental logic bug that is stopping some people from opening the dialog on trash--sorry for the trouble! I can reproduce it and will fix it. If you really want to delete from trash, the global 'clear trash' button on review services still works, and if you have the advanced file deletion dialog turned on, you can also short-circuit by hitting shift+delete to undelete and then deleting again and choosing 'permanently delete'.
First of all, thank you for all your hard work HydrusDev. I have a small feature request now that we have multiple local services: for the Archive/Delete filter, there should be keyboard shortcuts for "Move/Copy to Service X" as well as "Move to Trash with reason X" and "Delete Permanently with reason X". The latter two would be nice because having to bring up the delete dialog every time is kind of clunky.
>>8361 >Is this feature to chase up links after SauceNao something on Hydrus Companion or similar? Yes, it is from Hydrus Companion, I forgot that it was a separate program since I started using it at the same time that I started using Hydrus. Now that I think about it though, just avoiding Pixiv probably isn't the best solution either, since there's plenty of content that can only be found on Pixiv. If there is a way to download the English translations of the tags, then that would mostly solve the issue, since I could then use parent/sibling tagging to align them with the other tags. I don't know how doable that would be though, so for now the best solution is probably to import a sibling tag file that changes all the Japanese pixiv tags to their English tags, assuming that someone has already made this.
>>8330 I was able to get it working by copying libmpv.so.1 and libcdio.so.18 from my old installation (still available on my old drive) to the hydrus installation folder.
I entered the duplicate filter, and after a certain point it wouldn't let me make decisions any more. I'd press the "same quality duplicate" button and it just did nothing. I exited the filter, then the client popped up a bunch of "list index out of range" errors. here's the traceback for one of them:

v483, linux, frozen
IndexError
list index out of range
File "hydrus/client/gui/ClientGUIShortcuts.py", line 1223, in eventFilter
  shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus/client/gui/ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
  command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3548, in ProcessApplicationCommand
  self._MediaAreTheSame()
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3149, in _MediaAreTheSame
  self._ProcessPair( HC.DUPLICATE_SAME_QUALITY )
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3259, in _ProcessPair
  self._ShowNextPair()
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3454, in _ShowNextPair
  self._ShowNextPair() # there are no useful decisions left in the queue, so let's reset
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3432, in _ShowNextPair
  while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):

I reentered the duplicate filter, and I got through a few more pairs before it stopped letting me continue again. It seems like it was on the same file as last time too. Could this bug have corrupted my file relationships?
>>8359 >Python script That'll help a lot, thanks! >Would you be ok pulling things from the Client API, like this? Yeah, definitely.
>>8361
a 3 pixel tall scan bar... that honestly won't be a bad option. my only concern would be the immediate visibility of it, and I'm not sure there is a good way to do that... would it be possible to have custom colors for it, both when it's small and when it's large? when it's large, that light grey with dark grey isn't a bad option, but small it would kind of be a constantly moving needle in the haystack. but if, for instance, I had the background of the smaller bar be black with a marginally thick red strip, I would only see that red strip move. this may not be a great option for everyone, but I could see various different colors for higher contrast being a good thing, especially when it's 3 pixels big.

yea, I think it's a great idea. it would make the video's length readily visible from the preview, and it would be so out of the way that nothing is massively covered up. if it's an option, would the size be changeable/user settable? it's currently 60 pixels if my counting is right, but I could see something maybe 15 or so being something I could leave permanently visible. if it can't, it doesn't matter, but if it's possible to make it an option, I think this would be a fantastic middle ground till you give it a serious pass. anyway, whatever you decide on will help no matter what path it is.
API "file_metadata" searches seem to be giving the wrong timestamp for the "time_modified" of some files, conflating it with time imported. The client itself displays the correct modified time, regardless of whether the file was imported straight from the disk or whether the metadata of a previously imported file had its source time updated after being downloaded again from a booru. Querying the API's search_files method by "modified time" does give the correct file order (presumably because the list of IDs from the client is correct), but the timestamp in the metadata is still equal to "import time". For some reason, this doesn't always happen, but unfortunately I haven't been able to determine why.
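For anyone wanting to reproduce the report above, the symptom is checkable with a small script. This is a sketch rather than anything authoritative: the endpoint and `file_ids` parameter are per the Client API docs, `time_modified` is the field name as reported, and the import times are whatever you already know from the client UI:

```python
import json
from urllib.parse import urlencode

def metadata_url(api_url: str, file_ids: list) -> str:
    """Build a GET /get_files/file_metadata URL; file_ids is a JSON-encoded list."""
    query = urlencode({"file_ids": json.dumps(file_ids)})
    return api_url + "/get_files/file_metadata?" + query

def suspect_entries(metadata: list, import_times: dict) -> list:
    """Return file_ids whose reported time_modified exactly equals the known
    import time -- the conflation described above. import_times maps
    file_id -> import timestamp as shown in the client UI."""
    return [
        m["file_id"]
        for m in metadata
        if m.get("time_modified") is not None
        and m.get("time_modified") == import_times.get(m["file_id"])
    ]
```

Fetching `metadata_url(...)` with the usual access key header and feeding the parsed `metadata` list through `suspect_entries` should narrow down which files show the bug.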
>>8367
This issue isn't just with the one pair now. It's happened with multiple pairs when trying to go through the filter, and it's not just happening when I mark them as same quality--it also happens when I mark them as alternates. I also noticed that when this bug happens, the number in the duplicate filter (the one that's like "13/56") jumps up a bunch.
I had an ok week. I fixed some bugs (including non-working trash delete, and an issue with the new duplicate filter queue auto-skipping badly), improved some quality of life, and integrated the new multi-service 'add/move file' commands into the shortcuts system and the media viewer. The release should be as normal tomorrow.

>>8367 >>8396
Thank you for this report, and sorry for the trouble! Should be fixed tomorrow, please let me know if you still have any problems with it.
Are sorting/collection improvements on the to-do list? I sometimes have to manually sort video duplicates out and being able to collect by duration/resolution ratio and sort by duration and then by resolution ratio would be extremely helpful. Sorting pages by total filesize or by smallest/largest subpage could have some uses as well, but that might be too autistic for other users.
https://www.youtube.com/watch?v=OtPsKtUyGxg

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.Linux.-.Executable.tar.gz

I had an ok week. I fixed some things, improved some quality of life, and made internal file migration a bit easier.

highlights

Last week's debut of multiple local file services went well! As far as I know, no one who tried it out had any big problems, and my main concerns--mostly that it needs some better migration tools and workflows and 'this file is in here' UI--proved true. So, I know what I have to do and will keep working. Multiple local file services remains for advanced users for now, but I hope to launch it properly for everyone, with nice help, next week.

However, while doing this work, I did accidentally break the simple version of the 'delete files' dialog when files were in the trash--rather than saying 'delete these permanently?', it just wouldn't appear. This was due to a logical oversight where it wasn't testing and counting up 'trash' status correctly. It is fixed now.

Also, there was a problem with the new duplicate filter queue for users who have done a good bit of processing. A certain function that, in complicated situations, automatically skips some pairs was failing whenever it hit the end of a batch. This is also fixed now, thank you for the great reports on this.

For multiple local file services, I updated the UI code, fixing some little bugs and improving the workflow when you have complicated situations, and I integrated the shortcuts system and the media viewer. You can now create 'add/move to service x' actions in the 'media' shortcut set, and the media viewer has the same add/move menu on right-clicks.

The media viewer has several other improvements: I think I fixed that annoying bug where a fullscreen borderless view of media that exactly fits the screen would sometimes not resize when you went back to normal window mode! Also, scrolling the mouse over the taglist hover window should no longer ever cause a 'previous/next media' event. And I have implemented a 'short and simple' version of the video/audio scanbar to show (instead of completely hiding it) when your mouse is away--just a few pixels to show things 'at a glance'. Even though it covers a few pixels of video at the bottom, I liked this so much that I set it as the default for all users. If you don't like it, you can hide it again with the new setting under options->media.

full list

- misc:
- fixed the simple delete files dialog for trashed files. due to a logical oversight, the simple version was not testing 'trashed' status and so didn't see anything to permanently delete and would immediately dump out. now it shows the option for trashed files again, and if the selection includes trash and non-trash, it shows multiple options
- fixed an error in the 'show next pair' logic of the new duplicate filter queue that occurred when it needed to auto-skip through the end of the current batch and load up the next batch (issues #1139, #1143)
- a new setting on _options->media_ now lets you set the scanbar to be small and simple instead of hidden when the mouse is moved away. I liked this so much personally it is now the default for all users. try it out!
- the media viewer's taglist hover window will now never send a mouse wheel event up to the media viewer canvas (so scrolling the tags won't accidentally do previous/next if you hit the end of the list scrollbar)
- I think I have fixed the bug where taking the media viewer from borderless fullscreen to a regular window would not trigger a media container resize if the media perfectly fitted the ratio of the fullscreen monitor!
- the system tray icon now has minimise/restore entries
- to reduce confusion, when a content parser vetoes, it now prepends the file import 'note' with 'veto: '
- the 'clear service info cache' job under _database->regenerate_ is renamed to 'service info numbers' and now has a service selector so you can, let's say, regen your miscounted 'number of files in trash' count without triggering a complete recount of every single mapping on the PTR the next time you open review services
- hydrus now recognises most (and maybe all) windows executables so it can discard them from imports confidently. a user discovered an interesting exe with embedded audio that ffmpeg was seeing as an mp3--this no longer occurs
- the 'edit string conversion step' dialog now saves a new default (which is used on 'add' events) every time you ok it. 'append extra text' is no longer the universal default!
- the 'edit tag rule' dialog in the parsing system now starts with the tag name field focused
- updated 'getting started/installing' help to talk more about mpv on Linux. the 'libgmodule' problem _seems_ to have a solid fix now, which is properly written out there. thanks to the users who figured all this out and provided feedback
- .
- multiple local file services:
- the media viewer menu now offers add/move actions just like the thumb grid
- added a new shortcut action that lets you specify add/move jobs. it is available in the media shortcut set and will work in the thumbnail grid and the media viewer
- add/move is now nicer in edge cases. files are filtered better to ensure only local media files end up in a job (e.g. if you were to try to move files out of the repository update domain using a shortcut), and 'add' commands from trashed files are naturally and silently converted to a pure undelete
- .
- boring code cleanup:
- refactored the UI side of multiple local file services add/move commands. various functions to select, filter, and question the user on actions are now pulled to a separate simple module where other parts of the UI can also access them, and there is now just one isolated pipeline for file service add/move content updates
- if a 'move' job is started without a source service and multiple services could apply, the main routine will now ask the user which to use, with a selector that shows how many files each choice will affect
- also rewrote the add/move menu population code, fixed a couple little issues, and refactored it to a module the media viewer canvas can use
- wrote a new menu builder that can place a list of items either as a single item (if the list is length 1) or as a submenu if there are more. it drives the new add/move commands and is now behind the scenes of all other service-based menu population

next week

Next week is a cleanup week, so I will do some boring code cleanup and see if I can write some nice introductory help for the multiple local file services system. I have four more weeks before my vacation, so I am aiming to have the big work of multiple local file services finished by then.
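The 'single item or submenu' rule in that last changelog entry is a tidy little pattern. A framework-agnostic sketch of the decision (the names here are hypothetical--the real code builds Qt menus, not tuples):

```python
def build_menu_entry(label, items):
    """Given a label like 'add to' and a list of (service_name, callback)
    pairs, return one flat entry when there is exactly one choice, or a
    labelled submenu when there are several. A hypothetical sketch of the
    pattern described above, not hydrus's actual API."""
    if len(items) == 1:
        name, callback = items[0]
        # one service: collapse to a single 'add to my files'-style item
        return ("item", f"{label} {name}", callback)
    # several services: nest the choices under a submenu
    return ("submenu", label, items)
```

The payoff is that a client with a single local file domain gets a one-click menu item, while a client with several domains gets a submenu, without the calling code caring which.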
>>8409 >>8377
Nice, the scan bar is far more visible than I thought it would be. I think there is the possibility that other colors may also help legibility, but for me it's just fine as is.
ok h dev, probably my last question for a while. I have so far parsed through about 5000-10000 "must be pixel dups" and I have yet to find one where I decided 'let's keep the one with the larger file size'. I have decided, at least for the function of exact dupes, I'm willing to trust the program's judgement. is there any automation in the program for these yet? from what I can see, a few of my subscriptions are generating a hell of a lot of these, and even then I had another 50000 to go through. if there is a way to just keep the smaller file and yeet the larger with the same settings I have assigned to 'this is better', this would be amazing. I don't recall if anything has been added to hydrus yet. I would never trust this for any speculative match, as I constantly get dups that require hand parsing with those, but holy shit is it mind numbing to go through pixel dups...

scratch that, when I have all files, I have 325k must-be-pixel-dups (2 million something potential dups, so this isn't a case of the program lagging behind options)
(34.93 KB 1920x1080 help.png)

Can't seem to do anything with these files. I can't delete them, and setting a job to remove missing or invalid files doesn't touch them. They don't have URLs so I can't easily redownload them either. What do?
>>8418 Note, they do have tags, sha256sums, and file IDs, but nothing else as far as I can tell. If I manage to redownload one by searching for each file manually based off the tags it appears and can be deleted. Maybe I could do some sqlite magic and remove the records via the file IDs using the command line, but I don't know how. The weird thing is how they appear in searches. They don't show up when I search only system:everything, but do show up when searching for tags that the missing file is tagged with. I tried adding a dummy tag to all of my working files and searching with -dummy, and the missing files didn't show up. If I search some tag that matches a missing file and use -dummy, the missing files that are tagged with whatever other tag I used to search do show up. Luckily all of these files had a tag in common so I can easily make a page with all of the missing files, 498 total. I can open the tag editor for these, and adding tags works but I cannot search for tags that only exist on missing files (I tried adding a 'missing file' tag, can't search it). Nothing interesting in the logs, unless I try to access one which either gives KeyError 101 or a generic missing file popup. Hydev, if you're interested in a copy of my database folder, I could remove most of the large working files and upload a copy somewhere if you want to mess with it. I'm open to trying whatever you want me to if that's more convenient though.
Got this error after updating (I definitely jumped multiple versions, not sure how many). Manually checking my files, it seems that all of them are fine. It's just that hydrus can't seem to make sense of it for some reason...? FYI my files are on a separate hdd and my hydrus installation is on an ssd. Neither is on the same drive as my OS.
>>8363
Thanks. I agree. I figured out the move/add internal application commands for 484, so they are ready to be integrated. 'Delete with reason x' will need a bit of extra work, but I can figure it out, and then I will have a think about how to integrate it into archive/delete and what the edit UI of this sort of thing looks like. Ideally, although I doubt I will have time, it would be really nice to have multiple archive/delete filters.

>>8364
Yeah, this sounds tricky. Although it is complex, I think your best bet might be to personally duplicate and then edit the redirection scripts or tag parsers involved here. You may be able to edit the hydrus pixiv parser to grab the english tags (I know we used to have this option as an alternate parser, but I guess it isn't available any more? maybe pixiv changed how this worked?), or change whatever is parsing SauceNao, although I guess that is part of Hydrus Companion.

EDIT: Actually, if your only solid problem with pixiv is you don't want its japanese tags, hit up network->downloaders->manage default tag import options, scroll down to 'pixiv file page api' and 'pixiv manga_big page', and set specific defaults there that grab no tags. Any hydrus import page that has its tag import options set to 'use the current defaults' will then default to those, and not grab any tags.

>>8366
Thank you!

>>8376
Thanks. I'll make a job to expose this data on the Client API.
>>8377 >>8413
I'm glad. I am enjoying it too in my IRL use. I thought it would be super annoying, but after a bit of use, it just blends into my view and is almost unconsciously useful.

Just FYI: The options are an ugly debug/prototype, but you can edit the scanbar colours now. Hit up install_dir/static/qss and duplicate 'default_hydrus.qss'. Then edit your duplicate so the 'qproperty' values under HydrusAnimationBar have different hex colour values. Load up the client, switch your style to your duplicated qss file, and the scanbar should change colour! If you already use a QSS style, then you'll want to copy the custom HydrusAnimationBar section to a duplicate of the QSS style file you use and edit that.

>>8379
Thank you, I will investigate this. I was actually going to try exposing all the modified timestamps on the Client API and the client, not just the aggregate value, so I will do this too, and that will help to figure out what is going on here.

>>8408
I would like to do this. It can sometimes be tricky, but that's ok--the main problem is I have a lot of really ugly UI code behind the scenes that I need to clean up before I can sanely extend these systems, and then when I extend them I will also have to update the UI to support more view types. It will come, but it will have to wait for several rounds of code cleaning all across the program before I dive properly back in here. Please keep reminding me. Sorting pages themselves should be easier. You can already do a-z name and num_files, so adding total_filesize should be ok to do. I'll make a job.

>>8417
Thanks. There is no automation yet, but this will be the first (optional) automated module I add to the duplicate filter, and I strongly expect to have it done this year. I will make sure it is configurable so you can choose to always get rid of the larger. Ideally, this will process duplicates immediately upon detection, so the client will negotiate it and actually delete the 'worse' file as soon as file imports happen.
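To make the scanbar recolouring above concrete, the HydrusAnimationBar section you would be editing looks roughly like this. The qproperty names are from memory of default_hydrus.qss, so verify them against your own copy before trusting this; the hex values are just an example high-contrast pick (black track, red nub) along the lines the anon suggested:

```qss
/* duplicated from default_hydrus.qss -- property names may differ by version, check your copy */
HydrusAnimationBar
{
    qproperty-hab_border: #ff2200;     /* outline colour */
    qproperty-hab_background: #000000; /* black track, easy to spot at 3px tall */
    qproperty-hab_nub: #ff2200;        /* red position marker */
}
```

Save the edited file under a new name in install_dir/static/qss, then pick it as your style in the client options as described above.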
>>8418 >>8428
Thanks, this is odd, but it may be completely explainable. Can you check the 'file domain' (left button) of the tag autocomplete dropdown of those search pages? Does it say 'my files' or 'all known files'?

Given you can re-download these, it sounds like these are previously deleted files. If you right-click on one and hit the top item so it expands out to all the info lines and timestamps, does it say something like 'deleted from my files 3 months ago' or similar?

I'm actually going to write about this a bit more this week as I do the multiple local file services help, but hydrus doesn't technically care if a file is in a domain or not--as long as the client has once heard of its hash, it can add tags or ratings or urls to it. This is the core of how the PTR works. If a file is in the client, then it can draw a thumbnail; otherwise, it draws that default hydrus icon and a red border. Normally you never see these 'non-local' files, since when you search you are limited to the 'my files' domain, so you filter hydrus's knowledge down to only the files you have on disk, but if your file domain on that search page is 'all known files' or another advanced search, then they may have been exposed. If you see these on 'my files' or 'trash' or 'all local files', then something is definitely going wrong.

>>8430
I am very sorry, this error means it is extremely likely that you have had some hard drive damage and your database files (on your SSD) have been damaged. Sometimes these errors are severe (hard drive dying), but often they are trivial (just a bit of extra junk data after a rough powercut). It may be that the update routine walked over a damaged area and set a flag. Your next step is to check "install_dir/db/help my db is broke.txt". This document will talk all about it and your next steps to ensure your data is safe and start recovery. Normally this error would point you to that file, but it seems to have happened at an inconvenient moment for you and the error handling isn't clever enough to figure it out. Let me know if you need any help, I'm happy to go one on one to help fix or recover from anything serious.
>>8446 Missing files anon here, it said "my files". I should have mentioned this in my first post, but I had to restore my database from a backup a while back and these first appeared then. I'm assuming they were in the database when I backed it up, but had been deleted in between making the backup and restoring it. I fucked around with file maintenance jobs and managed to fix it. It didn't work the first time because "all media files" and/or "system:everything" wasn't matching the missing files. The files did all have a tag in common that I didn't care to remove from my working files, and for some reason this tag would match the missing files when searched for. I ran the maintenance search on that tag and did the job, and now they're gone.
>>8446 >>8447 Actually, scratch that. The job was able to match the files and reported them as missing, put their sha256sums into a file in the database folder, and made them vanish from the page that had the tag searched, but refreshing it shows that they weren't actually removed and I still encounter them when searching for other tags. Not sure what to do now.
Hello. Is there a way to make sure that, when scraping tags, images that were previously deleted aren't going to be downloaded again?
Can someone help me? Since the last 3 releases hydrus has been pretty much unusable for me. After having it open for a while it ends up (not responding), and it can stay that way for hours or until I force close it. I asked on the discord but no one has replied to me (I can't complain tho, they have helped me a lot in the past). I have a pretty decent PC: R7 1700, 32GB of RAM, and I have the main files on an NVMe drive and the rest on a 4TB HDD. Please help, I haven't been able to use hydrus for almost a month.
Trying to download my Pixiv bookmarks, but every time I enter the url "https://www.pixiv.net/en/users/numbers/bookmarks/artworks" I get an error saying "The parser found nothing in the document". I'm only trying to grab public bookmarks, and I've got Hydrus Companion set up with the API key. Not sure what I'm doing wrong, unless there's some alternate URL I'm supposed to use for bookmarks.
could you change the behavior of importing siblings from a text file so that, if a pair would create a loop with siblings you already have, it asks if you want to replace the existing pairs that are part of the loop with the ones from the file? The way it works now, there's no way to replace those siblings with the ones from the file except by manually going through each one yourself, but that defeats the purpose of importing from a file. This would be an exception to the case of clicking "only add pairs, don't remove", but that's okay because the dialog window would ask you first. As it is right now, the feature is unfortunately useless for my purposes, which is a shame because I thought I'd finally found a solution for a sibling issue I've been having for a while. A real bummer.
I had a good simple week. I cleaned some code, improved some quality of life, and made multiple local file services ready for all users. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=AKgjOCuW_MU

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.Linux.-.Executable.tar.gz

I had a good week. The multiple local file services system is now ready for all users.

multiple local file services

I have written some proper help for this new system to talk about what it is and how to use it. The basic idea is you can now have more than one 'my files', which lets you compartmentalise things for privacy or workflow reasons. The help is here: https://hydrusnetwork.github.io/hydrus/advanced_multiple_local_file_services.html

All users can try this out--you no longer have to be in advanced mode--but in terms of experience level, I recommend it to people who are at least comfortable with tag siblings and parents.

This system is fundamentally feature complete. The outstanding immediate problems are that file location doesn't show up in the UI very well yet, the Client API should plug into it better, and it needs some en masse controls to do large file migrations and client merging. I hope to work on these in the coming weeks. If you give this a go, let me know what you think!

full list

- multiple local file services:
- multiple local file services are now available for everyone! you no longer need to be in advanced mode to create them. 
all are welcome, but in terms of skill level, I most recommend it for users who are comfortable with tag siblings and parents
- the tl;dr: you can now have more than one 'my files', which lets you put things in isolated locations
- I wrote a proper help document on multiple local file services--what they are, how they work, my recommendations, and a bit of extra info about hydrus file search in general, right here: https://hydrusnetwork.github.io/hydrus/advanced_multiple_local_file_services.html
- file searches in 'multiple locations' on large clients are now massively faster in almost all situations. the only place multiple location searches are still slow is whenever the duplicates system (system:file relationships) comes into play
- .
- misc:
- in the page tab menu, you can now sort pages by total file size
- the 'force system:limit for all searches' option is moved from the 'speed and memory' panel to the 'search' panel
- when files download from sites, if the raw file is served by cloudflare and has a timestamp radically different to a parsed source time, that CF timestamp is saved under a different domain rather than overwriting the original domain timestamp. this seemed to affect danbooru on about 1 in 10-20 files. note this does not change much at the moment, but when you can see and sort on individual domain modified dates, this should improve the sort
- updated the 'installing' help to talk about bad install locations for the database. network locations are bad, and thanks to user reports, we now know USB drives can be bad if the database is busy when the OS goes to sleep
- if a 'database is malformed' error occurs on boot, the client now recognises it and points the user to 'install_dir/db/help my db is broke.txt' for the next steps
- . 
- boring code cleanup:
- another 60KB or so of code pulled out of ClientDB.py:
- created a new database module for url mappings and refactored various fetch and update routines to it
- created a new database module for some rich file metadata and refactored some file filtering, history, and status testing code to it
- created a new database module for file searching and moved all tag-based file searching code to it
- moved several other misc methods down to database modules

next week

I am behind on my github bug reports and lots of other small work, so I will chip away at these. Thanks everyone!
I'm pretty new to using this but, is there a way to tag a media with a gang of niggers tag without including its parent tags?
I'm looking to use an android app (or equivalent) that lets me manage (archive/delete) my collection hosted on a computer within a local network, so say if I had no internet I could still use it. Is this a thing? Is there a program that will do this? The available apps out there are a bit confusing as to what their limitations or features are.
Is it possible to download pics from Yandex Images with Hydrus, or can someone suggest a good program that can? Thanks.
is there a setting to make it so hydrus adds filenames as tags by default, such as when importing local files?
>>8453 Isn't that the default behavior of downloaders? Make sure "exclude previously deleted files" is checked. Or are you trying to add tags to files you've already deleted without redownloading them? I don't know if you can do that. >>8468 If you want to give something a tag without including its parent tags, it sounds like that tag shouldn't have those parent tags in the first place. >>8487 Import folders can do that. You can just have a folder somewhere that you can dump files in, and you can set hydrus to periodically check it and do things like add the filename or directory as tags.
Is there a way to download tags and other things from a parser even if the parser can't find a file to download? There are a bunch of images on e621 that I downloaded a long time ago, but I didn't download the tags. Since then the artist has had almost all their images taken off of e621. Even though the images have been taken down, the tags are still there. Example: https://e621.net/posts/1292060 The images have the e621 url in their known urls, but if I try to download the url with hydrus it just says that it can't find anything to download. Even if "force file downloading even if url recognised" is unchecked, it won't add the tags to the file already in the db. Maybe this could be a file import option. Call it "if post url recognised, ignore failure to find file" or something.
>>8446 The cloning process seems to have worked in the sense that the integrity checks now pass. However now I get this message when I boot up hydrus. Is it safe to proceed or am I in deeper shit?
>>8447 >>8452
Thank you, this is odd. It feels like your different file services have somehow become desynced, so 'my files' has a different file list to 'all local files'. Like with 'all media files' not grabbing the orphan file records. If you make sure help->advanced mode is on, and then change the file domain from 'my files' to 'all local files', do the ghost files still show up? If not, that suggests yes, there is a desync here. There is a special command for this, but it is old and I don't know how well it works in the new multiple local file service era. Please make a backup before you try this, in case it goes wrong. Then give database->db maintenance->clear orphan file records a go. It should give you some info.

>>8453 >>8488
Yeah, this is default. The option is under the file import options button of any downloader. Defaults for these options are under options->importing.

>>8454
When you run the program, can you check your install_dir/db folder for me? Do the different temporary .db-wal files grow very large, like 800MB+? I am chasing down a bug related to this that sounds a bit like your problem. Otherwise, please bear with the lag for a bit and hit up help->debug->profiling. There is a 'what is this?' menu entry there that explains how it works. Pastebin or email me the profile log and I will see what is running so slow for you. Quick things to try: 1) if you have hundreds of pages or hundreds of download queries, reduce the size of your session. 2) pause tag sibling/parent background sync maintenance under tags->sibling/parent sync.
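If you want to eyeball those .db-wal journal sizes without digging through the folder by hand, a couple of lines of Python will do it (the "db" path below is an example; point it at your own install_dir/db):

```python
import glob
import os

def wal_sizes_mb(db_dir):
    """Map each SQLite write-ahead log in db_dir to its size in whole MB."""
    return {os.path.basename(p): os.path.getsize(p) // (1024 * 1024)
            for p in glob.glob(os.path.join(db_dir, "*.db-wal"))}

# example: point this at your own install_dir/db
for name, mb in sorted(wal_sizes_mb("db").items()):
    print(name, mb, "MB")
```

Anything in the hundreds of MB or more while the client is idle is worth reporting.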
>>8455
I am not a pixiv user IRL so I can't talk too intelligently, but hydrus is only set up to parse certain URLs. Typically that is stuff like an artist's gallery homepage, like this: https://www.pixiv.net/en/users/67138065 That URL you posted, is that your favourites on Pixiv? Hydrus would have to be taught how to parse your favourites, which I don't think it does by default. The community repository has some downloaders here that look good: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Pixiv So, if you download that newer bookmarks png: https://raw.githubusercontent.com/CuddleBear92/Hydrus-Presets-and-Scripts/master/Downloaders/Pixiv/pixiv%20bookmarks%20-%202020-11-23.png and import it via network->downloaders->import downloaders (drag and drop it on Lain), maybe it will work? Sorry I can't help more.

>>8460
Sure, thank you. I'll figure out some yes/no dialogs to change the import behaviour to a sort of 'overwrite'.

>>8468 >>8488
Yeah, parents are not optional. They are supposed to apply to definitional relationships, like a 'car' is always a 'vehicle'. If you really hate the parents that, say, the PTR gives, you can change what applies where under tags->manage where tag siblings and parents apply.
>>8475
They are mostly under development right now. Some are better than others. Actual 'management' is limited, mostly they do read-only search atm, but the tools will expand in future. I assume you have been here to see the list, but if not: https://hydrusnetwork.github.io/hydrus/client_api.html#browsers_and_tools_created_by_hydrus_users Hydrus Web is your best bet if you are looking for a booru-style interface. Normally you use a site to load the interface, but if you want a local network solution, you can spin up a Docker instance, if you have that support. An alternative--this sounds stupid, but I know a few guys who do it to great effect--is to just run a VNC app through your tablet, maybe with a hotkey overlay set up for your hydrus shortcuts, and then just tap to go through your archive/delete filter on the couch. Since you are on a local network, you have all the bandwidth you need for smooth VNC.

>>8476
Not by default, and I'm afraid I don't see a user-made downloader at the community repository here: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders I am mostly confident that hydrus could be taught to download from yandex images, but actually learning how to do that takes some time. You might like to play with hydrus's 'simple downloader'; maybe one of the default formulae in there can grab image links or something and get what you want. Or, if you are very nice to a hydrus user who knows how to make downloaders, you might be able to get them to make one for you.

>>8489
There might be a way to bodge this, like if you grabbed the hash in the parser maybe and hydrus never noticed it was missing a direct file URL, but I think there are too many weird hurdles to overcome and it would just fail somewhere. 
It is a long term project of the program to have efficient hash->tag lookup maintenance, so I do plan to have official support for this some time in the future with a future iteration of the whole parsing and lookup and maintenance systems. For now, your best bet is the Client API. Grab whatever kind of hash and tag info you like in your own script, and then throw it at the Client API. https://hydrusnetwork.github.io/hydrus/client_api.html

>>8490
This looks good to me! The clone has removed all the damaged data, which seems to include some tag count tables. This is good news, because all this data can be regenerated, and it even seems that I wrote some special repair code to fix it automatically. With luck, the worst damage here is the annoyance of waiting for things to fix themselves. Click ok, let it do its work, and have a browse around. There may be more warning popup windows like this. Other data may be missing (e.g. not a whole missing table, which is easy to spot, but a table now missing half its contents from the clone), but if it is all limited to client.caches.db, you are in luck, because all that can be regenerated. Let me know if you notice any whack counts or bad searches once you are working again and I can help you figure out which of the guys under database->regenerate you should run. (NOTE: do not run any of those unless you know you need them; some of them take ages.)
>>8491 It seems I already had "all local files" on, but changing it back to just "my files" seems to have no effect. I tried "clear orphan file records" and it nearly instantly completed without finding any.
>>8493 >For now, your best bet is the Client API Managed to figure it out, thanks. I used gallery-dl to download the metadata for all the files, gathered the md5 and tags from the metadata, searched up the md5 in the API and got the sha256, then added the tags to the sha256.
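For anyone wanting to replicate that md5 -> sha256 -> add tags round trip, here is a rough sketch against the Client API. Heavy caveats: the `system:hash=... md5` predicate string, the endpoint parameter encodings, and the `service_names_to_tags` body shape are my best recollection of the API of this era, not confirmed by this thread, so verify everything against https://hydrusnetwork.github.io/hydrus/client_api.html before relying on it. The access key is a placeholder.

```python
import json
import urllib.parse
import urllib.request

API = "http://127.0.0.1:45869"       # default Client API address/port
ACCESS_KEY = "YOUR_ACCESS_KEY_HERE"  # placeholder; get yours from services->review services

def md5_predicate(md5_hex):
    # assumption: hydrus accepts a system predicate for non-sha256 hash search
    return "system:hash={} md5".format(md5_hex)

def api_get(path, params):
    url = "{}/{}?{}".format(API, path, urllib.parse.urlencode(params))
    req = urllib.request.Request(
        url, headers={"Hydrus-Client-API-Access-Key": ACCESS_KEY})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def sha256_for_md5(md5_hex):
    # search by the md5 predicate, then read the sha256 out of the metadata
    found = api_get("get_files/search_files",
                    {"tags": json.dumps([md5_predicate(md5_hex)])})
    if not found["file_ids"]:
        return None
    meta = api_get("get_files/file_metadata",
                   {"file_ids": json.dumps(found["file_ids"][:1])})
    return meta["metadata"][0]["hash"]

def add_tags(sha256_hex, tags, service="my tags"):
    # assumption: the era-appropriate 'service_names_to_tags' body shape
    body = json.dumps({"hash": sha256_hex,
                       "service_names_to_tags": {service: tags}}).encode()
    req = urllib.request.Request(
        "{}/add_tags/add_tags".format(API), data=body,
        headers={"Hydrus-Client-API-Access-Key": ACCESS_KEY,
                 "Content-Type": "application/json"})
    urllib.request.urlopen(req).close()
```

Feed it the md5s and tag lists you pulled out of the gallery-dl metadata, one file at a time.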
Hi, I didn't use Hydrus (Linux version) for three months, and after updating to the latest version I noticed the following: when you start a selection in the file manager (e.g. press shift and repeatedly press → to select multiple files), the image preview freezes at the start of the selection, but the tag list reflects your movements. The old behavior was that both the preview and tag list changed synchronously.
>>8475 >>8493 Okay, thanks for the response. When the development is finished, I assume there will be an announcement. I had considered the VNC option. I'm not sure who's developing the app, whether it's you or someone else, but do you know if it will be a remote control of hydrus on a host computer, a kind of port of the existing hydrus, or have the functionality of both? I'm also curious about an approximate timeframe.
>>8455 I got it to work via a URL like this through the Hydrus url import page: https://www.pixiv.net/ajax/user/YOURPIXIVID/illusts/bookmarks?lang=en&limit=48&offset=96&rest=show&tag= I didn't try to change the limit key (was afraid of a ban), so the whole process was page by page, increasing the offset by 48 with every URL entered.
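If you're incrementing the offset by hand, the arithmetic is easy to script. A throwaway generator for the page URLs to paste into the url import page; the endpoint shape is copied from the post above, and the page size of 48 plus the example user id and bookmark count are just illustrative:

```python
def bookmark_page_urls(user_id, total_bookmarks, page_size=48):
    """Build the ajax bookmark URLs, one per page, stepping the offset."""
    base = ("https://www.pixiv.net/ajax/user/{uid}/illusts/bookmarks"
            "?lang=en&limit={limit}&offset={offset}&rest=show&tag=")
    return [base.format(uid=user_id, limit=page_size, offset=offset)
            for offset in range(0, total_bookmarks, page_size)]

# example: a hypothetical user with 144 public bookmarks -> 3 page URLs
for url in bookmark_page_urls(67138065, 144):
    print(url)
```

You can then paste the whole list into the url import page in one go instead of editing the offset each time.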
>>8505 Update: Hydrus finally booted, thank god, however it's completely empty. All the files are still on my HDD, I checked; hydrus just seems to have forgotten about them. I suspect it might have also forgotten pretty much all my other settings as well, such as my thumbnail and file drive locations (thumbnails on ssd, files on hdd, originally, as suggested).
>>8515 Would I be able to do a "restore from a database backup", select my old, now seemingly "unlinked"/"forgotten" db, and proceed?
The release will only be recommended for advanced users! Regular users please check back next week.

I had a great week. I fixed several things, improved some quality of life, and added a new service to the database to make managing multiple local file services a bit easier. The release should be as normal tomorrow.

>>8505 >>8515 >>8516
Damn, this is not good. Your options structure has, yeah, been damaged, which means that client.db was affected too. Did you get lots and lots more 'this module was missing some tables' warnings?

If your client sees no files, then it sounds like your core file table was damaged as well. This sounds stupid, but please check file->open->database location to make sure the client is pointing at the right location. In the off-chance that somehow your db folder has been set to read-only due to drive damage, it might redirect to a different location and would appear to be a brand new database. EDIT: There is an odd thing here I can't explain--your options structure was destroyed, and presumably the database made a fresh one. If this is true, it should not have a database backup location stored.

If you made a backup previously, I think hitting 'restore from a database backup' is the correct answer here. Since everything is very damaged, I would not do this in the client, but externally, and make sure you keep everything. Something like this:

- Go to install_dir/db
- Move the damaged client*.db files somewhere safe.
- Go to your db_backup folder (this used to be something like install_dir/db/db_backup, but it could be somewhere else. Search your system for "client.master.db" if you aren't sure where)
- Copy the client*.db files from the backup folder to your install db folder.
- Try to boot

Make sure you don't delete anything, and make sure your temporary folders are labelled so you don't lose track of anything. I am not sure what has happened. 
You seem to have had some really bad database damage, and this may ultimately need some more focused back and forth. Let me know how you get on, and if you like, please email me or DM me on discord and we can get into it more closely. You've been reading 'help my db is broke.txt', but I'll just reiterate--please make sure your SSD is healthy, in case there are ongoing read errors here or something.
>>8518 Alright, lemme give just a little more context on the current state of things then. This is how my setup [b]used[/b] to be set up:

client.exe in (SSD) E:\Hydrus Network\
thumbnails in (SSD) E:\Hydrus Network\thumbnails\
files in (HDD) F:\Hydrus Network Files\files\ (from f00 to fff)

After this whole fuckery happened, I manually checked and all files remain in their place and continue to be fully intact, viewable from the file explorer, and able to be opened without a fuss. Coming home from work I checked, and it seems my suspicions were right. All my settings were reset to default, including the default file locations, so for example were I to save a picture from 8chan it would by default put it in:

E:\Hydrus Network\db\client_files\

There are currently no files actually saved in this location. It's empty. To clarify, I didn't "create a backup" before this, but since my previous files in (F:) still remain there completely fine and viewable, I was wondering if I could simply instruct hydrus to "look here for pictures", basically. At this point I don't care about tags, watches, and all that stuff; I'm just glad my files are safe and I want to get hydrus back into a shape where it's usable for me.
>>8520 PS.: It's as if hydrus had uninstalled then reinstalled itself. Quite bizarre...
>>8520 >>8521 Yeah, this is very odd. If you had not posted about the malformed errors and the problem loading the serialised options object, I would have guessed that your database files had been accidentally deleted. If the client boots with no 'client.db' file in the db directory, it assumes this is first start and creates a fresh one. That would give the symptoms of resetting your file locations back to install_dir/db/client_files. I am sorry to say I think your client.db probably was eviscerated in some way, almost certainly a very bad hard drive event, or something external--like a crazy anti-virus program, or it might be a cloud backup process--removed or broke the file. In any case, I am sad to say I think your best bet is to move everything in your 'db' folder to a safe location and start again. The current database is either damaged or strange and can't really be trusted going forward. Make a new database and import the files in F:\Hydrus Network Files\files\ in batches. You can't go 'just look here and get the files', unfortunately, but you can import them manually no problem. If there are things like inbox status you want to try to save from the old database, I can help with that, but it will require some time and complicated manual SQL to do. Let me know what you miss. This situation sucks, but if your files are safe, that's great. Once you are feeling better about your situation, please check out how to maintain a backup of your client: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up
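The backing-up help linked above is the proper guide; purely as a minimal illustration of the idea, a script like this copies the client*.db files to a backup folder (the client must be fully closed first, and a real backup should also cover your thumbnails and client_files directories; paths here are hypothetical):

```python
import os
import shutil

def backup_db(db_dir, backup_dir):
    """Copy the client*.db files out of db_dir. Run only while hydrus is closed."""
    os.makedirs(backup_dir, exist_ok=True)
    copied = []
    for name in os.listdir(db_dir):
        # the four database files are client.db, client.caches.db,
        # client.mappings.db and client.master.db
        if name.startswith("client") and name.endswith(".db"):
            shutil.copy2(os.path.join(db_dir, name),
                         os.path.join(backup_dir, name))
            copied.append(name)
    return copied

# example (hypothetical paths):
# backup_db(r"E:\Hydrus Network\db", r"G:\hydrus_db_backup")
```

A dedicated sync tool (or the client's own internal backup) is still the better long-term answer; this just shows how little is actually involved in snapshotting the database files.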
https://www.youtube.com/watch?v=ZUrcYKghr-Y

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.Linux.-.Executable.tar.gz

I had a great week working on a variety of smaller issues and some important database updates. The release this week is only recommended for advanced users. I make an important change, and I want to make sure the update works quickly and without problems before I roll it out to everyone. If you are not an advanced user, please check back in next week! The update will also take a few minutes this week.

all my files

So, I have made a new virtual service, 'all my files', which covers the union of all your local file services. This service is very similar to 'all local files', but it does not include trash or repository files. It provides a bunch of tools across the program for quick and precise searching of all the files that have value and are worth looking at.

When you update, this new service will be created and populated. It will take a few minutes, longer if you have millions of files and tags. My 2.8-million-file ptr-syncing client took 32 minutes. There are progress updates on the splash window. Once you are booted, you will see 'all my files' in review services and the file domain selector if you have more than one local file domain. Feel free to play around with it--it will run a lot faster than previously going 'multiple locations' and unioning all your local file services.

The code is working really well on my end, and I am not afraid of anything being damaged, but if something goes wrong, it may require some clever/slow regeneration to fix. 
The main things I would like to know are:

1) Did your update take significantly longer than ~100k files/minute? Did it get held up on anything?
2) After some use, have you noticed any file/tag miscounting with 'all my files'?

As always, make a backup before you update.

other highlights

The 'media viewers' shortcut set has three new zoom actions: 'switch between 100% and max', 'switch between canvas and max', and 'zoom to max'.

When you enter pairs in the tag sibling dialog, it shouldn't complain about loops anymore, but instead automatically break them, just like how it will auto-petition an A->B, A->C conflict.

The database now cleans up after itself more thoroughly. Some users have been having trouble with very large 'WAL' files, some getting to be multiple GB, and perhaps seeing bloated memory use along with it. A set of new maintenance routines now force write-flushing at regular intervals. In my testing, there is no lag related to this, but I will be interested to hear if anyone gets new commit hang-ups during very heavy work. If you have had a huge WAL, let me know if this helps!

full list

- This week's release is for advanced users only! I make a big change, and I want to make sure the update is fast and there are no unusual problems before rolling it out to all users.
- all my files:
- the client adds a new virtual file service this week, 'all my files', which is an umbrella covering all your local file domains. if you do not engage the multiple local file services system, you won't see it much, but if you do, you'll now have a convenient tool for saying 'all my stuff' without including trash and repository updates
- it will take a minute or two to generate this new service on update. if you have a client with millions of files, it may take a while
- 'all my files' now appears in the file domain selector button on your tag entry box if you have more than one local file domain. 
selecting this searches the union of all your local file domains with fast and precise count (as opposed to 'multiple locations' of the full union, which will have imprecise counts and be slower). it also does duplicate file work laser-fast (again, unlike 'multiple locations', which is often slow due to UNION complexity)
- 'all my files' also appears in review and manage services, very similarly to 'all local files'
- a heap of hacks I instituted when getting multiple local file services ready are now replaced with this clean 'yeah this file is valued and worth looking at' domain. for instance, downloader pages now view files in this way.
- mr bones and the file history chart also use 'all my files', and are significantly faster to calculate. the chart also excludes repo update files and trash now
- calls to delete or undelete on 'all my files' (this is mostly Client API and some 'default' situations) will be converted to a blanket 'force send to trash' and 'force undelete all deleted records'
- the 'undelete files?' dialog is now a button selection dialog. it also now has an 'all the above' option when more than one local service may apply, which tells the client to undelete to all services the files have been deleted from
- updated multiple local file services help to talk a little about the new domain
- rearranged the sort in a couple of places where the different local file services appear. they should now be: local file domains, all my files, trash, repo updates, all local files
- ADVANCED: the 'presentation import options' under 'file import options' now allows a full-fledged location context using the new multiple local file services system rather than the previous 'in your files (and trash too)' choice. it defaults to the new 'all my files' domain
- misc:
- thanks to a user, the 'getting started with downloading' help has had a full pass. if you have had trouble with downloaders, particularly if you are unsure about what file import options are for, or what subscriptions are, please check it out!
- the 'media viewers' shortcut set gets three new zoom actions: 'switch between 100% and max', 'switch between canvas and max', and 'zoom to max' (issue #1141)
- if a media type is set to do 'exact zooms', it will now not exceed the otherwise specified max zoom
- the file sort widget will now preserve ascending/descending status on sort type changes (rather than resetting to default) if the asc/desc strings do not change. so, if you are on 'import time'/'oldest first', and switch to 'archive time', it will now stay on 'oldest' rather than resetting to 'newest'
- the manage tag siblings dialog now tries to automatically break loops for you, just like it will automatically break A->B, A->C conflicts. this works on manual entry or mass import
- the manage tag siblings dialog now shows the stated 'reason' for any pair change (e.g. "AUTO-PETITION TO BREAK LOOP") in the 'note' column
- the 'short' animation scanbar--when your mouse is away--now keeps a short disabled volume button beside it. I found it very annoying how the scan nub would jump a few pixels left/right as this popped up and down, so now it is the same width big and small
- right-clicking on files when in pages with 'multiple locations' file domains is now much much faster
- the filename tagging dialog now starts with the 'tags for all' focused, and the 'press up/down on empty input' shortcuts are now plugged in, so pressing up/down will change service
- I believe I may have completely eliminated the additional superlag that sometimes occurs when adding or deleting a service. 
it was a database maintenance routine getting carried away with other outstanding work
- move/add actions in the new multiple local file system now operate asynchronously and politely, spreading their work time out when the client is busy, and for large jobs they will also make a cancellable progress popup
- cleaned up how the autocomplete entry sends some of its signals to other parts of the program
- did some misc help and code edits/refactoring, including brushing up the Windows install section with more advanced options
- removed the 'hydrus zooms big bad' warning from the 'media' options page. hydrus zooms big good now!
- .
- some database stuff:
- tl;dr: database cleans up after itself better now
- some users have had trouble with database journal files (the 'wal' files in your db directory) on certain clients getting huge after lots of work, multiple GB, and causing the OS a headache if the journal is doing work through a computer sleep. these journals are 'supposed' to checkpoint and clean themselves up naturally, but I think a busy database chokes them. therefore, I have improved the hydrus maintenance this week: 1) the 'journal size limit' PRAGMA, which applies softly after every 30 seconds or so, is now 128MB, down from 1GB. 2) databases in PERSIST (rare) mode will now specifically zero out their journal every fifteen minutes. 3) databases in WAL mode (the default), in addition to regular PASSIVE checkpointing every five minutes, will now force an additional TRUNCATE checkpoint every fifteen. this should force a regular full flush and maybe help some other problems, like the gigantic memory bloat the same users sometimes saw. if you are a very advanced user and do active debug on the database while hydrus is using it, please note this new TRUNCATE command is aggressive and may block itself or you inconveniently. let me know how you get on!
- moved the recent 'be careful of usb drives' section in 'installing' help to 'help my db is broke.txt'. 
it is very likely this problem was related to the above WAL stuff, and it was not just usb drives. I rewrote it as generalised help for anyone who gets 'delayed write failed' errors at the OS level
- massively optimised several critical duplicate files filtering methods if the current location context has more than one file domain, and I think I cleared out the basic 'get duplicate info for this file' call of all slow calls in complex location contexts
- the repair routine that regenerates mapping caches if any tables are missing on boot is now more reliable and covers the entirety of the mappings cache system using the new modules system. it also now regenerates just for the tag services with missing tables, not the whole cache
- if multiple types of mapping cache tables are missing on boot, and multiple waves of regenerations covering different areas are planned, duplicate regenerations will now be skipped

next week

Beyond some more multiple local file services work--probably client api updates--next week is a 'medium size' job week. I want to plough some time into better en masse import/export tools for tags and other metadata. I'm not sure how far I will get, but I want a framework sketched out so I can start hanging things off it in future.
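For the curious, the journal housekeeping described in that changelog maps onto standard SQLite pragmas. A minimal sketch of the same idea on a throwaway database (do not run this against a live client.db; the exact intervals and the 128MB value are hydrus's choices as stated above, everything else here is just illustrative):

```python
import sqlite3

# throwaway example file, not a real hydrus database
con = sqlite3.connect("example.db")
con.execute("PRAGMA journal_mode = WAL")                      # hydrus's default mode
con.execute("PRAGMA journal_size_limit = %d" % (128 * 1024 * 1024))
con.execute("CREATE TABLE IF NOT EXISTS t ( x INTEGER )")
con.commit()

# PASSIVE merges WAL frames back when convenient; TRUNCATE forces a full
# flush and zeroes the -wal file, which is the new aggressive step
con.execute("PRAGMA wal_checkpoint(PASSIVE)")
busy, log_frames, ckpt_frames = con.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
con.close()
```

After the TRUNCATE checkpoint succeeds (busy == 0), the -wal file on disk is zero bytes, which is exactly the "WAL no longer grows without bound" behaviour the release notes are after.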
Can Hydrus get support for WavPack (.wv) audio files, even just for storing, not playback? That would be a good addition to the already available .flac and .tta.
Down the line this will probably be obsolete, but before then it will help quite a bit. With duplicates, when they are pixel matches, is there a way to set the lower file size one to be green and the bigger one to be red? It's already this way with jpeg vs png pairs, but same-filetype pairs just have both blue, and with pixel duplicates there would never be a reason to choose the larger file size. I want the duplicate deciding process to be as speedy as possible, at least with these exact duplicates, and I have been watching the numbers while doing this. However (and this may be my monitor), unless I'm staring straight at the numbers, they kind of blend, making 56890 all look alike, requiring me to sit up and look at the screen straight on. I think if the lower number were green on exact dupes, it would speed the process up significantly, at least until an auto discard for exact dupes (hopefully one that takes the smaller file size as the better of the pair) gets implemented and we no longer have to deal with them. I don't know if this would be simple to implement, but if it is, it would be much appreciated.
I'm trying to download a thread from archived.moe and archiveofsins.com, but it keeps giving errors with a watcher and keeps failing with a simple downloader. It seems like manually clicking on the page somehow redirects to a different link than when hydrus does it.
>>8158 >In terms of metadata, hydrus keeps all other metadata it knows about the file. If there is no URL data, (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK? Also, what does telling Hydrus to forget previously deleted files actually remove if it still keeps the files' hashes? I don't feel comfortable (or desperate) enough to use the method you gave, but I also don't want to go through the trouble of exporting all my files, deleting the database, reinstalling Hydrus, and then importing and tagging the files all over again.
My autocompleted tag list displays proper tag counts, but when I search them I get dramatically fewer images. I can still find these images in the database through system:* searches and they're still properly tagged. My tag siblings and parents aren't working for some tags either. But all the database integrity checks say everything is okay. What's my next step?
Still getting some errors in the duplicate filter, I think it has something to do with when I'm choosing to delete images:

v485, win32, frozen
IndexError
list index out of range
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1223, in eventFilter
shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3598, in ProcessApplicationCommand
command_processed = CanvasWithHovers.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2776, in ProcessApplicationCommand
command_processed = CanvasWithDetails.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1581, in ProcessApplicationCommand
self._Delete()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2928, in _Delete
self._SkipPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3488, in _SkipPair
self._ShowNextPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3442, in _ShowNextPair
while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):
>>8494 I have had a report from another user about a situation a bit similar to yours, related to the file service that holds repository update files. I am going to investigate it this week, so please check the changelog for 487. I can't promise anything, but I may discover a bug where some files aren't being cleanly removed from services at times, and have a fix. >>8496 Yes, hit up options->gui pages and check the new preview-click focus options. Note that shift-click is a bit more clever now, too--if you go backwards, you can 'rewind' the selection. >>8499 Yeah, I like to highlight neat new apps in the release posts or changelogs. I do not make any of the apps, but I am thinking of integrating 'do stuff with this other client' tech into the client itself, so you'll be able to browse a rich central client with a dumb thin local client. I can't promise a timeframe--for me, it'll always be long. I'm expecting my 'big' jobs for the next 12-18 months to be a mix of server improvements, smart file relationships, and probably a downloader object overhaul. I'll keep working on Client API improvements in that time in my small work, and I know the App guys are still working, so I just expect the current betas to get better and better over time, a bit like Hydrus, with no real official launch. Checking in again on the links in the Client API help page in 4-6 months is probably a good strategy.
>>8530 Sure, just point me to some example files (or send me some) and I'll see if it is easy to recognise them. >>8545 Yes, I want to write some special rules that you can customise for pixel dupes. Some users always want the bigger file, some the smaller, so I'm planning to make the current weights you see in options->duplicates a bit richer, and probably add some '- unless they are pixel dupes, in which case use [ 123 ] / [ ] do not care if pixel dupes' side options. >>8546 Can you paste any of the errors, so I can see more information? They should be in the 'note' column of the search/file log on the downloader page, and you can copy them with the right-click menu. I don't know much about those sites, but if they have complicated redirects or login requirements, or Cloudflare rules, maybe to stop spiders, the situation may be more tricky than the simple downloader can handle. If it is a login situation (i.e. lots of cloudflare problems or 403/401 errors), then maybe Hydrus Companion's ability to copy your browser's login cookies to hydrus via the Client API may help. https://gitgud.io/prkc/hydrus-companion
>>8547 >If there is no URL data, (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK? It depends on what 'OK' means, I think. If you want to remove the hash record, sure, you can delete it if you like, but you might give yourself an error in two years when some maintenance routine scans all your stuff for integrity or something. Renaming the hash to a random value would be better. Unfortunately I just don't have a scanning routine in place yet to categorise every possible reference to every hash_id in your database, to automatically determine when it is ok to remove a hash, and then to extend that to enable a complete 'ok now delete every possible connection so we can wipe the hash' command. Telling hydrus to remove a deletion record only refers to the particular file domain the file was deleted from. It might still be present in other places, and other services, like the PTR, may still have tags for it. It basically goes to the place in the database where it says 'this file was deleted from my files ten days ago' and removes that row. If you really really need this record removed, please don't rebuild your whole client. Make a backup (which means making a copy of your database), then copy/paste my routine into the sqlite terminal exactly, then try booting the client. If all your files are fucked, revert to the backup, but if everything seems good, then it all went correctly. Having a backup means you can try something weird and not worry so much about it going wrong. More info here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up
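For reference, a backup here really is just a copy of the client database files. A minimal shell sketch (the four file names are the standard hydrus ones; the function and its directory arguments are placeholders, not a hydrus tool):

```shell
# Copy the four hydrus client database files into a backup directory.
# db_dir and backup_dir are placeholders -- point them at your own folders,
# and run this while the client is closed.
backup_hydrus_db() {
    db_dir="$1"
    backup_dir="$2"
    mkdir -p "$backup_dir" || return 1
    for f in client.db client.caches.db client.mappings.db client.master.db; do
        cp "$db_dir/$f" "$backup_dir/" || return 1
    done
}
```

If anything goes wrong with the sqlite surgery, copying those four files back over restores the old state.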
>>8553 The nuclear way to fix this sort of problem, if it is a miscounting situation, is database->regenerate->tag storage mappings cache (all, deferred...). If the bad tag counts here are on the PTR, this operation could take several hours unfortunately. If the tags are just your 'my tags' or similar, it should only be a couple of minutes. Once done, you'll have to wait for some period for your siblings and parents to recalculate in idle time. But even if that fixes it, it does not explain why you got the miscount in the first place. I think my recommendation is see if you can find a miscounted tag which is on your 'my tags' and not on the PTR in any significant amount. A 'my favourites' kind of tag, if you have one. Then regen the storage cache for that service quickly and see if the count is fixed after a restart. If it is, it is worth putting the time into the PTR too. If it doesn't fix the count, let me know and we can drill more into what is actually wrong here. >>8555 Damn, thank you, I will look into this.
>>8565 This seems to have fixed it, thank you! However, it's left quite a few unknown tags. I guess those tags were broken, which was the problem in both my counts and parent/siblings. Is there any way to restore those "unknown tag" namespaced tags, or is it better to just try to replace them one by one?
(739.29 KB output.zip)

>>8563 Here are some samples of WavPack from the web: https://telparia.com/fileFormatSamples/audio/wavPack/ Just in case, I also attached a short random laugh compressed with a recent release of the encoder on Linux. The format seems to have the magic number "wvpk", as stated on Wikipedia and in the github repo: https://github.com/dbry/WavPack/blob/master/doc/WavPack5FileFormat.pdf
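If it helps, a quick way to check a file against that magic number (a rough sketch, assuming the 'wvpk' signature sits at offset 0 as in the linked spec; not hydrus's actual mime-sniffing code):

```python
def looks_like_wavpack(path: str) -> bool:
    """Return True if the file starts with the WavPack block signature 'wvpk'."""
    with open(path, 'rb') as f:
        return f.read(4) == b'wvpk'
```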
Will it be possible at some point to edit hydrus images without needing to import them as brand new images? It's annoying opening an image in an external editor, making the edit, saving it, importing the new image, transferring all the tags back onto it, and then deleting the old version, when all I'm usually doing is cropping part of it.
I had an ok week. I didn't have time to get to the big things I wanted, but I cleared a variety of small bug fixes and quality of life improvements. The release should be as normal tomorrow.
>>8555 Happens to me when I choose to delete one or both pictures of the last pair presented. The picture that should have been deleted stays on screen and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index out of range" or "DataMissing". I believe cloning the database with the sqlite program clears the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work.
How long until duplicates are shown properly? Also, is sorting by transitive duplicates (files which aren't flagged as possible duplicates themselves but have duplicates in common) on the to-do list?
https://www.youtube.com/watch?v=VKuGYKkH3oA windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.Linux.-.Executable.tar.gz I had an ok week. I was unexpectedly short on time, so I couldn't get everything I wanted done, but I cleared out some small work. highlights The big update last week, which I recommended only for advanced users, went well. There don't seem to be any obvious problems with the logic of the new search cache, so I now recommend it for everyone. You will be presented with a popup just before the update runs, giving you an estimate of how long it thinks it will take. Most users should take 5-10 minutes, but if you have millions of files, it will be longer. Just let it run and some things will run a bit faster and neater in the background. If you have played with 'multiple local file services', then check out the new 'all my files' domain you will see--this is basically an efficient umbrella of all your local file services. It works super fast for things like the duplicates system. I also put some time into the duplicate filter this week. The logic of the queue is improved again, so some rare errors when reaching the end of a batch should be fixed. I also integrated manual file deletes into the queue processing: now, when you manually delete a file, or both, the deletes will not happen until you commit--just like the other decisions you are making--and they are undoable if you select 'forget' or go back a pair. You also won't see a file you manually deleted again in a batch (it'll auto-skip if that file comes up again). 
Also, the duplicate filter now has a little 'send pair to page' button, which publishes the current pair to the duplicates page that made the filter, just in case you want to save them for some extra processing after you are done filtering. You can do this with multiple pairs and they'll just stack up in the page. A couple other neat things happened in last week's advanced-user-only release, which I will repeat here: The 'media viewers' shortcut set has three new zoom actions: 'switch between 100% and max', 'switch between canvas and max', and 'zoom to max'. When you enter pairs in the tag sibling dialog, it shouldn't complain about loops anymore, but instead automatically break them, just like how it will auto-petition an A->B, A->C conflict. full list - misc: - updated the duplicate filter 'show next pair' logic again, mostly simplification and merging of decision making. it _should_ be even more resistant to weird problems at the end of batches, particularly if you have deleted files manually - a new button on the duplicate filter right hover window now appends the current pair to the parent duplicate media page (for if you want to do more processing to them later) - if you manually delete a file in the duplicate filter, if that file appears again in the current batch of pairs, those will be auto-skipped - if you manually delete a file in the duplicate filter, the actual delete is now deferred to when you commit the batch! it also undoes if you go back! - fixed a bug when editing the external program launch paths in the options - fixed an annoying delay-and-error-popup when clearing the separator field when editing a String Splitter. 
now the field just turns red and vetoes an OK with a nicer error text - also improved how string splitters report actual split errors - if you are in advanced mode, the _review services_ panels now have an 'id' button that lets you fetch the database service id - wrote a new database maintenance routine under _database->check and repair->resync tag mappings cache files_, which is a lightweight way of fixing ghost files or situations where files with a tag are neither counted nor appear in file results. this fixes these problems in a couple minutes, so for this it is much better than a full regen of the cache - . - cleanup and other boring stuff: - the archive/delete filter now says which file domain it will be deleting from - if an archive/delete filter is launched on a 'multiple locations' file domain, it is now careful to only make delete records for the deleted files for the file services each one is actually in - renamed the 'default local file search location' option to 'fallback' and updated its tooltip a bit. this was really a hacky thing I needed to fill some gaps while rewriting from 'my files' to multiple local file services. the whole thing needs more attention to become more useful. I also fixed an issue where it could become invalid 'nothing' if you deleted a file service it was referring to (issue #1155) - I think I fixed a rare 'did not find info for that file' style problem when highlighting some watchers/downloaders - I think I have silenced some unhelpful BeautifulSoup (html parser) warnings that were spamming to the log in some situations - updated last week's big update to work with TRUNCATE journalling mode. 
I will be doing this for other big updates going forwards, since multi-GB WAL transactions cause problems for some users - last week's update also gives a time estimate in its pre-popup, based on 60k files per minute - removed some old database cache data that wasn't cleared in a previous update - a variety of misc UI text fixes and cleanup next week I regret I did not have time for a larger import/export framework. It will have to wait. I have one more week of work before my vacation week, so I will try to just do some small cleanup and polishing so the release is 'clean' before my break.
>>8563 nice, hopefully the rules come soonish; that would make going through them a bit easier. I definitely want to check out some things in 487, as they are things I made workarounds for, like pushing images to a page. I currently have a rating that does something similar for when I want to check a file a bit closer, be it a comic page I want to reverse search or something whose source I want to find; this may be a better option.
>switch to arch linux from windows >get hydrus running >use retarded samba share on nas for the media folder >permission error from the subscription downloader >can view and search my images fine otherwise, in both hydrus and file manager Any idea which permissions would be best to change? I'm retarded when it comes to fstab and perms, but I know not to just run everything as root. I just can't figure out if it's something like the executable's permissions/owner, the files' permissions/owner, or something retarded in how I mount it. Pictured are the error, the fstab entry, the hydrus client's permissions, and the permissions for everything in the samba share. The credentials variable in fstab is a file that only root can read, for slight obfuscation of credentials according to the internet. The rest to the right was stuff I added to allow myself to manipulate files in the samba share, again just pulled from random support threads.
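For reference, this is the general shape of fstab entry I've been experimenting with. uid/gid/file_mode/dir_mode are standard mount.cifs options that control who owns the mounted files and whether they are writable; the server, share, mount point, and id values here are placeholders for my setup, not known-good advice:

```
# hypothetical fstab entry -- adjust server, share, mountpoint, and ids
//nas/hydrus_media  /mnt/hydrus_media  cifs  credentials=/etc/samba/creds,uid=1000,gid=1000,file_mode=0664,dir_mode=0775,iocharset=utf8  0  0
```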
>>8618 >Happens to me when I choose to delete one or both pictures of the last pair presented. The assumed to be deleted picture stays on screen and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index out of range" or "DataMissing". I believe cloning the database with the sqlite program deletes the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work. Appears fixed for me with v487 - Thanks.
Perhaps, another bug?: >file>options>files and trash>Remove files from view when they are sent to trash. Checking/Unchecking has the desired result with watchers and regular files but does not seem to work anymore with newly downloaded files published to their respective pages. Here, the files are merely marked with the trash icon but not removed from view, as had been the case (for me) until version 484.
>>8627 It seems like I can manipulate files within the samba drive but it spits out an error when moving from the OS drive to there. So I guess it's some kind of samba caching problem.
I have noticed some odd non-responsiveness with the program. It is hosted on an SSD. While in full-screen preview browsing through files to archive or delete, sometimes the program will stop responding for approximately 10 seconds when browsing to the next file (usually a GIF but not always). The next file isn't large or long or anything. I'm not sure what's causing this issue. Is it just the program generating a new list of thumbnails?
>>8641 I also wanted to note this issue is not unique to this most recent update. It has been there for a while.
>>8641 >>8642 I guess I should also reiterate that the program AND the database are both hosted on the same drive (default db location)
well this is a first, the png in a pixel-for-pixel comparison against a jpeg was the smaller file... I'm guessing that jpeg is hiding something.
>>8566 Ah shit, if you have 'unknown tag:abcdef...' garbage, this is strong evidence that you have actually had database damage (to client.master.db), most likely through a hard drive blip. This probably also explains why your searches were jank--your 'client.caches.db' was probably damaged as well. I don't think there is a way to figure out which original tags those 'unknown tag:blah' actually referred to, at least no simple easy one. Basically when the client tried to rebuild your cache, it found gaps in the definition table and filled them with random but valid data. Your next step is to read the 'help my db is broke.txt' document in install_dir/db directory. This has background reading about the nature of hard drive problems and things you should do to check your drive is ok and your database files are ok. If you have a recent backup, hold on to it! If you have a backup, we may be able to recover your bad tags. But before then, make sure everything is safe now and there aren't more problems. Let me know how you get on! >>8568 Thank you! I'll see what I can do. >>8608 I hope that as the duplicate system gets more tech, this will be more possible. Hydrus works on exact file content, so it will never natively support editing, but I hope we'll have smooth and effective 'copy all metadata from this file to this file' tech, including for other conversions like jpegxl, waifu2x, or video re-encoding. For now, though, hydrus is really for 'finished' files.
>>8618 >>8630 Great, thanks for letting me know. >>8619 I expect to do a big push on duplicates in Q4 this year or Q1 2023. I really want to have better presentation, basically an analogue to how danbooru shows 'hey, this file has a couple of related files here (quicklink) (quicklink)'. Estimating timeframes is always a nightmare, so I'll not do it, but I would like this, and duplicates are a popular feature for the next 'big job'. At the moment, there is a decent amount of transitive logic in the duplicates system. If A-dup-B, and B-dup-C, then A-dup-C is assumed. Basically duplicates in the hydrus database are really a single blob of n files with a single 'best' king, so when you say 'this is better' you are actually merging two blobs and choosing the new king. I have some charts at the bottom of this document if you want to dive into the logic some more. https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced But to really get a human feel for this, I agree, we need more UI to show duplicate relationships. It is still super technical, opaque, and not fun to use. >>8627 >>8636 I'm afraid I am no expert on this stuff. The 'utime' bit in that first traceback is hydrus trying to copy the original file's modified time from a file in your temp directory to the freshly imported file in the hydrus file system, so if the samba share has special requirements for that sort of metadata modification, that's your best bet I think.
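That blob-merge model can be sketched as a tiny union-find, where each group of duplicates tracks its current king. This is a toy illustration of the merge-and-crown idea only--the class and method names are made up, not hydrus's actual internals:

```python
class DuplicateGroups:
    """Toy model: every file belongs to one group; each group has one 'king' (best file)."""

    def __init__(self):
        self._parent = {}  # file -> representative file of its group
        self._king = {}    # representative -> current best file of that group

    def _find(self, f):
        # union-find 'find' with path halving
        self._parent.setdefault(f, f)
        self._king.setdefault(f, f)
        while self._parent[f] != f:
            self._parent[f] = self._parent[self._parent[f]]
            f = self._parent[f]
        return f

    def set_better(self, better, worse):
        """Record 'better is a superior duplicate of worse': merge the two
        groups and crown the winner as the merged group's king."""
        rb, rw = self._find(better), self._find(worse)
        if rb != rw:
            self._parent[rw] = rb
        self._king[rb] = better

    def king_of(self, f):
        return self._king[self._find(f)]
```

So after 'A better than B' and 'A better than C', B and C land in one group whose king is A--which is the assumed A-dup-C transitivity from the post above.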
>>8631 Thank you, I will check this! The specific rules that govern when it is and isn't correct to apply this option are frustratingly complicated, and adding multiple local file services made it moreso. I'll have a play and see what I can figure out. >>8641 >>8642 >>8643 Thank you for this report. Sometimes this is my fault, sometimes it is something else. Since your files are on a fast SSD, we can rule out some weirder things like NAS directory scan times, but I do know that Windows anti-virus got a lot more aggressive in the past couple of years, and pretty much any file you access gets a scan before it is loaded. This can cause a ~50-150ms delay on some video files in hydrus, which are not pre-cached yet. Maybe maybe if anti-virus was working hard, and search indexer was also going bananas as it sometimes does, and your client was working hard doing imports and things, all the locks would add up and it would halt. 10 seconds sounds like a bigger problem though. Can you try turning off the 'normal time' sync under tags->sibling/parent sync->sync during normal time? Does that free you up a bit? You can check the 'review' panel on that same sub-menu to see if your client has a lot of catch-up work to do there. But that's probably only applicable if you sync with the PTR. Do you have a lot of imports, btw? Like do you have 25+ active file import queues, be they downloaders or hard drive imports or whatever, running at once? It could just be the file system is overwhelmed with new writes and can't serve you the read request for the gif. Otherwise, please check help->debug->profiling->profile mode. There's a 'what's this?' on the same menu to show you how to use it. Run that for a bit and see if you can capture a freeze, then pastebin or email me the profile log and I'll see if anything helpful was recorded. >>8644 Wow, yeah, that's the first time I have seen that too. I assume this image is just an anime babe or something and nothing like a crazy geometric pattern? 
If you are ok sharing, I'd be interested to either have that jpeg or get a link to a booru it is on, just so I can check it out myself. No worries if you don't want to share. There's probably some EXIF browsing programs out there that might be able to expose it. Another trick is just to export, rename to .zip, and see if 7zip will open it. Some hidden archives are literally just appended to the end of the image file data.
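The 'appended archive' case is also easy to check in code: look for a ZIP local-file header (PK\x03\x04) after the JPEG end-of-image marker (FF D9). A rough sketch of the idea only--it is naive about JPEGs whose embedded thumbnails contain an early FF D9:

```python
def find_appended_zip(data: bytes):
    """Return the offset of a ZIP local-file header that appears after the
    first JPEG end-of-image marker (FF D9), or None if nothing is found."""
    eoi = data.find(b'\xff\xd9')
    if eoi == -1:
        return None  # not a complete JPEG
    zip_sig = data.find(b'PK\x03\x04', eoi + 2)
    return zip_sig if zip_sig != -1 else None
```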
>>8643 >>8647 There is no downloading or synching being done. Client is basically running stock, with no tags or anything (not even allowed to access the internet yet). Think it might be AV? Running Kaspersky on Low (uses very little resources for automated scanning).
>>8648 >>8647 Also, no active running imports. Just an open import window with about 60k files for me to sift through.
>>8649 >>8647 I tried it with an exclusion for the entire Hydrus folder for automated scanning but the problem persists so I don't think its AV related.
Would it be possible to add a sort of sanity check to modified times to prevent obviously wrong ones from being displayed? I've noticed a few files downloaded from certain sites since modified times were added to Hydrus show a modified time of over 52 years, which makes me think that files from sites which don't supply a time are given a 0 epoch second timestamp. In this case I think it would be better to show a string like "Unknown modification time" or none at all.
>>8652 Also, if I try to download the same file from a site that does have modified times, the URL of the new site is added but the modified time stays the incorrect 52 years. Maybe there could be an option to replace modified times for this query/always if new one found/only if none is already known (or set to 1970). I also couldn't find a way to manually change modified time, but maybe I didn't look hard enough.
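The sanity check could be as simple as discarding epoch-ish values before taking the min over the per-site times. A sketch of the idea only--the 1980 cutoff is an arbitrary assumption, and this is not hydrus's actual logic:

```python
# Treat anything before 1980-01-01 UTC as 'no real modified time was supplied'.
SANITY_CUTOFF = 315532800  # 1980-01-01 in epoch seconds

def aggregate_modified_time(timestamps):
    """Aggregate per-site modified times into one display value, ignoring
    obviously bogus (epoch-ish) entries. Returns None if nothing plausible
    survives, so the UI could show 'unknown' instead of '52 years'."""
    plausible = [t for t in timestamps if t >= SANITY_CUTOFF]
    return min(plausible) if plausible else None
```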
I've gotten my instance of Hydrus into a state where the "parent/sibling sync" process is stuck. I have several parent/child pairs that were working fine, and running on ~v450, but recently I added a few more and after applying realized some were the wrong way around parent/child-wise. I went back in and edited the parent tag configs to delete the bad ones and re-add them with the tags the right way around. But it seems my instance has stopped processing the tag updates. tags > parent/sibling sync > review parent/sibling maintenance showed it was aware there was more work to do, but stayed stuck at the same percent done for over 12 hours, even when I clicked the "work hard now!" button and had it set to sync "all the time" (not just during idle time). I used database > regenerate > tag storage mapping cache (all), which caused the "maintenance" window to go back to zero percent done, but it has not progressed past zero percent done for over 24 hours. I'm not sure the "maintenance" is even doing anything, as the Hydrus client process in task manager isn't using much CPU/RAM/Disk at all. I upgraded to v487, but no change in symptoms. This instance has 85 parent configs set, 5,000 files in it, has no subscriptions/services/downloaders, and is only using local tags, running on Windows 10. 
The client log seems to have no errors that seem related to a parent/child sync issue, but one error does pop up on each startup:

Traceback (most recent call last):
File "hydrus\core\HydrusThreading.py", line 401, in run
callable( *args, **kwargs )
File "hydrus\client\metadata\ClientTagsHandling.py", line 514, in MainLoop
self._controller.WaitUntilViewFree()
File "hydrus\client\ClientController.py", line 2279, in WaitUntilViewFree
self.WaitUntilThumbnailsFree()
File "hydrus\client\ClientController.py", line 2284, in WaitUntilThumbnailsFree
self._caches[ 'thumbnail' ].WaitUntilFree()
KeyError: 'thumbnail'
File "threading.py", line 890, in _bootstrap
File "threading.py", line 932, in _bootstrap_inner
File "hydrus\core\HydrusThreading.py", line 416, in run
HydrusData.ShowException( e )
File "hydrus\core\HydrusData.py", line 1215, in PrintException
PrintExceptionTuple( etype, value, tb, do_wait = do_wait )
File "hydrus\core\HydrusData.py", line 1243, in PrintExceptionTuple
stack_list = traceback.format_stack()
>>8647 I would send it to ya, but I dumped the trash before I saw your response. I have seen a few of these so far; if I find another I'll send it to ya.
>>8656 Update on this issue: I tried exporting all my parent tags, then deleting all the parent tag configurations and using the database > regenerate > tag storage mapping cache (all), which caused the "maintenance" window to indicate there's no work to do. I then added back in one parent tag from my original set (that only applied to 5 files in the repository) and the "maintenance" window says there's now one parent to sync, but isn't actually processing that one parent.
>>8648 >>8649 >>8650 Hmm, if you have a pretty barebones client, no tags and no clever options, then I am less confident what might be doing this. I've seen some weird SSD driver situations cause superlag. I recommend you run the profile so we can learn more. >>8652 >>8655 Thanks, can you point me to some example URLs for these? I do have a sanity check that is supposed to catch 1970-01-01, but it sounds like it is failing here. The good news is I store a separate modified time for every site you download from, so correcting this retroactively should be doable and not too destructive. I want to add more UI to show the different stored modified times and let you edit them individually in future. At the moment you just get an aggregated min( all_modified_times ) value.
>>8656 >>8662 Damn, this is not good. I'm sorry for the trouble and annoyance. Have you seen very slow boots during this? That thumbnail cache is instantiated during an early stage of boot, so it looks like the sibling/parent sync manager is going bananas as soon as it starts. I have fixed the bug, I think, for tomorrow's release. That may help your other issue, which is the refusal to finish outstanding work, but we'll see. Give tomorrow's release a go, and if it gets to a '95% done' mode again and won't do the last work, please try database->regenerate->tag parents lookup cache. While the 'storage mappings cache' reset will cause the siblings and parents to sync again, the 'lookup' regen actually does the mass structure that holds all the current relationships. It sounds like I have a logical bug there when you switch certain parents around. You don't have to say the exact tags if you don't want, but can you describe the exact structure of the revisions you made here? Was it simply flipped parent-child relationships, so you had 'evangelion->ayanami rei', and it should have been 'ayanami rei->evangelion'? Were there any siblings involved with the tags, and did the parent tags that were edited have any other parent relationships? I'm wondering if there is some weird cousin loop I am not detecting here, or perhaps detecting but not recognising as creating outstanding sync work. Whatever the case, let me know how you get on with this!
I had a good week. I did some simple work to make a clean release before my vacation. The release should be as normal tomorrow.
>>8665 Yes, I did have a few very slow startups: a few times it took like two hours for the UI to show, though I could see the process was indeed started in task manager. Thanks; I'll try tomorrow's release and see if that helps anything. Parent-tag-wise, the process I think I was doing right before it failed was I had a bunch of things tagged with something generic, which had one level of namespacing (e.g. "location:outdoor"), and I decided to make a few more-specific tags (e.g. "location:forest", "location:driving", and "location:beach"; all of which should also get "location:outdoor" as a "parent"). But I first created the parent relationship the wrong way and didn't notice it (so everything that was "outdoor" would now get three additional tags added to it). I saved the parent config and started manually re-tagging (e.g. remove "outdoor" and add "beach" for those that were in that subgroup), and after doing a few I noticed the F3 tagging window wasn't showing the "parent" tag yet (wasn't showing "outdoor" nested under "beach"), and so I went back to the tag manager and realized they were wrong, so deleted the relationship and re-added them the right way and continued re-tagging. After a while I noticed it still hadn't synced, and realized it didn't seem to be progressing any more, and started triaging to see if it was a bug. None of them had siblings defined.
>>8664 >Thanks, can you point me to some example URLs for these? It looks like this is only affecting permanent booru. I'm using pic related posted in one of these threads. Here's a SFW example URL: http://owmvhpxyisu6fgd7r2fcswgavs7jly4znldaey33utadwmgbbp4pysad.onion/post/3742726/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa It may be of note that the "direct file URL" is from IPFS, and the following onion gateway URL is added to the file's URLs as well: http://xbzszf4a4z46wjac7pgbheizjgvwaf3aydtjxg7vsn3onhlot6sppfad.onion/ipfs/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa The same file is available here with a correct modification time (2022-02-27): https://e621.net/posts/3197238 The modified time in the client shows 52 years 5 months, which is in January 1970. Not sure if there's an easy way to see the exact time.
>>8645 >but I hope we'll have smooth and effective 'copy all metadata from this file to this file' tech Couldn't you just make a temporary "import these files and use _ as _ to find alternates, then do _ if _" for now? Like "import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller"? I mean it sounds like too much when you write it out like that, but the underlying logic should be pretty simple.
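To be clear about the idea above: this is a purely hypothetical sketch of such a rule, nothing in hydrus implements it — it just shows that the underlying "match on filename-as-hash, then compare and act" logic is simple to express. The dict shapes and the action tuple are made up for illustration.

```python
# Hypothetical sketch of the rule described above: "import these files and
# use the filename as the original file hash, then set imported as better
# and delete the other if imported is smaller". No such feature exists in
# hydrus; the data shapes here are invented for illustration only.
def apply_import_rule(imported, existing_lookup):
    # 'use the filename as the original file hash'
    original = existing_lookup.get(imported['filename'])
    if original is None:
        return None  # no matching original, nothing to do
    # 'set imported as better and delete the other if imported is smaller'
    if imported['size'] < original['size']:
        return ('set_better_and_delete_worse', imported, original)
    return None

existing = {'abcd1234': {'size': 2048}}
print(apply_import_rule({'filename': 'abcd1234', 'size': 1000}, existing))
```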
https://www.youtube.com/watch?v=AQOfIENN2tk windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.Linux.-.Executable.tar.gz I had a good simple week making a clean release before my vacation. Everything is misc this week, nothing earth-shattering, just a bunch of cleanup and little stuff. If you have any wavpack files, try importing them! full list - the client now supports 'wavpack' files. these are basically a kind of compressed wav. mpv seems to play them fine too! - added a new file maintenance action, 'if file is missing, note it in log', which records the metadata about missing files to the database directory but makes no other action - the 'file is missing/incorrect' file maintenance jobs now also export the files' tags to the database directory, to further help identify them - simplified the logic behind the 'remove files if they are trashed' option. it should fire off more reliably now, even if you have a weird multiple-domain location for the current page, and still not fire if you are actually looking at the trash - if you paste an URL into the normal 'urls' downloader page, and it already has that URL and the URL has status 'failed', that existing URL will now be tried again. let's see how this works IRL, maybe it needs an option, maybe this feels natural when it comes up - the default bandwidth rules are boosted. the client is more efficient these days and doesn't need so many forced breaks on big import lists, and the internet has generally moved on. thanks to the users who helped talk out what the new limits should aim at. 
if you are an existing user, you can change your current defaults under _network->data->review bandwidth usage and edit rules_--there's even a button to revert your defaults 'back' to these new rules - now like all its neighbours, the cog icon on the duplicate right-side hover no longer annoyingly steals keyboard focus on a click. - did some code and logic cleanup around 'delete files', particularly to improve repository update deletes now we have multiple local file services, and in planning for future maintenance in this area - all the 'yes yes no' dialogs--the ones with multiple yes options--are moved to the newer panel system and will render their size and layout a bit more uniformly - may have fixed an issue with a very slow to boot client trying to politely wait on the thumbnail cache before it instantiates - misc UI text rewording and layout flag fixes - fixed some jank formatting on database migration help next week I am now off for a week. I think I need it! I'm going to play a ton of vidya, shitpost the big streams that are happening, fit some Wagner in, and get on top of outstanding IRL stuff. I'll be back to catch up my messages on Saturday the 18th. Thanks everyone!
trying to use Hydrus for the first time; is there a way to add a subscription for videos specifically, so that it leaves out photos?
(480.53 KB 640x360 shitposting.gif)

>>8675 Have a nice vacation OP and watch out for fucking normies.
id:6549088 from gelbooru (nsfw), with the download decompression bomb check deactivated. When downloading this specific picture, before it finishes downloading, it makes the program jump to 3 GB of RAM until I close it. It opens normally in a browser, but spikes to 3 GB in hydrus, and since I only have 4 GB it makes the PC freeze. Just wanted to report on that. Also, not a native english speaker here.
>>8679 forgot, using version 474
>>8668 Reporting in that v488 seems to have fixed both these bugs. There's no longer the thumbnail exception being logged, the startup time to get to a UI window is quicker, and the parent-sync status un-stuck itself. Hooray!
>>8645 This is about what I figured. I pulled the database from a dying hard drive a few months ago. Every integrity scan between now and then ran clean, but I had a suspicion something had gotten fucked up somewhere along the line. Since it's been a minute, any backups are either also corrupted, or too old to be useful. Luckily, re-constructing them hasn't been too painful. I made an "unknown tag:*anything*" search page, then right-click->search individual tags to see what's in them. Most have enough files in them to give context to what it used to be, so I'll just replace it. It's been a good excuse to go through old files, clean up inconsistent tags, set new and better parent/sibling relationships, etc, so it's actually been quite pleasing to my autisms. I had 80k files in with an unknown tag back when I started cleaning up, and now I'm down to just under 40k. I'm sure I've lost some artist/title tags from images with deleted sources, or old filenames, but all in all, it could be much worse.
Thanks man! Have a good vacation!
>>8676 if you're just subscribing to a booru, they will generally have a "video" tag. you can add "video" to the tag search.
>>8703 nope, not a booru. So there isn't a way to filter that. awh.
Is there any way to get Hydrus to automatically tag images with the tags present in the metadata? Specifically the tags metadata field; my whole collection was downloaded using Grabber.
>>8710 my*
>>8709 What website is it? You might be able to add to/alter the parser to spit out the file type by reading the json or file ending, then use a whitelist to only get certain file endings (i.e. videos)
I've been using hydrus for a while now and am in the process of importing all my files. Is there any downside to checking the "add filename? [namespace]" button while importing? I think I've got over 300k images, so it would create a lot of unique tags if that would be a problem.
About how long do you estimate it might take before hydrus will be able to support any file type? I specifically need plaintext files and html files (odd, I know), if that makes a difference. The main thing is just that it'd be nice to have all my files together in hydrus instead of needing to keep my html and (especially) my text files separate from the pics and vids. Also, I'm curious: why can't hydrus simply "support" all filetypes by just having an "open externally" button for files it doesn't have a viewer for? It already does that for things like flash files, after all.
>>8627 >>8636 >>8646 It seems to be working now; not sure what changed, but somehow arch doesn't always mount the samba directory anymore and needs a manual command on boot now, which it didn't before. Maybe it was some hiccup, maybe some package I happened to install as I installed more crap, maybe it was a samba bug that got fixed in an update.
Is there a way to reset the file history graph, under Help?
>>8668 >>8681 Great, thanks for letting me know! >>8671 Thank you. The modified date for that direct file was this: Last-Modified: Thu, 01 Jan 1970 00:00:01 GMT I thought my 'this is a stupid date m8' check would catch this, but obviously not, so I will check it! Sorry for the trouble. I'll have ways to inspect and fix these numbers better in future. >>8674 I'm sorry to say I don't understand this: >"import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller" But if you mean broadly that you want some better metadata algebra for mass actions, I do hope to have more of this in future. In terms of copying metadata from one thing to another, I just need to clean up and unify and update the code. It is all a hellish mess from my original write of the duplicates system years ago, and it needs work to both function better and be easier to use
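For reference, a minimal sketch of the kind of 'this is a stupid date m8' sanity check being described — this assumes the fix is simply rejecting any parsed Last-Modified timestamp that lands within a week of the Unix epoch, which is an assumption about the approach, not hydrus's actual code:

```python
import email.utils

# Reject Last-Modified values within a week of the Unix epoch, which are
# almost certainly server placeholder junk like 'Thu, 01 Jan 1970 00:00:01 GMT'.
EPOCH_GRACE_SECONDS = 7 * 86400

def parse_modified_date(header_value):
    # parsedate_to_datetime handles RFC 2822 dates as sent in HTTP headers
    dt = email.utils.parsedate_to_datetime(header_value)
    timestamp = dt.timestamp()
    if timestamp < EPOCH_GRACE_SECONDS:
        return None  # obviously bogus; keep the existing/import time instead
    return timestamp

print(parse_modified_date('Thu, 01 Jan 1970 00:00:01 GMT'))  # None
```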
>>8676 >>8703 >>8709 >>8716 In the nearish future, I will add a filetype filter to 'file import options', just like Import Folders have, so you'll be able to do this. Sorry for the trouble here, this will be better in a bit! >>8679 >>8680 I'm sorry, are you sure you have the right id there? gif of the frog girl from boku no hero academia? I don't have any trouble importing or viewing this file, and by the looks of it, it doesn't seem too bloated, although it is a 30MB gif, so I think your memory spike was something else that happened at the same time as (and probably blocked) the import. Normally, decompression bombs are png files, stuff like 12,000x18,000 patreon rewards and similar. I have had several reports of users with gigantic memory spikes recently, particularly related to looking at images in the media viewer. I am investigating this. Can you try importing/opening that file again in your client and let me know if the memory spike is repeatable? If not, please let me know if you still get memory spikes at other times, and more broadly, if future updates help the situation. Actually, now I think of it, if you were on 474, I may have fixed your gigantic memory issue in a recent update. I did some work on more cleanly flushing some database journal data, which was causing memory bloat a bit like you saw here, so please update and then let me know if you still get the problem. >>8688 Good luck!
>>8710 Not yet. I don't inspect EXIF much yet, but I expect some sort of retroactive parser in future. Or I wouldn't be surprised if a user figures out a Client API tool to do this. Unless you mean NTFS tags, in which case I am even less expert. I know there are some tools that can convert NTFS tags into xml files, and I know a user once did that and then munged those files into .txt files for tag import, but I've never done any of that stuff myself. >>8721 If you do this, make a new tag service for your filename tags under services->manage services->add->local tag service. Call it 'filenames' or something. The downside is these tags are messy. 300k tags won't add much lag, maybe 0.5-2% slower file load kind of thing. But they will get in the way, and most users find they don't actually want them all that often. Putting them in another service puts them in a little box on their own where it is easier to hide, compartmentalise, and potentially delete them in future without affecting your 'real' search tags. >>8722 Not sure. It is number 6 on the 'big stuff' list here: https://github.com/hydrusnetwork/hydrus/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc 'Support more filetypes / arbitrary file import ', so I see it happening, and very likely within the next three years. I also want to store my .txt and .html files. I have thousands of fanfics from a sordid past life of mine that I want to categorise. The problem I need to overcome is that hydrus is currently predicated on the ability to infer a filetype based on file content alone. The reasons for this are some bullshit technical stuff mostly related to maintenance and weird downloads, but it is currently needed. If you toss a file called 'file.file' at hydrus, it needs to be able to figure out if it is a jpeg or mp4 just looking at its insides. Most media files have rigid formats, literally a few bytes at like 'offset 8 bytes, WEBP', that make it easy to recognise them very quickly. 
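As a minimal sketch of the magic-number sniffing described above — the offsets here are the real magic bytes for these few formats, but this is a toy version; hydrus's actual implementation covers far more types and edge cases:

```python
# Content-based filetype sniffing: check known magic bytes at known offsets.
# WEBP is the interesting one -- it lives in a RIFF container, so the 'WEBP'
# marker sits at offset 8, exactly as described in the post above.
HEADERS = [
    (b'\xff\xd8\xff', 0, 'image/jpeg'),
    (b'\x89PNG\r\n\x1a\n', 0, 'image/png'),
    (b'GIF8', 0, 'image/gif'),
    (b'WEBP', 8, 'image/webp'),
]

def sniff_mime(first_bytes):
    for magic, offset, mime in HEADERS:
        if first_bytes[offset:offset + len(magic)] == magic:
            return mime
    return None  # unrecognised -- the hard case for .txt, as discussed below

print(sniff_mime(b'RIFF\x00\x00\x00\x00WEBPVP8 '))  # image/webp
```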
Text and HTML have very dynamic content, so figuring out what they are is more tricky. Before I allow all files, I may be able to straight up support text and html, but there will still be problems. HTML is doable since you basically run it through a parser and see if it raises an error, but then you have to determine if it was HTML or XML. I expect to start work on this soon since some formats (SVG and some other open-source image-editing formats) are just XML, so I'll start recognising the broad category of XML and then try recognising keywords or these little XML 'this is what I am' tags, I think they are called DTD or something, and we may just fall into HTML support by happy accident. Raw .txt is much more difficult. A one-byte text file with 'a' is as much a text file as a book in japanese unicode is. I probably can't recognise that versus any other arbitrary format, although I can probably get a high confidence guess. Supporting arbitrary files will require some import and maintenance rejigging. I'll have to no longer know that I can always figure out the mime of a file, and I'll have to pass the mime along from the original file extension or whatever. ALSO there are secondary issues like, at the moment, if the hydrus downloader runs into an HTML file when it expected a jpeg (e.g. some kind of fucked up 404 message that gave 200 instead of 404, which happens sometimes), I raise an 'ignored' error and say 'I think this downloader needs to be taught how to parse this document'. But when we can support HTML, what do I do then? Do I import the HTML error page as a file? I'll have to do something to the import workflow in general to say when text/html is ok and not ok. I'm leaning towards allowing text files on hard drive import, and then disallowing it on downloader import unless the URL Class specifically specifies it, but how the hell I make that user-friendly I'm not sure yet. Anyway, sorry, I went on a bit there, but that's the basic background. 
It will come, but it will be a big job, so I need to clear out some other things first. I'm basically done with multiple local file services now, so I'm moving on to some server updates and janny workflow improvements for my next big job. We'll see if that takes me all the rest of this year, but I hope I can clear it out faster, and then move on to the next thing.
>>8723 Great, let me know how things go in future! >>8725 What part would you like to 'reset'? All the data it presents is built on real-world stuff in your client, like actual import and archive times. Do you want to change your import times, or maybe clear out your deleted file record?
I had a good week. I did a mix of cleanup and improvements to UI and an important bug fix for users who have had trouble syncing to the PTR. The release should be as normal tomorrow.
when trying to do a file relationship search, is there a way to search for same quality duplicates. I don't see any way to do that, and every time I look at the relationships of a file manually, it's always a better/worse pair. Does Hydrus just randomly assign one of the files as being better when you say that they're the same quality?
https://www.youtube.com/watch?v=6rboksqjPy4 windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Linux.-.Executable.tar.gz I had a good week getting back into the swing of things. I fixed some important bugs and improved some UI. highlights All the downloader pages--gallery, watcher, urls, and simple--have a revamped status system. All the text that shows how file or gallery downloads are going is now generated in a better way, with more error states (e.g. it will tell you when your gallery stopped because it hit the file limit, or when one of the emergency pause states under the network menu has kicked in), and logic in edge cases is improved. Everything is unified now, so the texts are the same across all pages. Also, if a gallery query or watched thread is 'pending', its text now reports that it is waiting for a work slot, rather than staying blank. There _shouldn't_ be any situations now where a downloader is unpaused with work to do but has blank status. If you use the multiple local file services system, the archive/delete filter now presents more options when you are done. If the files are in more than one local file service, you can choose where you delete them from, including all applicable. This was confusing and opaque before, so I hope this makes it more clear what is happening and gives you more choice. I _believe_ I have fixed an important bug some users were having with PTR processing. There was an annoying issue about a 'definitions' file being seen as a 'content' file, or vice versa, that the automatic maintenance could not fix. 
I finally managed to reproduce the issue and fixed it. I scheduled a fix in the update this week, so if you have been hit by this, please wait for one more round of file maintenance 'metadata' scans, and then unpause the PTR one more time. Essentially, I think I fixed the automatic maintenance. Let me know how you get on! full list - downloader pages: - greatly improved the status reporting for downloader pages. the way the little text updates on your file and gallery progress are generated and presented is overhauled, and tests are unified across the different downloader pages. you now get specific texts on all possible reasons the queue cannot currently process, such as the emergency pause states under the _network_ menu or specific info like hitting the file limit, and all the code involved here is much cleaner - the 'working/pending' status, when you have a whole bunch of galleries or watchers wanting to run at the same time, is now calculated more reliably, and the UI will report 'waiting for a work slot' on pending jobs. no more blank pending! - when you pause mid-job, the 'pausing - status' text that is generated is a little neater too - with luck, we'll also have fewer examples of 64KB of 503 error html spamming the UI - any critical unhandled errors during importing proper now stop that queue until a client restart and make an appropriate status text and popup (in some situations, they previously could spam every thirty seconds) - the simple downloader and urls downloader now support the 'delay work until later' error system. actual UI for status reporting on these downloaders remains limited, however - a bunch of misc downloader page cleanup - . 
- archive/delete: - the final 'commit/forget/back' confirmation dialog on the archive/delete filter now lists all the possible local file domains you could delete from with separate file counts and 'commit' buttons, including 'all my files' if there are multiple, defaulting to the parent page's location at the top of the list. this lets you do a 'yes, purge all these from everywhere' delete or a 'no, just from here' delete as needed and generally makes what is going on more visible - fixed archive/delete commit for users with the 'archived file delete lock' turned on - . - misc: - fixed a bug in the parsing sanity check that makes sure bad 'last modified' timestamps are not added. some ~1970-01-01 results were slipping through. on update, all modified dates within a week of this epoch will be retroactively removed - the 'connection' panel in the options now lets you configure how many times a network request can retry connections and requests. the logic behind these values is improved, too--network jobs now count connection and request errors separately - optimised the master tag update routine when you petition tags - the Client API help for /add_tags/add_tags now clarifies that deleting a tag that does not exist _will_ make a change--it makes a deletion record - thanks to a user, the 'getting started with files' help has had a pass - I looked into memory bloat some users are seeing after media viewer use, but I couldn't reproduce it locally. I am now making a plan to finally integrate a memory profiler and add some memory debug UI so we can better see what is going on when a couple gigs suddenly appear - . - important repository processing fixes: - I've been trying to chase down a persistent processing bug some users got, where no matter what resyncs or checks they do, a content update seems to be cast as a definition update. fingers crossed, I have finally fixed it this week. it turns out there was a bug near my 'is this a definition or a content update?' 
check that is used for auto-repair maintenance here (long story short, ffmpeg was false-positive discovering mpegs in json). whatever the case, I have scheduled all users for a repository update file metadata check, so with luck anyone with a bad record will be fixed automatically in the background within a few hours of background work. anyone who encounters this problem in future should be fixed by the automatic repair too. thank you very much to the patient users who sent in reports about this and worked with me to figure this out. please try processing again, and let me know if you still have any issues - I also cleaned some of the maintenance code, and made it more aggressive, so 'do a full metadata resync' is now even more uncompromising - also, the repository updates file service gets a bit of cleanup. it seems some ghost files have snuck in there over time, and today their records are corrected. the bug that let this happen in the first place is also fixed - there remains an issue where some users' clients have tried to hit the PTR with 404ing update file hashes. I am still investigating this next week I ended up doing more cleanup this week than I expected, but I'm happy to have the downloader pages reporting better. They were a real knot before. I want to spend a little admin time next week, triaging final multiple local file services work and planning future server improvements for when that is done, and then I think I'd like to focus on more small jobs, including some github issues.
>>8743 Yes, 'same quality' actually chooses the current file to be the better, just as if you clicked 'this is better', but with a different set of merge options. The first version of the duplicate system supported multiple true 'these are the same' relationships, but it was incredibly complicated to maintain and didn't lend itself to real world workflows, so in the end I reinvented the system to have a single 'king' that stands atop a blob of duplicates. I have some diagrams here: https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced I don't really like having the 'this is the same' ending up being a soft 'this is better', but I think it is an ok compromise for what we actually want, which is broadly to figure out the best of a group of files. If they are the same quality, then it doesn't ultimately matter much which is promoted to king, since they are the same. I may revisit this topic in a future iteration of duplicates, but I'm not sure what I really want beyond much better relationship visibility, so you can see how files are related to each other and navigate those relationships quickly. Can you say more why you wanted to see the same quality duplicate in this situation? Hearing that user story can help me plan workflows in future.
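A toy model of the behaviour described above — this is not hydrus's actual code, just a sketch of the 'one king atop a blob of duplicates' structure, showing why 'same quality' ends up behaving like a soft 'this is better':

```python
# Toy duplicate group: one 'king' per group. 'Same quality' promotes the
# current file to king exactly as 'this is better' would, per the post
# above -- only the merge options differ.
class DuplicateGroup:
    def __init__(self, king):
        self.king = king
        self.members = {king}

    def set_better(self, better, worse):
        self.members.update((better, worse))
        if self.king == worse:
            self.king = better  # dethrone the old king

    def set_same_quality(self, current, other):
        # same structural outcome as 'this is better'
        self.set_better(current, other)

g = DuplicateGroup('a.jpg')
g.set_same_quality('b.jpg', 'a.jpg')
print(g.king)  # b.jpg -- the 'same quality' file got promoted anyway
```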
(426.25 KB 958x538 64b0.png)

(8.03 KB 503x125 ClipboardImage.png)

>>8151 What do I do for this? I'm just trying to have my folder of 9,215 images tagged.
What installer does Hydrus use? I'm trying to set up an easy updating script with Chocolatey (since whoever maintains the winget repo is retarded).
>>8755 Figured it out, Github artifacts shows InnoSetup. Too bad Chocolatey's docs are half fucking fake and they don't do shit unless you give them money. This command might work, but choco's --install-arguments command doesn't work like the fuckwads claim it does. choco upgrade hydrus-network --ia='"/DIR=C:\x\Hydrus Network"'
(11.25 KB 644x68 ClipboardImage.png)

>>8756 No, actually, that command doesn't work, because the people behind chocolatey are lying fucking hoebags. Seeing this horseshit, after THEY THEMSELVES purposefully obfuscated this bullshit is FUCKING INFURIATING.
>>8745 The main thing I wanted to do is compare the number of files that were marked as a lower-quality duplicates across files from different url domains with files that aren't lower-quality duplicates (either kings, or alts, or no relationships) to see which domains tend to give me the highest ratio of files that end up being deleted later as bad-dupes, and which ones give me the lowest, so I know which ones I should be more adamant about downloading from, and which ones I should be more hesitant about. This doesn't really work that well if same-quality duplicates can also be considered "bad dupes" by hydrus, because that means I'm getting a bunch of files in the search that shouldn't be there, since they're not actually worse duplicates, but same-quality duplicates that hydrus just treats as worse arbitrarily. Basically, I was trying to create a ranking of sites that tend to give me the highest percentage of low-quality dupes and ones that give me the lowest. I can't do that if the information that hydrus has about file relationship is inaccurate though. It's also a bit confusing when I manually look at a file's relationships, because I always delete worse duplicates, but then I saw many files that are considered worse duplicates and I thought to myself "did I forget to delete it that time". Now this makes sense, but it still feels wrong to me somehow.
(2.77 KB 306x117 windozeerror.png)

>>8757 >2022 and still using windoze Time to dump the enemy's backdoor.
>>8753 The good catch-all solution here is to hit up services->review services and click 'refresh account' on the repository page. That forces all current errors to clear out and tries to do a basic network resync immediately. Assuming your internet connection and the server are ok again, it'll fix itself and you can upload again. >>8755 >>8756 >>8757 Yeah, Inno. There's some /silent or something commands I know you can give the installer to do it quietly, and in fact that's one reason the installer now defaults to not checking the 'open client' box on the last page, so some automatic installer a guy was making can work in the background. I'm afraid I am no expert in it though. If I can help you here, let me know what I can do. >>8758 Ah, yeah, sorry--there's no real detailed log kept or data structure made of your precise decisions. If you do always delete worse duplicates though, then I think you can get an analogue for this data you want. Any time you have a duplicate that is still in 'my files', you know that was set as 'same quality', since it wasn't deleted. Any time a duplicate is deleted, you know you set it as 'worse'. If you did something like: 'sort by modified time' (maybe a creator tag to reduce the number of results) system:file relationships: > 0 dupe relationships then you switch between 'my files' and 'all known files' (you need help->advanced mode on to see this), you'll see the local 'worse' (you set same quality) vs also the non-local worse (you set worse-and-delete), and see the difference. In future, btw, I'd like to have thumbnails know more about their duplicates so we can finally have 'sort files by duplicate status' and group them together a bit better in large file count pages. If you are trying to do this using manual database access in SQLite and want en masse statistical results, let me know. 
The database structure for this is a pain in the ass, and figuring out how to join it to my files vs all known files would be difficult going in blind.
>>8759 >Unironically being that guy Buddy, you just replied to a reply about easier updating with something that would make it ten times harder. Not to mention that hilariously dated meme. >>8760 Yeah, Choco passes /verysilent IIRC, and /DIR would work, but Powershell's quote parsing is fucking indecipherable, Choco's documentation on the matter is outright wrong, and I can't 'sudo' in cmd. I'm considering writing a script to just produce update PRs for the Winget repo myself, since it's starting to seem like that would be easier, but I don't want to go through all of Github's API shit.
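For anyone else fighting this: sidestepping choco's quoting entirely and calling the installer directly may be simpler. These are standard Inno Setup command-line switches (the hydrus installer is built with Inno, per the posts above); the directory path here is just an example, not anything hydrus-specific:

```bat
:: Silent install to a custom directory, bypassing choco's --install-arguments
:: quoting. /VERYSILENT suppresses the wizard, /SUPPRESSMSGBOXES the prompts.
Hydrus.Network.489.-.Windows.-.Installer.exe /VERYSILENT /SUPPRESSMSGBOXES /NORESTART /DIR="C:\x\Hydrus Network"
```

Inno treats /DIR as the default for the directory page, so quoting the whole value as shown handles paths with spaces.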
Pyside is nearly PyPy compatible (see https://bugreports.qt.io/browse/PYSIDE-535). What work would need to be done in Hydrus to support running under PyPy?