High disk space usage

  • High disk space usage

    Hi,

    I am using OX in a small setting: a single context with only two users. Nevertheless, today OX filled up my entire disk and I can't figure out why or how to stop it. (My workaround was to delete some other files to get my server working again ...)

    Code:
    root{527}#> /opt/open-xchange/sbin/listfilestore -A XXXX -P YYYY
    id path                              size reserved  used max-entities cur-entities
     2 file:/var/customers/ox-filestore 60000      200 10618         5000            1
    
    root{528}#> du -sh /var/customers/ox-filestore
    51G     /var/customers/ox-filestore
    As you can see in the code block above, the OX filestore is defined with a maximum size of 60GB. OX reports using about 10GB of this space, but the real disk usage is much higher than that: about 51GB!
    On top of that, when I download all files from OX Drive from all accounts, they consume only about 4.5GB. I know that OX stores some other data such as image thumbnails and mail attachments in the filestore - but can these files really require more disk space than the actual contents of OX Drive?
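
    (For a comparison in the same unit: the size/used columns of listfilestore are apparently in MB - 60000 matches the configured 60GB - so du -sm gives directly comparable numbers.)
    Code:
    du -sm /var/customers/ox-filestore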

    Can somebody explain these differences to me?
    Is there a way to lower the disk space used by OX?

    During my research, I came across the command-line tool "deleteinvisible". Is this the solution to my problem? Can anybody tell me what "not visible data inside a context" means?
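
    (Going by the flag conventions of the other sbin tools above, I assume it would be invoked roughly like this - I have not verified the exact options, deleteinvisible --help should list them:)
    Code:
    /opt/open-xchange/sbin/deleteinvisible -c 1 -A XXXX -P YYYY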

    I also noticed the high number of entries in the del_* database tables, e.g.:
    - del_infostore : 14095
    - del_infostore_document : 27341
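
    (These counts come from simple queries like the following; my OX schema is called ox-database_6 - adjust the name to your setup:)
    Code:
    mysql -u root -N ox-database_6 -e "SELECT COUNT(*) FROM del_infostore;"
    mysql -u root -N ox-database_6 -e "SELECT COUNT(*) FROM del_infostore_document;"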


    Thanks in advance
    Andreas
    Last edited by TheMagican; 10-13-2017, 09:05 PM.

  • #2
    As a first step, you can try "clearpreviewcache" to see how much space you get back.



    • #3
      Thanks for your suggestion - unfortunately this didn't help.

      Code:
      root{503}#> /opt/open-xchange/sbin/clearpreviewcache -c 1 -A XXX -P YYY
      All cache entries cleared for context 1
      
      root{504}#> /opt/open-xchange/sbin/listfilestore -A XXX -P YYY
      id path                              size reserved  used max-entities cur-entities
       2 file:/var/customers/ox-filestore 60000      200 10813         5000            1
      
      root{505}#> du -sh /var/customers/ox-filestore
      57G     /var/customers/ox-filestore
      Note that the size of the filestore has grown steadily over the last few days - by about 1GB a day.

      Just as an addition: I am using Debian Jessie with the latest version of OX from the repos.
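
      (If it helps with the debugging, the growth can be tracked with a daily cron entry along these lines - just a sketch, adjust paths as needed:)
      Code:
      # /etc/cron.d/ox-filestore-size
      0 3 * * * root echo "$(date -I) $(du -sm /var/customers/ox-filestore | cut -f1) MB" >> /var/log/ox-filestore-size.log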



      • #4
        I cannot explain this and certainly don't see it here.
        You probably need to dig into the directory structure and check what type of files and content is stored there to get a hint.



        • #5
          I examined the files in the file storage. There are obviously too many files to check them all manually, so I sorted them by size and had a look at the 50 biggest files.

          The first thing I noticed was that many of them have the same type (I ran "file" on them). After that, I copied some files of that type plus some other random files from those 50 to my local computer. For example, I found a large file that I had uploaded to my account - but I deleted it from OX some weeks ago and it is no longer in the OX "Trash" folder.
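
          (The pipeline to pick the biggest files and check their types was roughly along these lines:)
          Code:
          cd /var/customers/ox-filestore/1_ctx_store
          # 50 biggest files by size, then their detected types (hash file names contain no spaces)
          find . -type f -printf '%s\t%p\n' | sort -rn | head -n 50 | cut -f2- | xargs file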

          I also found out that the multiple files of the same type are all old versions of the same file: My phone automatically creates a backup of some files on a daily basis and uploads it to OX. I found lots of old versions of this backup file.

          Is there some setting that causes OX not to physically delete files after I delete them from OX? Are those the "invisible" files that I can remove with "deleteinvisible"?


          Again many thanks in advance!



          • #6
            Please, does anybody have a suggestion on how to solve or work around this problem?
            I had to delete more files on my server to keep it running - but I don't think this will be possible a third time.

            Code:
            root{522}#> /opt/open-xchange/sbin/listfilestore -A XXX -P YYY
            id path                              size reserved  used max-entities cur-entities
             2 file:/var/customers/ox-filestore 60000      200 10647         5000            1
            
            root{523}#> du -sh /var/customers/ox-filestore
            67G     /var/customers/ox-filestore
            As you can see above, the consumed space is now larger than the maximum size the filestore should have.

            Is there perhaps a setting that I messed up that causes OX not to physically delete files?

            Leaving the root cause aside, is there perhaps a workaround, e.g. moving the filestore to a "new" location?

            I also tried the "deleteinvisible" command, without success.

            Is there a way to determine which hashed files in the filestore are still in use and which aren't - e.g. by querying the database? That would allow me to script something to delete the stale ones ...


            Thanks in advance



            • #7
              I had a look into the ox-database:

              The table del_infostore_document contains nearly 28k entries - but none of them has a file_store_location set. In fact, most of the columns, e.g. title, description and file name, are always NULL.

              In contrast, the infostore_document table seems to contain valid references to the hashed files in my filestore. The number of file references is also nearly the same as the total number of files - apart from the fact that for every file there are two rows, one with and one without a file reference. I think the remaining difference is only caused by files that have more than one version.
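
              (For the record, the distribution of rows with and without a reference can be checked with a query like this - schema name as in my setup:)
              Code:
              mysql -u root ox-database_6 -e "SELECT file_store_location IS NOT NULL AS has_ref, COUNT(*) FROM infostore_document GROUP BY has_ref;"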

              Would it be safe to delete all files from the filestore, except the ones referenced from the infostore_document table?

              Edit:
              It seems that at least the following tables/columns contain references to hashed files:
              Code:
              infostore_document.file_store_location
              prg_contacts.vCardId
              snippet.refId
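
              (So a list of all referenced locations could presumably be pulled with something like the following - no guarantee that this is complete, there may be more referencing tables:)
              Code:
              mysql -u root -N ox-database_6 -e "
                SELECT file_store_location FROM infostore_document WHERE file_store_location IS NOT NULL
                UNION SELECT vCardId FROM prg_contacts WHERE vCardId IS NOT NULL
                UNION SELECT refId FROM snippet WHERE refId IS NOT NULL;" > referenced_files.txt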
              Last edited by TheMagican; 10-26-2017, 08:06 PM.



              • #8
                Unfortunately I could not wait any longer for a real solution, so I created the following script. It dumps the OX database to an easily searchable SQL file and, for every file in the filestore, checks whether there is some reference to it in the dump. The result is a shell script containing commands to move all unreferenced files out of the filestore. I backed up these files and then deleted them from the server.

                Hopefully this didn't break my OX installation or cause data loss *fingers crossed*. For verification, I downloaded all calendars/contacts/tasks and files before and after cleaning the server. Watching the logs didn't show any errors, and the downloaded files were identical (diff -rq folder-1 folder-2). So I guess everything is alright (?).

                Code:
                #!/bin/bash

                # OX context filestore and a safe folder to move unreferenced files into
                FILESTORE="/var/customers/ox-filestore/1_ctx_store"
                SAFE_FOLDER="/var/customers/ox-filestore/1_ctx_store_safe"
                GENERATED_FILE="../safe_cleanup.sh"
                DB_DUMP="../ox-database.sql"
                FILES="../files.txt"

                cd "$FILESTORE"

                echo "Dumping database"
                mysqldump -u root --no-create-info ox-database_6 > "$DB_DUMP"

                echo "Listing hashed files in filestore"
                find . -type f > "$FILES"

                # generate a reviewable script that only *moves* unreferenced files
                echo '#!/bin/bash' > "$GENERATED_FILE"
                echo 'mkdir -p "'"$SAFE_FOLDER"'"' >> "$GENERATED_FILE"
                echo 'cd "'"$FILESTORE"'"' >> "$GENERATED_FILE"

                while read -r f ; do
                        # a file counts as referenced if its hash (= file name)
                        # appears anywhere in the database dump
                        if grep -q "$(basename "$f")" "$DB_DUMP" ; then
                          echo '# valid file: '"$f" | tee -a "$GENERATED_FILE"
                        else
                          echo '# invalid file: '"$f" | tee -a "$GENERATED_FILE"

                          move_to="$SAFE_FOLDER/$(dirname "$f")"
                          echo "mkdir -p \"$move_to\"" | tee -a "$GENERATED_FILE"
                          echo "mv -i \"$f\" \"$move_to\"" | tee -a "$GENERATED_FILE"
                        fi
                done < "$FILES"
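
                (The generated safe_cleanup.sh only moves files into the safe folder, nothing is deleted; after reviewing it, it is run with:)
                Code:
                bash /var/customers/ox-filestore/safe_cleanup.sh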
                In my case, the filestore contained 6627 files, and 413 of them were unreferenced. These unreferenced files consumed 58GB - while the full filestore had about 68GB!
                Besides the massive amount of wasted space, it was interesting to see that the problem of OX not physically deleting files under some circumstances must be pretty old: some of the files were dated 8 November 2015.

                So my problem isn't acute any more - but the underlying problem of OX sometimes not deleting files correctly still persists ...

                I will have a look at the problematic files - perhaps I can find something they have in common to narrow down the real problem.



                • #9
                  It seems my script did work and no files were lost.

                  Unfortunately, I still can't find the root cause of the problem, as I already have 14 new unreferenced files.
                  Any new ideas on this problem?



                  • #10
                    Could it be that those invalid files are outdated versions of files? Anyway, as they do not seem to appear verbatim in the DB dump, I'm not sure whether they'd be referenced in some other way (e.g. by deriving their file name from a hash sum?) ... it would be nice if someone from OX could clarify/confirm whether this is a bug or a feature of OX.



                    • #11
                      When I wrote my last post, I inspected the unreferenced files that I had moved away from my server: I think nearly all of them were old versions of files, or files that I had deleted (a long time ago).

                      But after running into this issue again today, I think this is a bug in the WebDAV interface of OX. I am using a file synchronization program that keeps a local folder and OX in sync via WebDAV. In my case, almost all changes to files in the OX filestore are made via WebDAV, not the web UI.

                      I am pretty sure this bug concerns (at least!) failed uploads via WebDAV: I had a look at the files that are currently unreferenced and need to be deleted/moved. The newest one was modified yesterday evening and turned out to be a damaged ZIP file. I remember that I tried to upload a very large (1.5GB) ZIP file via WebDAV yesterday and the upload aborted with HTTP 502.
                      When I execute
                      Code:
                      head -n 10000 $FILE | md5sum
                      on the damaged file on the server and on the original file, I get the same MD5 sum!
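
                      (To confirm the truncation, a byte-wise comparison also works - the file names here are placeholders for the damaged copy from the server and the original:)
                      Code:
                      # cmp reports "EOF on damaged.zip" if the server copy is simply a
                      # truncated prefix of the original rather than a corrupted one
                      cmp damaged.zip original.zip
                      ls -l damaged.zip original.zip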

                      With a concrete date/time to search for, I had a look at the OX logs and actually found an internal error including a stack trace - see the attached file ox_webdav_upload_error.txt.
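
                      (In case someone wants to reproduce the search: I simply grepped the logs for the timestamp of the failed upload; the log location below is the one on my system, adjust as needed.)
                      Code:
                      TS="YYYY-MM-DD HH:MM"   # timestamp of the failed upload
                      grep -A 50 "$TS" /var/log/open-xchange/open-xchange.log.0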

                      Another confusing thing: when I upload a new version of a file via WebDAV, I don't see a new version of that file in the OX web UI. In fact, almost all of the files in OX have only one version (the current one).

                      @OX-Team / Wolfgang Rosenauer: I am a professional Java software engineer and Linux geek. If I can help in any way to track this issue down, please let me know!



                      • #12
                        At least the issue with our customer should not be related to WebDAV; they use OX almost exclusively through the web interface ...

