How can I find out which files are lost through a ddrescue recovery attempt?

I am in the process of salvaging data from a 1 TB failing drive (asked about it in Procedure to replace a hard disk?). I have done ddrescue from a system rescue USB, with a resulting error size of 557568 B in 191 errors, probably all in /home (I assume what it calls "errors" are not single bad sectors, but consecutive runs of them).
Now, the several guides I've seen around suggest doing e2fsck on the new disk, and I expected this to somehow find that some files have been assigned "blank sectors/blocks", to the effect of at least knowing which files could not be saved whole. But no errors were found at all (I ran it without -y to make sure I didn't miss anything). Now I am running it again with -c, but at 95% no errors have been found so far; I guess I have a new drive with some normal-looking files with zeroed or random pieces inside, undetectable until one day I open them with the corresponding software, or Linux Mint needs them.
Can I do anything with the old/new drives in order to obtain a list of possibly corrupted files? I don't know how many files could be affected, since those 191 errors could span several files, but at least the total size is not big; I am mostly concerned about a big bunch of old family photos and videos (1+ MB each), the rest is probably irrelevant or was backed up recently.
Update: the new pass of e2fsck did give something new of which I understand nothing:
Block bitmap differences: +231216947 +(231216964--231216965) +231216970 +231217707 +231217852 +(231217870--231217871) +231218486
Fix<y>? yes
Free blocks count wrong for group #7056 (497, counted=488).
Fix<y>? yes
Free blocks count wrong (44259598, counted=44259589).
Fix<y>? yes
hard-disk data-recovery ddrescue
From what I read here and there, I understand a bit the "Block bitmap differences" stuff, but I fail to understand if I could use it for my problem of finding the corrupted files.
– David Sevilla
Apr 26 '17 at 15:59
You'll need the block numbers of all encountered bad blocks (ddrescue should have given you a list, I hope you saved it), and then you'll need to find out which files make use of these blocks (see e.g. here). e2fsck doesn't help, the bad blocks will now just be empty.
– dirkt
Apr 26 '17 at 16:08
If you mean the mapfile it produces, I do. Do you want to put your comment as an answer so I can accept it?
– David Sevilla
Apr 26 '17 at 16:31
See this Q and the usage of ddrutility, which does pretty much what you want: askubuntu.com/q/904569/271
– Andrea Lazzarotto
Apr 26 '17 at 21:55
edited Apr 26 '17 at 13:32 by terdon♦
asked Apr 26 '17 at 13:14 by David Sevilla
2 Answers
You'll need the block numbers of all encountered bad blocks (ddrescue should have given you a list, I hope you saved it), and then you'll need to find out which files make use of these blocks (see e.g. here). You may want to script this if there are a lot of bad blocks.
e2fsck doesn't help: it just checks the consistency of the file system itself, so it will only act if the bad blocks contain "administrative" file system information. The bad blocks inside files will just be empty.
Edit
Ok, let's figure out the block size thingy. Let's make a trial filesystem with 512-byte device blocks:
$ dd if=/dev/zero of=fs bs=512 count=200
$ /sbin/mke2fs fs
$ ll fs
-rw-r--r-- 1 dirk dirk 102400 Apr 27 10:03 fs
$ /sbin/tune2fs -l fs
...
Block count: 100
...
Block size: 1024
Fragment size: 1024
Blocks per group: 8192
Fragments per group: 8192
So the filesystem block size is 1024, and we've 100 of those filesystem blocks (and 200 512-byte device blocks). Rescue it:
$ ddrescue -b512 fs fs.new fs.log
GNU ddrescue 1.19
Press Ctrl-C to interrupt
rescued: 102400 B, errsize: 0 B, current rate: 102 kB/s
ipos: 65536 B, errors: 0, average rate: 102 kB/s
opos: 65536 B, run time: 1 s, successful read: 0 s ago
Finished
$ cat fs.log
# Rescue Logfile. Created by GNU ddrescue version 1.19
# Command line: ddrescue fs fs.new fs.log
# Start time: 2017-04-27 10:04:03
# Current time: 2017-04-27 10:04:03
# Finished
# current_pos current_status
0x00010000 +
# pos size status
0x00000000 0x00019000 +
$ printf "%i\n" 0x00019000
102400
So the hex ddrescue units are in bytes, not blocks of any kind. Finally, let's see what debugfs uses. First, make a file and find its contents:
$ sudo mount -o loop fs /mnt/tmp
$ sudo chmod go+rwx /mnt/tmp/
$ echo 'abcdefghijk' > /mnt/tmp/foo
$ sudo umount /mnt/tmp
$ hexdump -C fs
...
00005400 61 62 63 64 65 66 67 68 69 6a 6b 0a 00 00 00 00 |abcdefghijk.....|
00005410 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
So the byte address of the data is 0x5400. Convert this to 1024-byte filesystem blocks:
$ printf "%i\n" 0x5400
21504
$ expr 21504 / 1024
21
and let's also try the block range while we are at it:
$ /sbin/debugfs fs
debugfs 1.43.3 (04-Sep-2016)
debugfs: testb 0
testb: Invalid block number 0
debugfs: testb 1
Block 1 marked in use
debugfs: testb 99
Block 99 not in use
debugfs: testb 100
Illegal block number passed to ext2fs_test_block_bitmap #100 for block bitmap for fs
Block 100 not in use
debugfs: testb 21
Block 21 marked in use
debugfs: icheck 21
Block Inode number
21 12
debugfs: ncheck 12
Inode Pathname
12 //foo
So that works out as expected, except block 0 is invalid, probably because the file system metadata is there. So, for your byte address 0x30F8A71000 from ddrescue, assuming you worked on the whole disk and not a partition, we subtract the byte address of the partition start:
210330128384 - 7815168 * 512 = 206328762368
Divide that by the tune2fs block size to get the filesystem block (note that since multiple physical, possibly damaged, blocks make up a filesystem block, the numbers needn't be exact multiples):
206328762368 / 4096 = 50373233.0
and that's the block you should test with debugfs.
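The steps above can be scripted end to end. Here is a rough sketch (the mapfile name, the device name /dev/sdb5, and the numeric values are only examples taken from this thread; substitute your own fdisk and tune2fs output):

```shell
#!/bin/bash
# Sketch: list files overlapping ddrescue's bad regions on an ext2/3/4
# filesystem. All names and numbers below are examples; replace them.
MAPFILE=rescue.map    # the ddrescue mapfile
DEV=/dev/sdb5         # the *rescued* copy of the partition (hypothetical name)
PART_START=7815168    # partition start sector, from: fdisk -l
SECTOR=512            # device sector size
FSBLOCK=4096          # filesystem block size, from: tune2fs -l

# Convert an absolute byte offset (as found in the mapfile) into a
# filesystem block number, as in the calculation above.
byte_to_fsblock() {
    echo $(( ($1 - PART_START * SECTOR) / FSBLOCK ))
}

# Mapfile data lines look like:  0x30F8A71000  0x1000  -
# where status '-' marks an unreadable region.
awk '$3 == "-" { print $1, $2 }' "$MAPFILE" 2>/dev/null |
while read -r pos size; do
    first=$(byte_to_fsblock $(( pos )))
    last=$(byte_to_fsblock $(( pos + size - 1 )))
    for blk in $(seq "$first" "$last"); do
        # icheck: block -> inode;  ncheck: inode -> pathname
        inode=$(debugfs -R "icheck $blk" "$DEV" 2>/dev/null | awk 'NR == 2 { print $2 }')
        if [ -n "$inode" ] && [ "$inode" != "<block" ]; then
            debugfs -R "ncheck $inode" "$DEV" 2>/dev/null | awk 'NR == 2 { print $2 }'
        fi
    done
done | sort -u   # each affected file printed once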
Great. Now I need a little help figuring out those numbers (my first attempts are not giving me anything useful); I'll look around and open a new question on that if needed. But first, maybe I should be doing the debugfs stuff on the old, failing disk instead of the new one?
– David Sevilla
Apr 26 '17 at 19:39
No, use the new one, or the image if you've made one. Be careful not to mount the new disk and change anything on it before you have identified the files.
– dirkt
Apr 26 '17 at 19:45
Ok, that made sense. Now I need to figure out the correspondence between the binary numbers in the ddrescue log file and the blocks in the partition (which is not the first one on the disk). The page you suggested is a good start, but I need to do more than what is said there.
– David Sevilla
Apr 26 '17 at 19:48
You just need the block number of the start of the partition from fdisk etc., and then subtract it from the absolute block numbers.
– dirkt
Apr 26 '17 at 20:38
Well, I tried that before... fdisk gives start=7815168, the first "-" block from ddrescue is 0x30F8A71000, but subtraction gives 210322313216, which testb complains about: "Illegal block number ... for /dev/sc5". I also tried dividing that position by 512 (=0x200) or even by 4096 (=0x1000) (the latter not making sense, because the other positions are only multiples of 0x200, not 0x1000). I guess I'm messing up the units somehow.
– David Sevilla
Apr 26 '17 at 20:51
The easiest way, although not necessarily the fastest or most efficient, would be to:
- Run ddrescue normally to rescue the whole drive, and be sure to preserve the mapfile.
- Re-run ddrescue in fill mode to mark bad sectors with a unique pattern. In order to alleviate false positives, you want to use a pattern that would not normally exist in any file. They recommend something like this:
ddrescue --fill-mode=- <(printf "BAD-SECTOR ") outfile mapfile
- Mount the rescued image/disk with its native operating system.
- Use an appropriate operating system utility, like e2fsck on Linux, to verify and possibly repair the filesystem directory structure. Any bad sectors that fall in filesystem structures first need to be resolved before you can go looking for all the file corruption. Repairing directory structures is an art in and of itself, which is out of this answer's scope.
- Use an appropriate utility provided by the operating system, like grep, to scan all the files on the filesystem and list those which contain the unique pattern that fill mode marked them with.
- If necessary, you can examine the files with an appropriate editor to locate the position of the actual data loss by searching for the unique pattern within the file(s).
This is operating-system independent, so I'm intentionally not giving details that vary depending on the specific filesystem type. I first had to do this on an NTFS filesystem using Windows utilities, but it's the same idea on ext3/4, etc.
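On Linux, the marking and scanning steps above might be sketched like this (a sketch only: rescued.img, rescue.map, and the mount point are placeholder names, the mount/umount steps need root, and the filesystem should be checked with e2fsck before mounting):

```shell
#!/bin/bash
# Sketch of the fill-and-scan approach; all file names are placeholders.
IMG=rescued.img       # image produced by the original ddrescue run
MAP=rescue.map        # the mapfile from the same run
MARKER="BAD-SECTOR "  # pattern unlikely to exist in any real file

# Fill mode: overwrite every region the mapfile lists as bad ('-')
# with the marker pattern. Only the bad regions are touched.
printf '%s' "$MARKER" > marker.bin
ddrescue --fill-mode=- marker.bin "$IMG" "$MAP"

# Mount the image read-only so nothing gets changed.
sudo mkdir -p /mnt/rescue
sudo mount -o loop,ro "$IMG" /mnt/rescue

# List every file containing the marker, i.e. every file overlapping a
# bad sector. --binary-files=text makes grep scan binary files too.
grep -rl --binary-files=text "BAD-SECTOR" /mnt/rescue > corrupted-files.txt

sudo umount /mnt/rescue
```

The marker deliberately has no trailing newline, so repeated fills tile it across the whole bad region without gaps.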
You'll need the block numbers of all encountered bad blocks (ddrescue
should have given you a list, I hope you saved it), and then you'll need to find out which files make use of these blocks (see e.g. here). You may want to script this if there are a lot of bad blocks.
e2fsck
doesn't help, it just checks consistency of the file system itself, so it will only act of the bad blocks contain "adminstrative" file system information.
The bad blocks in the files will just be empty.
Edit
Ok, let's figure out the block size thingy. Let's make a trial filesystem with 512-byte device blocks:
$ dd if=/dev/zero of=fs bs=512 count=200
$ /sbin/mke2fs fs
$ ll fs
-rw-r--r-- 1 dirk dirk 102400 Apr 27 10:03 fs
$ /sbin/tune2fs -l fs
...
Block count: 100
...
Block size: 1024
Fragment size: 1024
Blocks per group: 8192
Fragments per group: 8192
So the filesystem block size is 1024, and we've 100 of those filesystem blocks (and 200 512-byte device blocks). Rescue it:
$ ddrescue -b512 fs fs.new fs.log
GNU ddrescue 1.19
Press Ctrl-C to interrupt
rescued: 102400 B, errsize: 0 B, current rate: 102 kB/s
ipos: 65536 B, errors: 0, average rate: 102 kB/s
opos: 65536 B, run time: 1 s, successful read: 0 s ago
Finished
$ cat fs.log
# Rescue Logfile. Created by GNU ddrescue version 1.19
# Command line: ddrescue fs fs.new fs.log
# Start time: 2017-04-27 10:04:03
# Current time: 2017-04-27 10:04:03
# Finished
# current_pos current_status
0x00010000 +
# pos size status
0x00000000 0x00019000 +
$ printf "%in" 0x00019000
102400
So the hex ddrescue
units are in bytes, not any blocks. Finally, let's see what debugfs
uses. First, make a file and find its contents:
$ sudo mount -o loop fs /mnt/tmp
$ sudo chmod go+rwx /mnt/tmp/
$ echo 'abcdefghijk' > /mnt/tmp/foo
$ sudo umount /mnt/tmp
$ hexdump -C fs
...
00005400 61 62 63 64 65 66 67 68 69 6a 6b 0a 00 00 00 00 |abcdefghijk.....|
00005410 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
So the byte address of the data is 0x5400
. Convert this to 1024-byte filesystem blocks:
$ printf "%in" 0x5400
21504
$ expr 21504 / 1024
21
and let's also try the block range while we are at it:
$ /sbin/debugfs fs
debugfs 1.43.3 (04-Sep-2016)
debugfs: testb 0
testb: Invalid block number 0
debugfs: testb 1
Block 1 marked in use
debugfs: testb 99
Block 99 not in use
debugfs: testb 100
Illegal block number passed to ext2fs_test_block_bitmap #100 for block bitmap for fs
Block 100 not in use
debugfs: testb 21
Block 21 marked in use
debugfs: icheck 21
Block Inode number
21 12
debugfs: ncheck 12
Inode Pathname
12 //foo
So that works out as expected, except block 0 is invalid, probably because the file system metadata is there. So, for your byte address 0x30F8A71000
from ddrescue
, assuming you worked on the whole disk and not a partition, we subtract the byte address of the partition start
210330128384 - 7815168 * 512 = 206328762368
Divide that by the tune2fs
block size to get the filesystem block (note that since multiple physical, possibly damaged, blocks make up a filesystem block, numbers needn't be exact multiples):
206328762368 / 4096 = 50373233.0
and that's the block you should test with debugfs
.
Great. Now I need a little help figuring out those numbers (my first attempts are not giving me anything useful), I'll look around and open a new question on that if needed. But first, maybe I should be doing thedebugfs
stuff to the old, failing disk instead of the new one?
– David Sevilla
Apr 26 '17 at 19:39
No, use the new one resp. the image if you've made one. Be careful not to mount the new disk and change anything on it before you have identified the files.
– dirkt
Apr 26 '17 at 19:45
Ok, that made sense. Now I need to figure out the correspondence between the binary numbers in theddrescue
log file and the blocks in the partition (which is not the first one in the disk). The page you suggested is a good start, but I need to do more than what is said there.
– David Sevilla
Apr 26 '17 at 19:48
You just need the block number of the start of the partition fromfdisk
etc., and then subtract it from the absolute block numbers.
– dirkt
Apr 26 '17 at 20:38
Well, I tried that before...fdisk
gives start=7815168, the first "-" block fromddrescue
is 0x30F8A71000, but subtraction gives 210322313216 whichtestb
complains about: "Illegal block number ... for /dev/sc5". I also tried dividing that position by 512(=0x200) or even by 4096(=0x1000) (the latter not making sense because the other positions are not multiples of 1000, only 200). I guess I'm messing up the units somehow.
– David Sevilla
Apr 26 '17 at 20:51
|
show 6 more comments
You'll need the block numbers of all encountered bad blocks (ddrescue
should have given you a list, I hope you saved it), and then you'll need to find out which files make use of these blocks (see e.g. here). You may want to script this if there are a lot of bad blocks.
e2fsck
doesn't help, it just checks consistency of the file system itself, so it will only act of the bad blocks contain "adminstrative" file system information.
The bad blocks in the files will just be empty.
Edit
Ok, let's figure out the block size thingy. Let's make a trial filesystem with 512-byte device blocks:
$ dd if=/dev/zero of=fs bs=512 count=200
$ /sbin/mke2fs fs
$ ll fs
-rw-r--r-- 1 dirk dirk 102400 Apr 27 10:03 fs
$ /sbin/tune2fs -l fs
...
Block count: 100
...
Block size: 1024
Fragment size: 1024
Blocks per group: 8192
Fragments per group: 8192
So the filesystem block size is 1024, and we've 100 of those filesystem blocks (and 200 512-byte device blocks). Rescue it:
$ ddrescue -b512 fs fs.new fs.log
GNU ddrescue 1.19
Press Ctrl-C to interrupt
rescued: 102400 B, errsize: 0 B, current rate: 102 kB/s
ipos: 65536 B, errors: 0, average rate: 102 kB/s
opos: 65536 B, run time: 1 s, successful read: 0 s ago
Finished
$ cat fs.log
# Rescue Logfile. Created by GNU ddrescue version 1.19
# Command line: ddrescue fs fs.new fs.log
# Start time: 2017-04-27 10:04:03
# Current time: 2017-04-27 10:04:03
# Finished
# current_pos current_status
0x00010000 +
# pos size status
0x00000000 0x00019000 +
$ printf "%in" 0x00019000
102400
So the hex ddrescue
units are in bytes, not any blocks. Finally, let's see what debugfs
uses. First, make a file and find its contents:
$ sudo mount -o loop fs /mnt/tmp
$ sudo chmod go+rwx /mnt/tmp/
$ echo 'abcdefghijk' > /mnt/tmp/foo
$ sudo umount /mnt/tmp
$ hexdump -C fs
...
00005400 61 62 63 64 65 66 67 68 69 6a 6b 0a 00 00 00 00 |abcdefghijk.....|
00005410 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
So the byte address of the data is 0x5400
. Convert this to 1024-byte filesystem blocks:
$ printf "%in" 0x5400
21504
$ expr 21504 / 1024
21
and let's also try the block range while we are at it:
$ /sbin/debugfs fs
debugfs 1.43.3 (04-Sep-2016)
debugfs: testb 0
testb: Invalid block number 0
debugfs: testb 1
Block 1 marked in use
debugfs: testb 99
Block 99 not in use
debugfs: testb 100
Illegal block number passed to ext2fs_test_block_bitmap #100 for block bitmap for fs
Block 100 not in use
debugfs: testb 21
Block 21 marked in use
debugfs: icheck 21
Block Inode number
21 12
debugfs: ncheck 12
Inode Pathname
12 //foo
So that works out as expected, except block 0 is invalid, probably because the file system metadata is there. So, for your byte address 0x30F8A71000
from ddrescue
, assuming you worked on the whole disk and not a partition, we subtract the byte address of the partition start
210330128384 - 7815168 * 512 = 206328762368
Divide that by the tune2fs
block size to get the filesystem block (note that since multiple physical, possibly damaged, blocks make up a filesystem block, numbers needn't be exact multiples):
206328762368 / 4096 = 50373233.0
and that's the block you should test with debugfs
.
Great. Now I need a little help figuring out those numbers (my first attempts are not giving me anything useful), I'll look around and open a new question on that if needed. But first, maybe I should be doing thedebugfs
stuff to the old, failing disk instead of the new one?
– David Sevilla
Apr 26 '17 at 19:39
No, use the new one resp. the image if you've made one. Be careful not to mount the new disk and change anything on it before you have identified the files.
– dirkt
Apr 26 '17 at 19:45
Ok, that made sense. Now I need to figure out the correspondence between the binary numbers in theddrescue
log file and the blocks in the partition (which is not the first one in the disk). The page you suggested is a good start, but I need to do more than what is said there.
– David Sevilla
Apr 26 '17 at 19:48
You just need the block number of the start of the partition fromfdisk
etc., and then subtract it from the absolute block numbers.
– dirkt
Apr 26 '17 at 20:38
Well, I tried that before...fdisk
gives start=7815168, the first "-" block fromddrescue
is 0x30F8A71000, but subtraction gives 210322313216 whichtestb
complains about: "Illegal block number ... for /dev/sc5". I also tried dividing that position by 512(=0x200) or even by 4096(=0x1000) (the latter not making sense because the other positions are not multiples of 1000, only 200). I guess I'm messing up the units somehow.
– David Sevilla
Apr 26 '17 at 20:51
|
show 6 more comments
You'll need the block numbers of all encountered bad blocks (ddrescue
should have given you a list, I hope you saved it), and then you'll need to find out which files make use of these blocks (see e.g. here). You may want to script this if there are a lot of bad blocks.
e2fsck
doesn't help, it just checks consistency of the file system itself, so it will only act of the bad blocks contain "adminstrative" file system information.
The bad blocks in the files will just be empty.
Edit
Ok, let's figure out the block size thingy. Let's make a trial filesystem with 512-byte device blocks:
$ dd if=/dev/zero of=fs bs=512 count=200
$ /sbin/mke2fs fs
$ ll fs
-rw-r--r-- 1 dirk dirk 102400 Apr 27 10:03 fs
$ /sbin/tune2fs -l fs
...
Block count: 100
...
Block size: 1024
Fragment size: 1024
Blocks per group: 8192
Fragments per group: 8192
So the filesystem block size is 1024, and we've 100 of those filesystem blocks (and 200 512-byte device blocks). Rescue it:
$ ddrescue -b512 fs fs.new fs.log
GNU ddrescue 1.19
Press Ctrl-C to interrupt
rescued: 102400 B, errsize: 0 B, current rate: 102 kB/s
ipos: 65536 B, errors: 0, average rate: 102 kB/s
opos: 65536 B, run time: 1 s, successful read: 0 s ago
Finished
$ cat fs.log
# Rescue Logfile. Created by GNU ddrescue version 1.19
# Command line: ddrescue fs fs.new fs.log
# Start time: 2017-04-27 10:04:03
# Current time: 2017-04-27 10:04:03
# Finished
# current_pos current_status
0x00010000 +
# pos size status
0x00000000 0x00019000 +
$ printf "%in" 0x00019000
102400
So the hex ddrescue
units are in bytes, not any blocks. Finally, let's see what debugfs
uses. First, make a file and find its contents:
$ sudo mount -o loop fs /mnt/tmp
$ sudo chmod go+rwx /mnt/tmp/
$ echo 'abcdefghijk' > /mnt/tmp/foo
$ sudo umount /mnt/tmp
$ hexdump -C fs
...
00005400 61 62 63 64 65 66 67 68 69 6a 6b 0a 00 00 00 00 |abcdefghijk.....|
00005410 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
So the byte address of the data is 0x5400
. Convert this to 1024-byte filesystem blocks:
$ printf "%in" 0x5400
21504
$ expr 21504 / 1024
21
and let's also try the block range while we are at it:
$ /sbin/debugfs fs
debugfs 1.43.3 (04-Sep-2016)
debugfs: testb 0
testb: Invalid block number 0
debugfs: testb 1
Block 1 marked in use
debugfs: testb 99
Block 99 not in use
debugfs: testb 100
Illegal block number passed to ext2fs_test_block_bitmap #100 for block bitmap for fs
Block 100 not in use
debugfs: testb 21
Block 21 marked in use
debugfs: icheck 21
Block Inode number
21 12
debugfs: ncheck 12
Inode Pathname
12 //foo
So that works out as expected, except block 0 is invalid, probably because the file system metadata is there. So, for your byte address 0x30F8A71000
from ddrescue
, assuming you worked on the whole disk and not a partition, we subtract the byte address of the partition start
210330128384 - 7815168 * 512 = 206328762368
Divide that by the tune2fs
block size to get the filesystem block (note that since multiple physical, possibly damaged, blocks make up a filesystem block, numbers needn't be exact multiples):
206328762368 / 4096 = 50373233.0
and that's the block you should test with debugfs
.
You'll need the block numbers of all encountered bad blocks (ddrescue
should have given you a list, I hope you saved it), and then you'll need to find out which files make use of these blocks (see e.g. here). You may want to script this if there are a lot of bad blocks.
e2fsck
doesn't help; it just checks the consistency of the file system itself, so it will only act if the bad blocks contain "administrative" file system information.
The bad blocks in the files will just be empty.
Edit
Ok, let's figure out the block size thingy. Let's make a trial filesystem with 512-byte device blocks:
$ dd if=/dev/zero of=fs bs=512 count=200
$ /sbin/mke2fs fs
$ ll fs
-rw-r--r-- 1 dirk dirk 102400 Apr 27 10:03 fs
$ /sbin/tune2fs -l fs
...
Block count: 100
...
Block size: 1024
Fragment size: 1024
Blocks per group: 8192
Fragments per group: 8192
So the filesystem block size is 1024, and we've 100 of those filesystem blocks (and 200 512-byte device blocks). Rescue it:
$ ddrescue -b512 fs fs.new fs.log
GNU ddrescue 1.19
Press Ctrl-C to interrupt
rescued: 102400 B, errsize: 0 B, current rate: 102 kB/s
ipos: 65536 B, errors: 0, average rate: 102 kB/s
opos: 65536 B, run time: 1 s, successful read: 0 s ago
Finished
$ cat fs.log
# Rescue Logfile. Created by GNU ddrescue version 1.19
# Command line: ddrescue fs fs.new fs.log
# Start time: 2017-04-27 10:04:03
# Current time: 2017-04-27 10:04:03
# Finished
# current_pos current_status
0x00010000 +
# pos size status
0x00000000 0x00019000 +
$ printf "%i\n" 0x00019000
102400
So the hex ddrescue
units are in bytes, not any blocks. Finally, let's see what debugfs
uses. First, make a file and find its contents:
$ sudo mount -o loop fs /mnt/tmp
$ sudo chmod go+rwx /mnt/tmp/
$ echo 'abcdefghijk' > /mnt/tmp/foo
$ sudo umount /mnt/tmp
$ hexdump -C fs
...
00005400 61 62 63 64 65 66 67 68 69 6a 6b 0a 00 00 00 00 |abcdefghijk.....|
00005410 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
So the byte address of the data is 0x5400
. Convert this to 1024-byte filesystem blocks:
$ printf "%i\n" 0x5400
21504
$ expr 21504 / 1024
21
and let's also try the block range while we are at it:
$ /sbin/debugfs fs
debugfs 1.43.3 (04-Sep-2016)
debugfs: testb 0
testb: Invalid block number 0
debugfs: testb 1
Block 1 marked in use
debugfs: testb 99
Block 99 not in use
debugfs: testb 100
Illegal block number passed to ext2fs_test_block_bitmap #100 for block bitmap for fs
Block 100 not in use
debugfs: testb 21
Block 21 marked in use
debugfs: icheck 21
Block Inode number
21 12
debugfs: ncheck 12
Inode Pathname
12 //foo
So that works out as expected, except block 0 is invalid, probably because the file system metadata is there. So, for your byte address 0x30F8A71000
from ddrescue
, assuming you worked on the whole disk and not a partition, we subtract the byte address of the partition start
210330128384 - 7815168 * 512 = 206328762368
Divide that by the tune2fs
block size to get the filesystem block (note that since multiple physical, possibly damaged, blocks make up a filesystem block, numbers needn't be exact multiples):
206328762368 / 4096 = 50373233.0
and that's the block you should test with debugfs
.
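That arithmetic is easy to get wrong by hand, so here is the same conversion wrapped in a tiny helper (the function and variable names are mine; the values are the ones from this thread):

```shell
# Convert an absolute ddrescue byte position to a filesystem block number.
PART_START_SECTOR=7815168   # partition start, from fdisk (512-byte sectors)
SECTOR_SIZE=512
BLKSIZE=4096                # filesystem block size, from tune2fs -l

byte_to_fsblock() {
    echo $(( ($1 - PART_START_SECTOR * SECTOR_SIZE) / BLKSIZE ))
}

byte_to_fsblock 0x30F8A71000   # prints 50373233, ready for debugfs's testb/icheck
```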
answered Apr 26 '17 at 19:32, edited Apr 27 '17 at 10:06 – dirkt
Great. Now I need a little help figuring out those numbers (my first attempts are not giving me anything useful), I'll look around and open a new question on that if needed. But first, maybe I should be doing the debugfs
stuff to the old, failing disk instead of the new one?
– David Sevilla
Apr 26 '17 at 19:39
No, use the new one resp. the image if you've made one. Be careful not to mount the new disk and change anything on it before you have identified the files.
– dirkt
Apr 26 '17 at 19:45
Ok, that made sense. Now I need to figure out the correspondence between the binary numbers in the ddrescue
log file and the blocks in the partition (which is not the first one in the disk). The page you suggested is a good start, but I need to do more than what is said there.
– David Sevilla
Apr 26 '17 at 19:48
You just need the block number of the start of the partition from fdisk
etc., and then subtract it from the absolute block numbers.
– dirkt
Apr 26 '17 at 20:38
Well, I tried that before... fdisk
gives start=7815168, the first "-" block from ddrescue
is 0x30F8A71000, but subtraction gives 210322313216, which testb
complains about: "Illegal block number ... for /dev/sc5". I also tried dividing that position by 512 (=0x200) or even by 4096 (=0x1000) (the latter not making sense because the other positions are not multiples of 1000, only 200). I guess I'm messing up the units somehow.
– David Sevilla
Apr 26 '17 at 20:51
The easiest way, although not necessarily the fastest or most efficient way, would be to:
- Run ddrescue normally to rescue the whole drive, and be sure to preserve the mapfile.
- Re-run ddrescue
in fill-mode to mark bad sectors with a unique
pattern. They recommend something like this:
In order to alleviate false positives you want to use a pattern that would not normally exist in any file.
ddrescue --fill-mode=- <(printf "BAD-SECTOR ") outfile mapfile
- Mount the rescued image/disk with its native operating system.
- Use an appropriate operating system utility, like
e2fsck
on Linux, to verify and possibly repair the filesystem directory structure. Any bad sectors that fall in filesystem structures first need to be resolved before you can go looking for all the file corruption.
Repairing directory structures is an art in and of itself, which is
outside the scope of this answer.
- Use an appropriate utility provided by the operating system, like
grep
, to scan all the files on the filesystem and list those which
contain the unique pattern that fill-mode marked them with.
- If necessary, you can examine the files with the appropriate editor
to locate the position of the actual data loss by searching for the
unique pattern within the file(s).
This is operating system independent, so I'm intentionally not giving details that vary depending on the specific filesystem type. I first had to do this on an NTFS filesystem using Windows utilities, but it's the same idea on ext3/4, etc.
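The scan step above can be as simple as a recursive fixed-string grep for the marker. A minimal sketch; the device, mount point, and output file names are placeholders, and the marker must match whatever was passed to `ddrescue --fill-mode`:

```shell
# List every file under a directory tree that contains the fill-mode marker.
# $1: marker string (as given to ddrescue --fill-mode), $2: directory to scan.
find_marked_files() {
    grep -rlF "$1" "$2"
}

# Usage sketch (device and mount point are placeholders):
#   sudo mount -o ro /dev/sdX5 /mnt/rescued
#   find_marked_files 'BAD-SECTOR ' /mnt/rescued > corrupted-files.txt
```

Mounting read-only before the scan avoids changing anything on the rescued copy while you are still identifying damaged files.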
answered 2 days ago, edited 2 days ago – tlum (new contributor)
From what I read here and there, I understand a bit the "Block bitmap differences" stuff, but I fail to understand if I could use it for my problem of finding the corrupted files.
– David Sevilla
Apr 26 '17 at 15:59
You'll need the block numbers of all encountered bad blocks (ddrescue
should have given you a list, I hope you saved it), and then you'll need to find out which files make use of these blocks (see e.g. here). e2fsck
doesn't help, the bad blocks will now just be empty. – dirkt
Apr 26 '17 at 16:08
If you mean the mapfile it produces, I do. Do you want to put your comment as an answer so I can accept it?
– David Sevilla
Apr 26 '17 at 16:31
See this Q and the usage of ddrutility
that does pretty much what you want: askubuntu.com/q/904569/271 – Andrea Lazzarotto
Apr 26 '17 at 21:55