Tar to tape: block number and blocking factor
The 64 KiB block size is meant to maximize throughput and avoid "shoe-shining".
mt -f /dev/nst0 setblk 64k
tar -c -v -R -b128 -f /dev/nst0 test_dir
returns:
block 0 : test_dir/
block 1 : test_dir/file_1.bin
block 204802 : test_dir/file_2.bin
block 2252803 : test_dir/file_3.bin
block 4300804 : test_dir/file_4.bin
...
But the block numbers in the tar output are given in 512 B units, even though the blocking factor sets a 64 KiB record size (128 * 512 B). This is the case whatever block size is set with mt (variable or 64 KiB).
The goal is random access within the tar archive on tape; converting the 512 B block numbers to 64 KiB tape blocks means dividing by 128 and truncating.
Is there a way to make tar's record size and the block numbering used by mt match?
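For concreteness, the kind of access I am after would look something like this (only a sketch: the block numbers come from the listing above, and it assumes the archive starts at block 0 of the tape and that mt's seek addresses the fixed 64 KiB blocks as written):

# tar -R reports offsets in 512 B units; the tape holds fixed 64 KiB
# blocks, i.e. 128 such units per physical block.
TAR_BLOCK=204802                    # tar's block number for test_dir/file_2.bin
TAPE_BLOCK=$(( TAR_BLOCK / 128 ))   # physical 64 KiB block containing it (1600)
SKIP=$(( TAR_BLOCK % 128 ))         # leftover 512 B units inside that block (2)

mt -f /dev/nst0 seek "$TAPE_BLOCK"  # absolute block address on the tape

# Read whole 64 KiB blocks from the drive, drop the leftover 512 B units,
# then let tar pick up the member header at that point.
dd if=/dev/nst0 bs=64k | dd bs=512 skip="$SKIP" iflag=fullblock | tar -x --occurrence=1 -f - test_dir/file_2.bin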
linux tar tape
From the st man page (more detailed than mt's): "Many programs (e.g., tar(1)) allow the user to specify the blocking factor on the command line. Note that this determines the physical block size on tape only in variable-block mode."
– idrevettenome Sep 25 '15 at 13:56
But as I mentioned above, I tried with mt -f /dev/nst0 setblk 0. Seen on a forum: "tar/dd/whatever blocksize != (SCSI) tape device driver block" (linuxmisc.com/14-unix-administering/b290ded6513059e2.htm)
– idrevettenome Sep 25 '15 at 14:05
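A quick way to see which mode the drive is actually in (a small sketch, assuming the mt-st utilities and the same device node as above):

# "Tape block size 0 bytes" in the status output means variable-block mode;
# a non-zero value means fixed-block mode.
mt -f /dev/nst0 status

# In variable-block mode each record written by tar (e.g. 64 KiB with -b128)
# becomes one physical block on tape.
mt -f /dev/nst0 setblk 0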
2 Answers
UPDATE: Argh! So much for tests. I just did a full backup and tar with its default block/record size ran about 25% slower, so I'm back to using -b2048. I will try -b1024 and see how it goes.
Okay, I just ran some tests. My advice is to set the tape block size to 0 (variable) and choose a tar record size of your choice. I tested with a tape block size of 1 MiB and 512 KiB and matching tar record sizes (-b2048 = 1 MiB, -b1024 = 512 KiB), then set the tape block size to 0 and tested with tar -b2048 and -b1024, and there was no difference. Then I ran a test with 'setblk 0' again but with tar's default blocking (no -bxxxx, i.e. a record length of 10240 bytes) and still saw no difference in performance.
I'm using a Quantum LTO-5. As long as you stay substantially above 512 bytes (the LTO-5 default, I believe) you should be okay, and it's unlikely you will see any shoe-shining. In my opinion, the only reason to set a fixed drive block size (instead of variable) is when the drive is ignoring the software block size (the record size, in tar's case).
Note: tar's default block size is 512 bytes x 20, for a total "record size" of 10240 bytes. By the way, my tests all finished within 12 seconds, which works out to about 141,000,000 bytes/second, the LTO-5 maximum throughput.
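A sketch of the kind of test run described above (not the exact commands used; GNU tar and mt-st are assumed, and /backup/source is a placeholder path):

# Variable-block mode: each tar record becomes one physical tape block.
mt -f /dev/nst0 setblk 0

# 1 MiB records (2048 * 512 B); time the write for a throughput comparison.
mt -f /dev/nst0 rewind
time tar -c -b2048 -f /dev/nst0 /backup/source

# 512 KiB records.
mt -f /dev/nst0 rewind
time tar -c -b1024 -f /dev/nst0 /backup/source

# Default blocking (20 * 512 B = 10240 B records).
mt -f /dev/nst0 rewind
time tar -c -f /dev/nst0 /backup/source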
answered Jan 30 '16 at 4:35 by rayzinpwr, edited Feb 8 '16 at 17:33 by Dan Pritts
My results with --blocking-factor, using tar to tape on multiple 200 to 300 GB tar archives:
--blocking-factor 256: throughput 62 to 68 MiB/s.
--blocking-factor 1024: throughput 87 MiB/s.
I intend to experiment further with even larger blocking factors.
The above was obtained with the default hardware block size (variable, I think) and no compression.
My equipment is listed below:
HP EH958B StorageWorks Ultrium 3000 LTO-5 1.5/3TB SAS (Serial Attached SCSI) Half-Height External Tape Drive LTO5
TDK LTO-5 Ultrium Data Cartridge 1.5 TB / 3.0 TB LTO Ultrium-5 Tape
ATTO Technology ExpressSAS H680 PCIe 2.0 Low Profile 6Gb/s SAS HBA Card (External Ports) P/N: ESAS-H680-000
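A timed run of the sort summarized above might look like this (a sketch; /data/archive_source is a placeholder, and throughput is simply the archive size divided by the elapsed time):

# --blocking-factor 1024 means 1024 * 512 B = 512 KiB per write to the drive.
mt -f /dev/nst0 rewind
time tar -c --blocking-factor=1024 -f /dev/nst0 /data/archive_source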
answered Feb 23 '16 at 14:56 by Abner, edited Feb 23 '16 at 14:59 by karel