Database accidentally deleted with a bash script [duplicate]
This question already has an answer here:
Monday morning mistake: sudo rm -rf --no-preserve-root /
10 answers
Edit: a follow-up question: Recover a mongo database deleted by rm.
My developer made a huge mistake and we cannot find our Mongo database anywhere on the server.
He logged into the server and saved the following shell script as ~/crontab/mongod_back.sh:
#!/bin/sh
DUMP=mongodump
OUT_DIR=/data/backup/mongod/tmp // temporary backup directory
TAR_DIR=/data/backup/mongod // final backup directory
DATE=`date +%Y_%m_%d_%H_%M_%S` // backups are saved by backup time
DB_USER=Guitang // database user
DB_PASS=qq■■■■■■■■■■■■■■■■■■■■■ // database user password
DAYS=14 // keep the latest 14 days of backups
TARBAK="mongod_bak_$DATE.tar.gz" // backup file naming format
cd $OUT_DIR // go to the folder
rm -rf $OUT_DIR/* // empty the temporary directory
mkdir -p $OUT_DIR/$DATE // create the folder for this backup
$DUMP -d wecard -u $DB_USER -p $DB_PASS -o $OUT_DIR/$DATE // run the backup command
tar -zcvf $TAR_DIR/$TAR_BAK $OUT_DIR/$DATE // pack the backup into the final directory
find $TAR_DIR/ -mtime +$DAYS -delete // delete backups older than 14 days
He then ran it, and it printed permission denied messages, so he pressed Ctrl+C. The server shut down automatically. He tried to restart it but got a GRUB error.
He contacted AliCloud, and their engineer attached the disk to another working server so that he could inspect it. It looks like some folders are gone, including /data/, where the MongoDB data lives!
- We don't understand how the script could destroy the disk, including /data/.
- And of course: is it possible to get /data/ back?
PS: He did not take a snapshot of the disk beforehand.
PS2: Since people keep mentioning "backups": we have a lot of important users and data coming in over these two days, and the purpose of this script was to back them up (for the first time), but instead they ended up being deleted entirely.
filesystems shell ubuntu-14.04 data-recovery disaster-recovery
marked as duplicate by Jenny D, womble♦ 4 hours ago
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
Your script has no error checking. If the line cd $OUT_DIR fails, it's going to delete everything in the current path, which may well be /. This is why you have backups - use them.
– Jenny D 17 hours ago
He ran the script under ~/crontab/, so how could rm or find -delete delete folders under /?
– SoftTimur 17 hours ago
Make a raw backup of the full hard disk before you do anything; this will improve your (low) chances of data recovery.
– Ferrybig 13 hours ago
Wow - did this script get into your version control system? Did it go through peer review? rm -rf $OUT_DIR/*, really? And why was the script not tested on a non-production server? Once you have restored from backup you have many critical procedural failings to address here before automating anything else. I hope you're not too hard on your developer over it, as a result (though they also have quite a bit to answer for).
– Lightness Races in Orbit 10 hours ago
Re: this was to be your backup script: never test a new procedure against the only copy of your data. Prior to your very first backup, create a separate test database, put in some fake data, and test backing up and restoring that.
– John Mahowald 6 hours ago
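Following up on the comments above, here is a minimal sketch of what the script could have looked like with real shell comments (#) and basic error checking. It is an illustration only, not the original script: the credentials are placeholders, and the reordering and safety checks are additions.

#!/bin/sh
# Abort on any failed command or use of an unset variable.
set -eu

DUMP=mongodump
OUT_DIR=/data/backup/mongod/tmp     # temporary backup directory
TAR_DIR=/data/backup/mongod         # final backup directory
DATE=$(date +%Y_%m_%d_%H_%M_%S)     # backups are named by backup time
DB_USER=Guitang                     # database user
DB_PASS='********'                  # database user password (placeholder)
DAYS=14                             # keep the latest 14 days of backups
TAR_BAK="mongod_bak_$DATE.tar.gz"   # backup file name

mkdir -p "$OUT_DIR/$DATE" "$TAR_DIR"
cd "$OUT_DIR" || exit 1             # never fall through to rm in the wrong directory
rm -rf "${OUT_DIR:?}"/*             # ${VAR:?} aborts if OUT_DIR is ever empty or unset

"$DUMP" -d wecard -u "$DB_USER" -p "$DB_PASS" -o "$OUT_DIR/$DATE"
tar -zcvf "$TAR_DIR/$TAR_BAK" "$OUT_DIR/$DATE"
find "$TAR_DIR/" -mtime +"$DAYS" -delete

# Sanity check: make sure the archive is readable, so a broken backup is noticed immediately.
tar -tzf "$TAR_DIR/$TAR_BAK" > /dev/null

And, as John Mahowald's comment says, rehearse the whole dump-and-restore cycle against a throwaway test database before ever pointing a script like this at production data.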
3 Answers
Easy enough. The // sequence isn't a comment in bash (# is).
The statement OUT_DIR=x // text had no effect*.
Thus what was finally executed was rm -rf /*. The directories that the user couldn't remove gave permission errors, but some directories placed directly underneath / apparently could be removed. You need to restore from backup.
* The peculiar bash form A=b c d e f is roughly similar to:
export A=b
c d e f
unset A
Hence the script effectively did this:
export OUT_DIR=/data/backup/mongod/tmp
// some text    # gives an error, as `//` isn't an executable file!
unset OUT_DIR
– kubanczyk, answered 16 hours ago
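A harmless way to see this mechanism in isolation (an illustration, not part of anyone's script; the echo ensures nothing is deleted):

#!/bin/sh
OUT_DIR=/data/backup/mongod/tmp // this is NOT a comment
# The line above runs "//" as a command. It fails (typically "Permission denied"
# or "Is a directory"), and OUT_DIR was set only in that failed command's
# environment, never in the shell itself.
echo "OUT_DIR is: '${OUT_DIR:-}'"   # prints an empty value
echo rm -rf $OUT_DIR/*              # $OUT_DIR/* expands to /*, so echo shows every top-level path rm would have received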
1) He erroneously assumed that // was a bash comment. It is not; only # is.
The shell interpreted // text as a normal command, did not find a binary called //, and did nothing.
In bash, a variable assignment (OUT_DIR=/data/backup/mongod/tmp) directly preceding a command (// text) only sets the variable for the duration of that command. OUT_DIR is therefore unset again immediately, and when the rm line is reached, OUT_DIR is empty, so rm -rf /* is effectively run, deleting everything you have permission to delete.
2) The solution is the same as in all rm -rf / cases: restore from backup. There is no other solution, because you do not have physical access to the hard drive.
– Ray Wu, answered 9 hours ago
Why would having physical access to the hard drive help with restoring?
– SoftTimur 9 hours ago
Possible forensics, professional hard drive recovery methods. I know this because I know that rm -rf is not particularly secure and doesn't overwrite the hard drive.
– Ray Wu 9 hours ago
@SoftTimur rm usually just "unlinks" files, but the data is still physically there until it is overwritten. This is why professionals can sometimes "undelete" if they have physical access and you haven't done much with the disk after the catastrophe occurred. If you don't have backups, that's the best you can hope for.
– Lightness Races in Orbit 9 hours ago
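If recovery is attempted, the first step (as Ferrybig's comment on the question also advises) is to image the disk before touching it further and to work only on the image. A minimal sketch, assuming the detached data disk appears as /dev/vdb on the rescue server; the device name and output path are placeholders, not details from the question:

# Raw copy of the whole disk; keep reading past bad sectors.
sudo dd if=/dev/vdb of=/mnt/rescue/disk.img bs=4M conv=noerror,sync status=progress
# (GNU ddrescue is an alternative that keeps a map of bad areas:
#  sudo ddrescue /dev/vdb /mnt/rescue/disk.img /mnt/rescue/disk.map)

# Run recovery tools against the image, never the original disk.
sudo testdisk /mnt/rescue/disk.img    # interactive partition/file recovery
sudo photorec /mnt/rescue/disk.img    # carves files by signature (names/paths are lost)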
1) Bash comments start with #. Sorry for your loss.
2) Restore from backup is the only way to proceed here, unfortunately.
– RMPJ, answered 9 hours ago