Trimming blank space from images
I am working on scanned documents (ID cards, driver's licenses, ...). The problem I face while preprocessing them is that the document occupies only a small area of the image; the rest is either white/blank space or noisy space. For that reason I wanted to write Python code that automatically trims the unwanted area and keeps only the zone where the document is located, without me predefining the resolution. That would be possible with findContours() from OpenCV. However, not all the documents (especially the old ones) have a clear contour, and not all the blank space is white, so that approach will not work.
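For reference, the findContours() route would look roughly like this. It is only a sketch and it assumes a clean scan with a visible document border, which my scans do not have; 'scan.jpg' is a hypothetical file name:

    import cv2

    img = cv2.imread('scan.jpg')  # hypothetical input file
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu threshold, inverted so the document becomes the foreground
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # findContours returns 2 values in OpenCV 4.x and 3 in 3.x; [-2] works for both
    contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    # Bounding box of the largest contour, assumed to be the document
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cropped = img[y:y + h, x:x + w]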
The idea that came to me is:
- Read the image and convert it to grayscale.
- Apply the bitwise_not() function from OpenCV to separate the background from the foreground.
- Apply adaptive mean thresholding to remove as much noise as possible (and eventually to whiten the background). At this point the background is almost white and the document is black, but with some white gaps.
- Apply erosion to fill those gaps.
- Read each row of the image: if more than 20% of it is black, keep it; if it is white, discard it. Do the same with each column.
- Crop the image according to the min and max indexes of the black rows and columns.
Here is my code with some comments:
import cv2
import numpy as np

def crop(filename):
    # Read the image
    img = cv2.imread(filename)
    # Convert to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Separate the background from the foreground
    bit = cv2.bitwise_not(gray)
    # Apply adaptive mean thresholding
    amtImage = cv2.adaptiveThreshold(bit, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 35, 15)
    # Apply erosion to fill the gaps
    kernel = np.ones((15, 15), np.uint8)
    erosion = cv2.erode(amtImage, kernel, iterations=2)
    # Take the height and width of the image
    (height, width) = img.shape[0:2]
    # Ignore the extremities of the document (they are sometimes black and distract the algorithm)
    image = erosion[50:height - 50, 50:width - 50]
    (nheight, nwidth) = image.shape[0:2]
    # Collect the indexes of rows containing more than 20% black pixels
    index = []
    for x in range(0, nheight):
        line = []
        for y in range(0, nwidth):
            if image[x, y] < 150:
                line.append(image[x, y])
        if len(line) / nwidth > 0.2:
            index.append(x)
    # Collect the indexes of columns containing more than 15% black pixels
    index2 = []
    for a in range(0, nwidth):
        line2 = []
        for b in range(0, nheight):
            if image[b, a] < 150:
                line2.append(image[b, a])
        if len(line2) / nheight > 0.15:
            index2.append(a)
    # Crop the original image according to the max and min of the black rows and columns
    img = img[min(index):max(index) + min(250, (height - max(index)) * 10 // 11),
              max(0, min(index2)):max(index2) + min(250, (width - max(index2)) * 10 // 11)]
    # Save the image
    cv2.imwrite('res_' + filename, img)
Here is an example. I used an image from the internet to avoid any confidentiality problem. Note that its quality is much better than the examples I work on (the white space contains no noise).
Input: 1920x1080
Output: 801x623
I tested this code with different documents, and it works well. The problem is that it takes a long time to process a single document, because of the loops and because each pixel is read twice: once by rows and once by columns. I am sure the code can be modified to reduce the processing time, but I am a beginner in Python and in code optimization.
Maybe using NumPy for the matrix calculations, or optimizing the loops, would improve the code quality.
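For instance, something like the following might replace the two nested loops. It is only a rough, untested sketch of the NumPy approach; it reuses the eroded, trimmed image and the thresholds from the code above, and omits the 50 px offset and the extra margin logic of the original crop:

    # Vectorized row/column scan on the eroded image (rough sketch, untested)
    mask = image < 150                    # True where a pixel counts as black
    row_black = mask.mean(axis=1)         # fraction of black pixels per row
    col_black = mask.mean(axis=0)         # fraction of black pixels per column
    rows = np.where(row_black > 0.2)[0]   # rows with more than 20% black
    cols = np.where(col_black > 0.15)[0]  # columns with more than 15% black
    cropped = img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]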
python image opencv
Comments:

@Graipher, I added an example. The image I used is from the internet to avoid any problem. – singrium, yesterday

I don't have time at this moment for a full review, but you don't have to loop over the image twice. You can start looping over it from each side (4 times) and stop at the first line that belongs to the ID. To identify such a line, you also don't need to append the pixel values to a new list; just increment a counter. – Maarten Fabré, 12 hours ago

@MaartenFabré, that seems so smart! Thank you for these hints, I'll work on them. – singrium, 12 hours ago
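A rough sketch of what that suggestion could look like (hypothetical and untested; it assumes the same eroded image and thresholds as in the question, with image a 2D NumPy array):

    def first_dark_row(image, threshold=150, fraction=0.2):
        # Scan rows from the top and stop at the first one with enough dark pixels
        nheight, nwidth = image.shape[0:2]
        for x in range(nheight):
            count = 0                      # counter instead of a list of pixel values
            for y in range(nwidth):
                if image[x, y] < threshold:
                    count += 1
            if count / nwidth > fraction:
                return x                   # stop early: no need to scan further rows
        return 0

    top = first_dark_row(image)
    bottom = image.shape[0] - 1 - first_dark_row(image[::-1])                   # scan from the bottom
    left = first_dark_row(image.T, fraction=0.15)                               # scan columns from the left
    right = image.shape[1] - 1 - first_dark_row(image.T[::-1], fraction=0.15)   # and from the right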