What's the best way to handle refactoring a big file?
I'm currently working on a bigger project which unfortunately has some files where software quality guidelines were not always followed. This includes big files (read: 2000-4000 lines) which clearly contain multiple distinct functionalities.
Now I want to refactor these big files into multiple small ones. The issue is that, since they are so big, multiple people (me included) on different branches are working on these files. So I can't really branch from develop and refactor, since merging these refactorings with other people's changes will become difficult.
We could of course require everyone to merge back to develop, "freeze" the files (i.e. not allow anyone to edit them anymore), refactor, and then "unfreeze". But this is not really good either, since it would require everyone to basically stop their work on these files until the refactoring is done.
So is there a way to refactor that doesn't require everyone else to stop working (for too long) or merge their feature branches back to develop?
git refactoring code-quality
stackoverflow.com/questions/1897585/…
– Robert Andrzejuk
11 hours ago
I think this also depends on the programming language used.
– Robert Andrzejuk
11 hours ago
I like "small incremental" checkins. Unless someone isn't keeping their copy of the repo fresh, this practice will minimize merge conflicts for everyone.
– Matt Raffel
7 hours ago
If your refactoring can be automated, at least partially, then it is possible to repeat it on the other branches, which is still annoying but at least not impossible to merge.
– Simon Richter
6 hours ago
I've found methods that were longer than those files....
– computercarguy
4 hours ago
asked 12 hours ago by Hoff
6 Answers
You have correctly understood that this is not so much a technical as a social problem: if you want to avoid excessive merge conflicts, the team needs to collaborate in a way that avoids these conflicts.
This is part of a general problem with Git, in that branching is very easy but merging can still take a lot of effort. Development teams tend to launch a lot of branches and are then surprised that merging them is difficult, possibly because they are trying to emulate the Git Flow without understanding its context.
The general rule for fast and easy merges is to prevent big differences from accumulating; in particular, feature branches should be very short-lived (hours or days, not months). A development team that is able to rapidly integrate their changes will see fewer merge conflicts. If some code isn't yet production ready, it might be possible to integrate it but deactivate it through a feature flag. As soon as the code has been integrated into your master branch, it becomes accessible to the kind of refactoring you are trying to do.
That might be too much for your immediate problem. But it may be feasible to ask colleagues to merge their changes that impact this file by the end of the week so that you can perform the refactoring. If they wait longer, they'll have to deal with the merge conflicts themselves. That's not impossible, it's just avoidable work.
You may also want to avoid having to change large swaths of dependent code by making API-compatible changes. For example, if you want to extract some functionality into a separate module:
- Extract the functionality into a separate module.
- Change the old functions to forward their calls to the new API.
- Over time, port dependent code to the new API.
- Finally, you can delete the old functions.
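The forwarding step can be sketched like this; the functions, module names, and the config-parsing example itself are all hypothetical, chosen only to show the shape of the transition:

```python
import warnings

# Conceptually this lives in the new, extracted module after step 1.
def parse_config(text):
    """New home of the extracted logic (hypothetical example)."""
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

# This stays behind in the old big file during steps 2-3:
# same signature as before, but it just forwards to the new API.
def parse_config_old(text):
    """Deprecated wrapper: forwards callers to the new function."""
    warnings.warn("use parse_config from the new module", DeprecationWarning)
    return parse_config(text)
```

Existing callers keep working unchanged, so other branches touching call sites merge cleanly; only the extracted body itself can conflict.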
This multi-step process can avoid many merge conflicts. In particular, there will only be conflicts if someone else is also changing the functionality you extracted. The cost of this approach is that it's much slower than changing everything at once, and that you temporarily have two duplicate APIs. This isn't so bad unless something urgent interrupts the refactoring, the duplication is forgotten or deprioritized, and you end up with a bunch of tech debt.
But in the end, any solution will require you to coordinate with your team.
So, if I got it right, the advice is smaller developments, frequent commits (pushes) and daily (so to speak) merges. Right? In other words, to change the SDLC cadence?
– Laiv
11 hours ago
@Laiv Unfortunately that is all extremely general advice, but some ideas out of the agile-ish space like Continuous Integration clearly have their merits. Teams that work together (and integrate their work frequently) will have an easier time making large cross-cutting changes than teams that only work alongside each other. This isn't necessarily about the SDLC at large, more about the collaboration within the team. Some approaches make working alongside more feasible (think Open/Closed Principle, microservices) but OP's team isn't there yet.
– amon
11 hours ago
If it's about collaboration, would it be possible to get the team involved in the refactor? Seems to me that right now it's only one dev's job. Multiple files of 4K+ LOC sounds like too much refactoring for a single person to do. Or too much responsibility.
– Laiv
10 hours ago
I wouldn't go so far as to say a feature branch needs to have a short lifetime -- merely that it should not diverge from its parent branch for long periods of time. Regularly merging changes from the parent branch into the feature branch works in those cases where the feature branch needs to stick around longer. Still, it's a good idea to keep feature branches around no longer than necessary.
– Dan Lyons
10 hours ago
@Laiv In my experience, it makes sense to discuss a post-refactoring design with the team beforehand, but it's usually easiest if a single person makes the changes to the code. Otherwise, you're back to the problem that you have to merge stuff. The 4k lines sounds like a lot, but it's really not for targeted refactorings like extract-class. (I'd shill Martin Fowler's Refactoring book so hard here if I had read it.) But 4k lines is a lot only for untargeted refactorings like “let's see how I can improve this”.
– amon
8 hours ago
Do the refactoring in smaller steps. Let's say your large file has the name `Foo`:

1. Add a new empty file `Bar` and commit it to "trunk".
2. Find a small portion of the code in `Foo` which can be moved over to `Bar`. Apply the move, update from trunk, build and test the code, and commit to "trunk".
3. Repeat step 2 until `Foo` and `Bar` have equal size (or whatever size you prefer).
That way, next time your team mates update their branches from trunk, they get your changes in small portions and can merge them one-by-one, which is a lot easier than having to merge a full split in one step. The same holds when in step 2 you get a merge conflict because someone else updated trunk in between.
This won't eliminate merge conflicts, but it restricts each conflict to a small area of code, which is way more manageable.
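In git terms, one such incremental step might look like the following throwaway demonstration; the file names, file contents, and commit messages are all made up, and the script builds a scratch repository just to show the commit sequence:

```shell
# Demonstrate one incremental split step in a disposable repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

# Baseline: the big file with two distinct parts.
printf 'part_a\npart_b\n' > Foo
git add Foo && git commit -qm "baseline: big file Foo"

# Step 1: add the new empty file and commit it on its own.
: > Bar
git add Bar && git commit -qm "refactor: add empty Bar"

# Step 2: move one small portion from Foo to Bar, then commit.
# (In real life: build and test before committing.)
grep -v part_b Foo > Foo.tmp && mv Foo.tmp Foo
echo part_b >> Bar
git add Foo Bar && git commit -qm "refactor: move part_b from Foo to Bar"

git log --oneline
```

Each such commit is a small, self-contained diff, which is exactly what keeps the later merges manageable.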
And of course, communicate the refactoring within the team. Inform your mates what you are doing, so they know why they have to expect merge conflicts for this particular file.
This is especially useful with git's `rerere` option enabled.
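For reference, rerere ("reuse recorded resolution") is enabled per repository with a one-time config change:

```shell
git config rerere.enabled true
```

After that, git records how each conflict was resolved and replays the same resolution whenever the identical conflict reappears, which is common when the same split is merged into several branches.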
– D. Ben Knoble
14 mins ago
You are thinking of splitting the file as an atomic operation, but there are intermediate changes you can make. The file gradually became huge over time, it can gradually become small over time.
Pick a part that hasn't had to change in a long time (`git blame` can help with this), and split that off first. Get that change merged into everyone's branches, then pick the next easiest part to split. Maybe even splitting one part is too big a step and you should just do some rearranging within the large file first.
If people aren't frequently merging back to develop, you should encourage that, then after they merge, take that opportunity to split off the parts they just changed. Or ask them to do the splitting off as part of the pull request review.
The idea is to slowly move toward your goal. It will feel like progress is slow, but then suddenly you'll realize your code is a lot better. It takes a long time to turn an ocean liner.
Fixing this problem requires buy-in from the other teams because you're trying to change a shared resource (the code itself). That being said, I think there's a way to "migrate away" from having huge monolithic files without disrupting people.
I would also recommend not targeting all the huge files at once unless the number of huge files is growing uncontrollably in addition to the sizes of individual files.
Refactoring large files like this frequently causes unexpected problems. The first step is to stop the big files from accumulating additional functionality beyond what's currently in master or in development branches.
I think the best way to do this is with commit hooks that block certain additions to the large files by default, but can be overruled with a magical comment in the commit message, like `@bigfileok` or something. It's important to be able to overrule the policy in a way that's painless but trackable. Ideally, you should be able to run the commit hook locally, and it should tell you how to override this particular error in the error message itself.
The commit hook could check for new classes or do other static analysis (ad hoc or not). You can also just pick a line or character count that's 10% larger than the file currently is and say that the large file can't grow beyond the new limit. You can also reject individual commits that grow the file by too many lines or too many characters, or whatever.
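A minimal sketch of the size-check logic such a hook could use; the file path, limit, and `@bigfileok` token are all assumptions, and note that an override read from the commit message would have to live in a commit-msg hook, since pre-commit hooks run before the message exists:

```python
# Hypothetical per-file line limits: roughly 10% above current size.
BIG_FILE_LIMITS = {"legacy/Foo.java": 4400}
OVERRIDE_TOKEN = "@bigfileok"

def check_big_files(staged_line_counts, commit_message):
    """Return a list of violation messages; empty means the commit may proceed.

    staged_line_counts maps file path -> line count after the commit.
    """
    if OVERRIDE_TOKEN in commit_message:
        return []  # explicit, trackable override
    violations = []
    for path, limit in BIG_FILE_LIMITS.items():
        count = staged_line_counts.get(path)
        if count is not None and count > limit:
            violations.append(
                f"{path} has {count} lines (limit {limit}); "
                f"add '{OVERRIDE_TOKEN}' to the commit message to override"
            )
    return violations
```

The error message tells the committer exactly how to override, which keeps the policy painless while leaving an audit trail in the history.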
Once the large file stops accumulating new functionality, you can refactor things out of it one at a time (and reduce the thresholds enforced by the commit hooks at the same time to prevent it from growing again).
Eventually, the large files will be small enough that the commit hooks can be completely removed.
Wait until hometime. Split the file, commit and merge to master.
Other people will have to pull the changes into their feature branches in the morning like any other change.
Still would mean they would have to merge my refactorings with their changes though...
– Hoff
11 hours ago
somewhat related: suggestion about uncluttering file structure
– Nick Alexeev
11 hours ago
They are going to have to merge that big file with one another anyway. Merging with your split version might actually reduce the total pain.
– Ewan
11 hours ago
Well, they actually have to deal with merges anyways if they all are changing these files.
– Laiv
11 hours ago
This has the problem of "Surprise, I broke all your stuff." The OP needs to get buy-in and approval before doing this, and doing it at a scheduled time that no one else has the file "in progress" would help.
– computercarguy
4 hours ago
I'm going to suggest a different than normal solution to this problem.
Use this as a team code event. Have everyone check-in their code who can, then help others who are still working with the file. Once everyone relevant has their code checked in, find a conference room with a projector and work together to start moving things around and into new files.
You may want to set a specific amount of time for this, so that it doesn't end up being a week's worth of arguments with no end in sight. Instead, this might even be a weekly 1-2 hour event until you all get things looking how they need to be. Maybe you only need 1-2 hours total to refactor the file; you likely won't know until you try.
This has the benefit of everyone being on the same page (no pun intended) with the refactoring, but it can also help you avoid mistakes as well as get input from others about possible method groupings to maintain, if necessary.
Doing it this way can be considered to have a built-in code review, if you do that sort of thing. This allows the appropriate number of devs to sign off on your code as soon as you get it checked in and ready for their review. You might still want them to check the code for anything you missed, but it goes a long way toward making the review process shorter.
This may not work in all situations, teams, or companies, as the work isn't distributed in a way that makes this happen easily. It can also be (incorrectly) construed as a misuse of dev time. This group event needs buy-in from the manager, as does the refactor itself.
To help sell this idea to your manager, mention the code review bit as well as everyone knowing where things are from the beginning. Saving devs from losing time searching through a host of new files is worthwhile. Also, preventing devs from getting POed about where things ended up or went "completely missing" is usually a good thing. (The fewer the meltdowns the better, IMO.)
Once you get one file refactored this way, you may be able to more easily get approval for more refactors, if it was successful and useful.
However you decide to do your refactor, good luck!
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "131"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: false,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Hoff is a new contributor. Be nice, and check out our Code of Conduct.
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsoftwareengineering.stackexchange.com%2fquestions%2f389380%2fwhats-the-best-way-to-handle-refactoring-a-big-file%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
StackExchange.ready(function () {
$("#show-editor-button input, #show-editor-button button").click(function () {
var showEditor = function() {
$("#show-editor-button").hide();
$("#post-form").removeClass("dno");
StackExchange.editor.finallyInit();
};
var useFancy = $(this).data('confirm-use-fancy');
if(useFancy == 'True') {
var popupTitle = $(this).data('confirm-fancy-title');
var popupBody = $(this).data('confirm-fancy-body');
var popupAccept = $(this).data('confirm-fancy-accept-button');
$(this).loadPopup({
url: '/post/self-answer-popup',
loaded: function(popup) {
var pTitle = $(popup).find('h2');
var pBody = $(popup).find('.popup-body');
var pSubmit = $(popup).find('.popup-submit');
pTitle.text(popupTitle);
pBody.html(popupBody);
pSubmit.val(popupAccept).click(showEditor);
}
})
} else{
var confirmText = $(this).data('confirm-text');
if (confirmText ? confirm(confirmText) : true) {
showEditor();
}
}
});
});
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
You have correctly understood that this is not so much a technical as a social problem: if you want to avoid excessive merge conflicts, the team needs to collaborate in a way that avoids these conflicts.
This is part of a general problem with Git, in that branching is very easy but merging can still take a lot of effort. Development teams tend to launch a lot of branches and are then surprised that merging them is difficult, possibly because they are trying to emulate the Git Flow without understanding its context.
The general rule to fast and easy merges is to prevent big differences from accumulating, in particular that feature branches should be very short lived (hours or days, not months). A development team that is able to rapidly integrate their changes will see fewer merge conflicts. If some code isn't yet production ready, it might be possible to integrate it but deactivate it through a feature flag. As soon as the code has been integrated into your master branch, it becomes accessible to the kind of refactoring you are trying to do.
That might be too much for your immediate problem. But it may be feasible to ask colleagues to merge their changes that impact this file until the end of the week so that you can perform the refactoring. If they wait longer, they'll have to deal with the merge conflicts themselves. That's not impossible, it's just avoidable work.
You may also want to prevent large swaths of dependent code by making API-compatible changes. For example, if you want to extract some functionality into a separate module:
- Extract the functionality into a separate module.
- Change the old functions to forward their calls to the new API.
- Over time, port dependent code to the new API.
- Finally, you can delete the old functions.
This multi-step process can avoid many merge conflicts. In particular, there will only be conflicts if someone else is also changing the functionality you extracted. The cost of this approach is that it's much slower than changing everything at once, and that you temporarily have two duplicate APIs. This isn't so bad until something urgent interrupts this refactoring, the duplication is forgotten or deprioritized, and you end up with a bunch of tech debt.
But in the end, any solution will require you to coordinate with your team.
So, if I got It right, the advice is smaller developments, frequent commits (push) and daily (to say something) merges. Right? In other words, to change the SDLC cadence
– Laiv
11 hours ago
@Laiv Unfortunately that is all extremely general advice, but some ideas out of the agile-ish space like Continuous Integration clearly have their merits. Teams that work together (and integrate their work frequently) will have an easier time making large cross-cutting changes than teams that only work alongside each other. This isn't necessarily about the SDLC at large, more about the collaboration within the team. Some approaches make working alongside more feasible (think Open/Closed Principle, microservices) but OP's team isn't there yet.
– amon
11 hours ago
If it's about collaboration. Would it be possible to get involved the team in the refactor? Seems to me that right now it's only one-dev's job. Multiple files of +4K LOC sounds like too much refactor for a single person to do. Or too much responsibility
– Laiv
10 hours ago
6
I wouldn't go so far as to say a feature branch needs to have a short lifetime -- merely that it should not diverge from its parent branch for long periods of time. Regularly merging changes from the parent branch into the feature branch works in those cases where the feature branch needs to stick around longer. Still, it's a good idea to keep feature branches around no longer than necessary.
– Dan Lyons
10 hours ago
@Laiv In my experience, it makes sense to discuss a post-refactoring design with the team beforehand, but it's usually easiest if a single person makes the changes to the code. Otherwise, you're back to the problem that you have to merge stuff. The 4k lines sounds like a lot, but it's really not for targeted refactorings like extract-class. (I'd shill Martin Fowler's Refactoring book so hard here if I had read it.) But 4k lines is a lot only for untargeted refactorings like “let's see how I can improve this”.
– amon
8 hours ago
add a comment |
You have correctly understood that this is not so much a technical as a social problem: if you want to avoid excessive merge conflicts, the team needs to collaborate in a way that avoids these conflicts.
This is part of a general problem with Git, in that branching is very easy but merging can still take a lot of effort. Development teams tend to launch a lot of branches and are then surprised that merging them is difficult, possibly because they are trying to emulate the Git Flow without understanding its context.
The general rule to fast and easy merges is to prevent big differences from accumulating, in particular that feature branches should be very short lived (hours or days, not months). A development team that is able to rapidly integrate their changes will see fewer merge conflicts. If some code isn't yet production ready, it might be possible to integrate it but deactivate it through a feature flag. As soon as the code has been integrated into your master branch, it becomes accessible to the kind of refactoring you are trying to do.
That might be too much for your immediate problem. But it may be feasible to ask colleagues to merge their changes that impact this file until the end of the week so that you can perform the refactoring. If they wait longer, they'll have to deal with the merge conflicts themselves. That's not impossible, it's just avoidable work.
You may also want to prevent large swaths of dependent code by making API-compatible changes. For example, if you want to extract some functionality into a separate module:
- Extract the functionality into a separate module.
- Change the old functions to forward their calls to the new API.
- Over time, port dependent code to the new API.
- Finally, you can delete the old functions.
This multi-step process can avoid many merge conflicts. In particular, there will only be conflicts if someone else is also changing the functionality you extracted. The cost of this approach is that it's much slower than changing everything at once, and that you temporarily have two duplicate APIs. This isn't so bad until something urgent interrupts this refactoring, the duplication is forgotten or deprioritized, and you end up with a bunch of tech debt.
But in the end, any solution will require you to coordinate with your team.
So, if I got It right, the advice is smaller developments, frequent commits (push) and daily (to say something) merges. Right? In other words, to change the SDLC cadence
– Laiv
11 hours ago
@Laiv Unfortunately that is all extremely general advice, but some ideas out of the agile-ish space like Continuous Integration clearly have their merits. Teams that work together (and integrate their work frequently) will have an easier time making large cross-cutting changes than teams that only work alongside each other. This isn't necessarily about the SDLC at large, more about the collaboration within the team. Some approaches make working alongside more feasible (think Open/Closed Principle, microservices) but OP's team isn't there yet.
– amon
11 hours ago
If it's about collaboration. Would it be possible to get involved the team in the refactor? Seems to me that right now it's only one-dev's job. Multiple files of +4K LOC sounds like too much refactor for a single person to do. Or too much responsibility
– Laiv
10 hours ago
6
I wouldn't go so far as to say a feature branch needs to have a short lifetime -- merely that it should not diverge from its parent branch for long periods of time. Regularly merging changes from the parent branch into the feature branch works in those cases where the feature branch needs to stick around longer. Still, it's a good idea to keep feature branches around no longer than necessary.
– Dan Lyons
10 hours ago
@Laiv In my experience, it makes sense to discuss a post-refactoring design with the team beforehand, but it's usually easiest if a single person makes the changes to the code. Otherwise, you're back to the problem that you have to merge stuff. The 4k lines sounds like a lot, but it's really not for targeted refactorings like extract-class. (I'd shill Martin Fowler's Refactoring book so hard here if I had read it.) But 4k lines is a lot only for untargeted refactorings like “let's see how I can improve this”.
– amon
8 hours ago
add a comment |
You have correctly understood that this is not so much a technical as a social problem: if you want to avoid excessive merge conflicts, the team needs to collaborate in a way that avoids these conflicts.
This is part of a general problem with Git, in that branching is very easy but merging can still take a lot of effort. Development teams tend to launch a lot of branches and are then surprised that merging them is difficult, possibly because they are trying to emulate the Git Flow without understanding its context.
The general rule to fast and easy merges is to prevent big differences from accumulating, in particular that feature branches should be very short lived (hours or days, not months). A development team that is able to rapidly integrate their changes will see fewer merge conflicts. If some code isn't yet production ready, it might be possible to integrate it but deactivate it through a feature flag. As soon as the code has been integrated into your master branch, it becomes accessible to the kind of refactoring you are trying to do.
That might be too much for your immediate problem. But it may be feasible to ask colleagues to merge their changes that impact this file until the end of the week so that you can perform the refactoring. If they wait longer, they'll have to deal with the merge conflicts themselves. That's not impossible, it's just avoidable work.
You may also want to prevent large swaths of dependent code by making API-compatible changes. For example, if you want to extract some functionality into a separate module:
- Extract the functionality into a separate module.
- Change the old functions to forward their calls to the new API.
- Over time, port dependent code to the new API.
- Finally, you can delete the old functions.
This multi-step process can avoid many merge conflicts. In particular, there will only be conflicts if someone else is also changing the functionality you extracted. The cost of this approach is that it's much slower than changing everything at once, and that you temporarily have two duplicate APIs. This isn't so bad until something urgent interrupts this refactoring, the duplication is forgotten or deprioritized, and you end up with a bunch of tech debt.
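A rough sketch of the forwarding step in Python (all names are invented; in a real codebase the two functions would live in separate files, with the new one in the extracted module):

```python
import warnings

# In reality these would be two files; the suffixes stand in for the modules.

def compute_tax_new(amount: float, rate: float) -> float:
    """The implementation, extracted into the new module."""
    return amount * rate

def compute_tax_old(amount, rate):
    """The original entry point in the big file, now only forwarding.
    Existing callers keep working; the warning nudges them to migrate."""
    warnings.warn("compute_tax moved to the new module",
                  DeprecationWarning, stacklevel=2)
    return compute_tax_new(amount, rate)
```

Because the old signature is untouched, other branches that still call the old function merge cleanly; only callers of the extracted functionality itself can conflict.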
But in the end, any solution will require you to coordinate with your team.
answered 11 hours ago – amon
So, if I got it right, the advice is smaller developments, frequent commits (pushes), and daily (say) merges. Right? In other words, to change the SDLC cadence
– Laiv
11 hours ago
@Laiv Unfortunately that is all extremely general advice, but some ideas out of the agile-ish space like Continuous Integration clearly have their merits. Teams that work together (and integrate their work frequently) will have an easier time making large cross-cutting changes than teams that only work alongside each other. This isn't necessarily about the SDLC at large, more about the collaboration within the team. Some approaches make working alongside more feasible (think Open/Closed Principle, microservices) but OP's team isn't there yet.
– amon
11 hours ago
If it's about collaboration, would it be possible to get the team involved in the refactor? It seems to me that right now it's a one-dev job. Multiple files of 4K+ LOC sounds like too much refactoring for a single person to do, or too much responsibility
– Laiv
10 hours ago
I wouldn't go so far as to say a feature branch needs to have a short lifetime -- merely that it should not diverge from its parent branch for long periods of time. Regularly merging changes from the parent branch into the feature branch works in those cases where the feature branch needs to stick around longer. Still, it's a good idea to keep feature branches around no longer than necessary.
– Dan Lyons
10 hours ago
@Laiv In my experience, it makes sense to discuss a post-refactoring design with the team beforehand, but it's usually easiest if a single person makes the changes to the code. Otherwise, you're back to the problem that you have to merge stuff. The 4k lines sounds like a lot, but it's really not for targeted refactorings like extract-class. (I'd shill Martin Fowler's Refactoring book so hard here if I had read it.) But 4k lines is a lot only for untargeted refactorings like “let's see how I can improve this”.
– amon
8 hours ago
Do the refactoring in smaller steps. Let's say your large file has the name Foo:
1. Add a new empty file Bar and commit it to "trunk".
2. Find a small portion of the code in Foo which can be moved over to Bar. Apply the move, update from trunk, build and test the code, and commit to "trunk".
3. Repeat step 2 until Foo and Bar have equal size (or whatever size you prefer).
That way, the next time your teammates update their branches from trunk, they get your changes in small portions and can merge them one by one, which is a lot easier than merging a full split in one step. The same holds when, in step 2, you get a merge conflict because someone else updated trunk in the meantime.
This won't eliminate merge conflicts, but it restricts each conflict to a small area of code, which is far more manageable.
And of course, communicate the refactoring to the team. Inform your teammates what you are doing, so they know why to expect merge conflicts in that particular file.
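In Python, for example, each move in step 2 can keep Foo's public surface intact by re-exporting the moved name, so existing imports from foo keep working. The snippet below fakes the two files with in-memory modules so that it is self-contained; all names are invented:

```python
import sys
import types

# Stand-ins for the two files, built in memory so the sketch is runnable.
bar = types.ModuleType("bar")          # the new, smaller module

def _parse_header(line: str) -> str:
    """One small piece, moved out of Foo in a single commit."""
    return line.split(":", 1)[0].strip()

bar.parse_header = _parse_header
sys.modules["bar"] = bar

foo = types.ModuleType("foo")          # the big file
foo.parse_header = bar.parse_header    # re-export: old import path survives
sys.modules["foo"] = foo

# Existing code that did `from foo import parse_header` still works:
from foo import parse_header
```

Because each commit moves only one piece and preserves the old import path, teammates merging from trunk see a handful of small, mechanical diffs instead of one sweeping rename.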
This is especially useful with git's rerere option enabled
– D. Ben Knoble
14 mins ago
answered 10 hours ago, edited 9 hours ago – Doc Brown
You are thinking of splitting the file as an atomic operation, but there are intermediate changes you can make. The file gradually became huge over time, it can gradually become small over time.
Pick a part that hasn't had to change in a long time (git blame can help with this), and split that off first. Get that change merged into everyone's branches, then pick the next easiest part to split. Maybe even splitting off one part is too big a step, and you should just do some rearranging within the large file first.
If people aren't frequently merging back to develop, you should encourage that, then after they merge, take that opportunity to split off the parts they just changed. Or ask them to do the splitting off as part of the pull request review.
The idea is to slowly move toward your goal. It will feel like progress is slow, but then suddenly you'll realize your code is a lot better. It takes a long time to turn an ocean liner.
answered 10 hours ago – Karl Bielefeldt
Fixing this problem requires buy-in from the other teams because you're trying to change a shared resource (the code itself). That being said, I think there's a way to "migrate away" from having huge monolithic files without disrupting people.
I would also recommend not targeting all the huge files at once unless the number of huge files is growing uncontrollably in addition to the sizes of individual files.
Refactoring large files like this frequently causes unexpected problems. The first step is to stop the big files from accumulating additional functionality beyond what's currently in master or in development branches.
I think the best way to do this is with commit hooks that block certain additions to the large files by default, but can be overruled with a magical comment in the commit message, like @bigfileok or something. It's important to be able to overrule the policy in a way that's painless but trackable. Ideally, you should be able to run the commit hook locally, and it should tell you how to override this particular error in the error message itself.
The commit hook could check for new classes or do other static analysis (ad hoc or not). You could also just pick a line or character count that's 10% larger than the file currently is and say that the large file can't grow beyond the new limit, or reject individual commits that grow the file by too many lines or characters.
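A minimal sketch of the hook's core logic in Python, assuming a hypothetical file path and a cap of roughly the current size plus 10%; only the @bigfileok token comes from the answer:

```python
BIG_FILE = "src/legacy/everything.py"   # hypothetical path to the big file
MAX_LINES = 4400                        # its current size plus ~10% headroom
OVERRIDE = "@bigfileok"                 # magic token in the commit message

def commit_allowed(staged_line_count: int, commit_message: str) -> bool:
    """Reject commits that grow the big file past the cap, unless the
    author explicitly (and trackably) overrules the policy."""
    if OVERRIDE in commit_message:
        return True
    return staged_line_count <= MAX_LINES

# A real pre-commit hook would count the staged lines of BIG_FILE and
# exit non-zero when commit_allowed(...) returns False, printing the
# override instructions in the error message.
```

Since the override lives in the commit message, every exception to the policy remains visible in the history.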
Once the large file stops accumulating new functionality, you can refactor things out of it one at a time (and reduce the thresholds enforced by the commit hooks at the same time, to prevent it from growing again).
Eventually, the large files will be small enough that the commit hooks can be completely removed.
answered 7 hours ago – Gregory Nisbet
Wait until hometime. Split the file, commit and merge to master.
Other people will have to pull the changes into their feature branches in the morning like any other change.
Still would mean they would have to merge my refactorings with their changes though...
– Hoff
11 hours ago
somewhat related: suggestion about uncluttering file structure
– Nick Alexeev
11 hours ago
they are going to have to merge that big file with one another anyway. merging with your split version might actually reduce the total pain
– Ewan
11 hours ago
Well, they actually have to deal with merges anyways if they all are changing these files.
– Laiv
11 hours ago
This has the problem of "Surprise, I broke all your stuff." The OP needs to get buy-in and approval before doing this, and doing it at a scheduled time that no one else has the file "in progress" would help.
– computercarguy
4 hours ago
answered 12 hours ago – Ewan
I'm going to suggest an unconventional solution to this problem.
Use this as a team code event. Have everyone who can check in their code do so, then help the others who are still working with the file. Once everyone relevant has their code checked in, find a conference room with a projector and work together to start moving things around into new files.
You may want to set a specific amount of time for this, so that it doesn't end up being a week's worth of arguments with no end in sight. It might even be a weekly 1-2 hour event until you all get things looking the way they need to. Or maybe you only need 1-2 hours in total to refactor the file; you likely won't know until you try.
This has the benefit of everyone being on the same page (no pun intended) with the refactoring, but it can also help you avoid mistakes as well as get input from others about possible method groupings to maintain, if necessary.
Doing it this way can be considered to have a built-in code review, if you do that sort of thing. It allows the appropriate number of devs to sign off on your code as soon as you get it checked in and ready for their review. You might still want them to check the code for anything you missed, but it goes a long way toward making the review process shorter.
This may not work in all situations, teams, or companies, as the work isn't always distributed in a way that makes it happen easily. It can also be (incorrectly) construed as a misuse of dev time. This group session needs buy-in from the manager, as does the refactor itself.
To help sell this idea to your manager, mention the code review bit as well as everyone knowing where things are from the beginning. Preventing devs from losing time searching through a host of new files is worthwhile. Also, preventing devs from getting POed about where things ended up or "completely went missing" is usually a good thing. (The fewer the meltdowns the better, IMO.)
Once you get one file refactored this way, you may be able to more easily get approval for more refactors, if it was successful and useful.
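As an aside, the mechanical part of such a session can be done so that git keeps history for both halves: the classic trick is to rename the file to each new name on two separate branches and then merge them. The sketch below is hedged: it runs in a throwaway demo repository, and the file, branch, and section names (`big_file.py`, `billing`, `reporting`) are hypothetical.

```shell
#!/bin/sh
set -e
# Throwaway demo repo; file and branch names are hypothetical.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

printf 'billing code\nreporting code\n' > big_file.py
git add big_file.py
git commit -qm "starting point: one big file"
git branch base                     # remember the pre-split commit

# Rename the big file to the first new name on one branch...
git checkout -qb split-billing
git mv big_file.py billing.py
git commit -qm "big_file.py -> billing.py"

# ...and to the second new name on another branch from the same base.
git checkout -q base
git checkout -qb split-reporting
git mv big_file.py reporting.py
git commit -qm "big_file.py -> reporting.py"

# Merging the two rename branches yields a rename/rename conflict;
# resolve it by keeping both new files and dropping the old path.
git merge split-billing || true
git add -A
git commit -qm "split big_file.py into billing.py and reporting.py"

# Finally, trim each file down to the half it should actually contain.
printf 'billing code\n' > billing.py
printf 'reporting code\n' > reporting.py
git commit -qam "remove the duplicated halves from each file"
```

The merge intentionally conflicts; keeping both destination files when resolving it is what lets `git log --follow` trace each new file back to the original.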
However you decide to do your refactor, good luck!
New contributor
answered 3 hours ago
computercarguy
Hoff is a new contributor. Be nice, and check out our Code of Conduct.
stackoverflow.com/questions/1897585/…
– Robert Andrzejuk
11 hours ago
I think this also depends on the programming language used.
– Robert Andrzejuk
11 hours ago
I like "small incremental" check-ins. As long as everyone keeps their copy of the repo fresh, this practice will minimize merge conflicts for everyone.
– Matt Raffel
7 hours ago
If your refactoring can be automated, at least partially, then it is possible to repeat it on the other branches, which is still annoying but at least not impossible to merge.
– Simon Richter
6 hours ago
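The repeatable refactoring Simon Richter describes can be as simple as a deterministic split script that every branch re-runs after merging the latest big file, instead of hand-merging against the already-split tree. A minimal sketch, assuming (hypothetically) that the big file has first been annotated with marker comments naming the target file for each section:

```shell
#!/bin/sh
set -e
# Throwaway working dir; the file names and the "# >>> file:" marker
# convention are hypothetical.
work=$(mktemp -d)
cd "$work"
cat > big_file.py <<'EOF'
# >>> file: billing.py
def invoice(): pass
# >>> file: reporting.py
def report(): pass
EOF

# Deterministic splitter: each section goes to the file its marker names.
# Because the output depends only on the input, every branch that re-runs
# this after a merge gets the identical split.
awk '/^# >>> file: /{out=$4; next} out{print > out}' big_file.py
```

Each feature branch would merge the current big file as usual, run the script, and then delete the original, so the split itself never has to be merged by hand.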
I've found methods that were longer than those files....
– computercarguy
4 hours ago