[RFC] repack: add --filter=<filter-spec> #1206
base: master
Conversation
Force-pushed 4d032f6 to 5efc2fa
/preview
Preview email sent as pull.1206.git.git.1643247787.gitgitgadget@gmail.com
/preview
Preview email sent as pull.1206.git.git.1643247992.gitgitgadget@gmail.com
/submit
Submitted as pull.1206.git.git.1643248180.gitgitgadget@gmail.com
@@ -126,6 +126,11 @@ depth is 4095.
a larger and slower repository; see the discussion in
On the Git mailing list, Derrick Stolee wrote (reply to this):
On 1/26/2022 8:49 PM, John Cai via GitGitGadget wrote:
> From: John Cai <johncai86@gmail.com>
>
> Currently, repack does not work with partial clones. When repack is run
> on a partially cloned repository, it grabs all missing objects from
> promisor remotes. This also means that when gc is run for repository
> maintenance on a partially cloned repository, it will end up getting
> missing objects, which is not what we want.
This shouldn't be what is happening. Do you have a demonstration of
this happening? repack_promisor_objects() should be avoiding following
links outside of promisor packs so we can safely 'git gc' in a partial
clone without downloading all reachable blobs.
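The safeguard Stolee describes can be checked locally. A quick sketch (repository names and file contents here are illustrative, and it assumes a reasonably recent git in PATH):

```shell
# Make a blob-less partial clone of a local server repo, run gc, and
# confirm the filtered-out blob is still missing afterwards
# (rev-list prefixes missing objects with '?').
rm -rf server client
git init -q server
git -C server config uploadpack.allowFilter true
echo content >server/file1
git -C server add file1
git -C server -c user.name=t -c user.email=t@example.com commit -qm init
git clone -q --bare --no-local --filter=blob:none server client
git -C client gc --quiet
oid=$(git -C server rev-parse :file1)
git -C client rev-list --objects --all --missing=print | grep "^?$oid"
```

If gc had followed links outside the promisor packs and fetched the blob, the final grep would find no '?'-prefixed entry and fail.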
> In order to make repack work with partial clone, teach repack a new
> option --filter, which takes a <filter-spec> argument. repack will skip
> any objects that are matched by <filter-spec> similar to how the clone
> command will skip fetching certain objects.
This is a bit misleading, since 'git clone' doesn't "skip fetching",
but instead requests a filter and the server can choose to write a
pack-file using that filter. I'm not sure if it's worth how pedantic
I'm being here.
The thing that I find confusing here is that you are adding an option
that could be run on a _full_ repository. If I have a set of packs
and none of them are promisor (I have every reachable object), then
what is the end result after 'git repack -adf --filter=blob:none'?
Those existing pack-files shouldn't be deleted because they have
objects that are not in the newly-created pack-file.
I'd like to see some additional clarity on this before continuing
to review this series.
> The final goal of this feature, is to be able to store objects on a
> server other than the regular git server itself.
>
> There are several scripts added so we can test the process of using a
> remote helper to upload blobs to an http server:
>
> - t/lib-httpd/list.sh lists blobs uploaded to the http server.
> - t/lib-httpd/upload.sh uploads blobs to the http server.
> - t/t0410/git-remote-testhttpgit a remote helper that can access blobs
> from an http server. Copied over from t/t5801/git-remote-testhttpgit
> and modified to upload blobs to an http server.
> - t/t0410/lib-http-promisor.sh convenience functions for uploading
> blobs
I think these changes to the tests should be extracted to a new
patch where this can be discussed in more detail. I didn't look
too closely at them because I want to focus on whether this
--filter option is a good direction for 'git repack'.
> OPT_STRING_LIST(0, "keep-pack", &keep_pack_list, N_("name"),
> @@ -819,6 +824,11 @@ int cmd_repack(int argc, const char **argv, const char *prefix)
> if (line.len != the_hash_algo->hexsz)
> die(_("repack: Expecting full hex object ID lines only from pack-objects."));
> string_list_append(&names, line.buf);
> + if (po_args.filter) {
> + char *promisor_name = mkpathdup("%s-%s.promisor", packtmp,
> + line.buf);
> + write_promisor_file(promisor_name, NULL, 0);
This code is duplicated in repack_promisor_objects(), so it would be
good to extract that logic into a helper method called by both places.
> + }
> }
> fclose(out);
> ret = finish_command(&cmd);
> diff --git a/t/t7700-repack.sh b/t/t7700-repack.sh
> index e489869dd94..78cc1858cb6 100755
> --- a/t/t7700-repack.sh
> +++ b/t/t7700-repack.sh
> @@ -237,6 +237,26 @@ test_expect_success 'auto-bitmaps do not complain if unavailable' '
> test_must_be_empty actual
> '
>
> +test_expect_success 'repack with filter does not fetch from remote' '
> + rm -rf server client &&
> + test_create_repo server &&
> + git -C server config uploadpack.allowFilter true &&
> + git -C server config uploadpack.allowAnySHA1InWant true &&
> + echo content1 >server/file1 &&
> + git -C server add file1 &&
> + git -C server commit -m initial_commit &&
> + expected="?$(git -C server rev-parse :file1)" &&
> + git clone --bare --no-local server client &&
You could use "file://$(pwd)/server" here instead of "server".
> + git -C client config remote.origin.promisor true &&
> + git -C client -c repack.writebitmaps=false repack -a -d --filter=blob:none &&
This isn't testing what you want it to test, because your initial
clone doesn't use --filter=blob:none, so you already have all of
the objects in the client. You would never trigger a need for a
fetch from the remote.
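Stolee's point can be sketched as a standalone illustration (repo names are made up): after a plain --no-local clone with no --filter, the client already has every object, so nothing is missing and no fetch could be triggered.

```shell
# Plain clone, no --filter: 'rev-list --missing=print' reports no
# '?'-prefixed (missing) entries at all.
rm -rf server2 client2
git init -q server2
echo content1 >server2/file1
git -C server2 add file1
git -C server2 -c user.name=t -c user.email=t@example.com commit -qm initial
git clone -q --bare --no-local server2 client2
git -C client2 rev-list --objects --all --missing=print >objects2
! grep '^?' objects2
```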
> + git -C client rev-list --objects --all --missing=print >objects &&
> + grep "$expected" objects &&
> + git -C client repack -a -d &&
> + expected="$(git -C server rev-parse :file1)" &&
This is signalling to me that you are looking for a remote fetch
now that you are repacking everything, and that can only happen
if you deleted objects from the client during your first repack.
That seems incorrect.
> + git -C client rev-list --objects --all --missing=print >objects &&
> + grep "$expected" objects
> +'
Based on my current understanding, this patch seems unnecessary (repacks
should already be doing the right thing when in the presence of a partial
clone) and incorrect (we should not delete existing reachable objects
when repacking with a filter).
I look forward to hearing more about your intended use of this feature so
we can land on a better way to solve the problems you are having.
Thanks,
-Stolee
On the Git mailing list, John Cai wrote (reply to this):
Hi Stolee,
Thanks for taking the time to review this patch! I added some points of clarification
down below.
On 27 Jan 2022, at 10:03, Derrick Stolee wrote:
> On 1/26/2022 8:49 PM, John Cai via GitGitGadget wrote:
>> From: John Cai <johncai86@gmail.com>
>>
>> Currently, repack does not work with partial clones. When repack is run
>> on a partially cloned repository, it grabs all missing objects from
>> promisor remotes. This also means that when gc is run for repository
>> maintenance on a partially cloned repository, it will end up getting
>> missing objects, which is not what we want.
>
> This shouldn't be what is happening. Do you have a demonstration of
> this happening? repack_promisor_objects() should be avoiding following
> links outside of promisor packs so we can safely 'git gc' in a partial
> clone without downloading all reachable blobs.
You're right, sorry I was mistaken about this detail of how partial clones work.
>
>> In order to make repack work with partial clone, teach repack a new
>> option --filter, which takes a <filter-spec> argument. repack will skip
>> any objects that are matched by <filter-spec> similar to how the clone
>> command will skip fetching certain objects.
>
> This is a bit misleading, since 'git clone' doesn't "skip fetching",
> but instead requests a filter and the server can choose to write a
> pack-file using that filter. I'm not sure if it's worth how pedantic
> I'm being here.
Thanks for the more precise description of the mechanics of partial clone.
I'll improve the wording in the next version of this patch series.
>
> The thing that I find confusing here is that you are adding an option
> that could be run on a _full_ repository. If I have a set of packs
> and none of them are promisor (I have every reachable object), then
> what is the end result after 'git repack -adf --filter=blob:none'?
> Those existing pack-files shouldn't be deleted because they have
> objects that are not in the newly-created pack-file.
>
> I'd like to see some additional clarity on this before continuing
> to review this series.
Apologies for the lack of clarity. Indeed, I can see why this is the most important
detail of this patch to provide enough context on, as it involves deleting
objects from a full repository as you said.
To back up a little, the goal is to be able to offload large
blobs to a separate http server. Christian Couder has a demo [1] that shows
this in detail.
If we had the following:
A. an http server to use as a generalized object store
B. a server update hook that uploads large blobs to 1.
C. a git server
D. a regular job that runs `git repack --filter` to remove large
blobs from C.
Clients would need to configure both C) and A) as promisor remotes to
be able to get everything. When they push new large blobs, they can
still push them to C), as B) will upload them to A), and D) will
regularly remove those large blobs from C).
This way with a little bit of client and server configuration, we can have
a native way to support offloading large files without git LFS.
It would be more flexible as you can easily tweak which blobs are considered large
files by tweaking B) and D).
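The client-side half of this setup might look like the following sketch. The remote name "objstore", both URLs, and the filter spec are placeholders, and reaching A. would additionally require the remote helper from this series:

```shell
# Configure both the git server (C) and the http object store (A) as
# promisor remotes, so missing objects can be fetched from either.
rm -rf demo
git init -q demo
git -C demo remote add origin https://git.example.com/repo.git
git -C demo remote add objstore http://objects.example.com/repo
git -C demo config remote.origin.promisor true
git -C demo config remote.origin.partialclonefilter blob:limit=1m
git -C demo config remote.objstore.promisor true
git -C demo config remote.objstore.promisor   # prints "true"
```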
>
>> The final goal of this feature, is to be able to store objects on a
>> server other than the regular git server itself.
>>
>> There are several scripts added so we can test the process of using a
>> remote helper to upload blobs to an http server:
>>
>> - t/lib-httpd/list.sh lists blobs uploaded to the http server.
>> - t/lib-httpd/upload.sh uploads blobs to the http server.
>> - t/t0410/git-remote-testhttpgit a remote helper that can access blobs
>> from an http server. Copied over from t/t5801/git-remote-testhttpgit
>> and modified to upload blobs to an http server.
>> - t/t0410/lib-http-promisor.sh convenience functions for uploading
>> blobs
>
> I think these changes to the tests should be extracted to a new
> patch where this can be discussed in more detail. I didn't look
> too closely at them because I want to focus on whether this
> --filter option is a good direction for 'git repack'.
>
>> OPT_STRING_LIST(0, "keep-pack", &keep_pack_list, N_("name"),
>> @@ -819,6 +824,11 @@ int cmd_repack(int argc, const char **argv, const char *prefix)
>> if (line.len != the_hash_algo->hexsz)
>> die(_("repack: Expecting full hex object ID lines only from pack-objects."));
>> string_list_append(&names, line.buf);
>> + if (po_args.filter) {
>> + char *promisor_name = mkpathdup("%s-%s.promisor", packtmp,
>> + line.buf);
>> + write_promisor_file(promisor_name, NULL, 0);
>
> This code is duplicated in repack_promisor_objects(), so it would be
> good to extract that logic into a helper method called by both places.
Thanks for pointing this out. I'll incorporate this into the next version.
>
>> + }
>> }
>> fclose(out);
>> ret = finish_command(&cmd);
>
>> diff --git a/t/t7700-repack.sh b/t/t7700-repack.sh
>> index e489869dd94..78cc1858cb6 100755
>> --- a/t/t7700-repack.sh
>> +++ b/t/t7700-repack.sh
>> @@ -237,6 +237,26 @@ test_expect_success 'auto-bitmaps do not complain if unavailable' '
>> test_must_be_empty actual
>> '
>>
>> +test_expect_success 'repack with filter does not fetch from remote' '
>> + rm -rf server client &&
>> + test_create_repo server &&
>> + git -C server config uploadpack.allowFilter true &&
>> + git -C server config uploadpack.allowAnySHA1InWant true &&
>> + echo content1 >server/file1 &&
>> + git -C server add file1 &&
>> + git -C server commit -m initial_commit &&
>> + expected="?$(git -C server rev-parse :file1)" &&
>> + git clone --bare --no-local server client &&
>
> You could use "file://$(pwd)/server" here instead of "server".
good point, thanks
>
>> + git -C client config remote.origin.promisor true &&
>> + git -C client -c repack.writebitmaps=false repack -a -d --filter=blob:none &&
> This isn't testing what you want it to test, because your initial
> clone doesn't use --filter=blob:none, so you already have all of
> the objects in the client. You would never trigger a need for a
> fetch from the remote.
right, so this test is actually testing that repack --filter sheds objects, to show
that it can be used as D), a regular cleanup job for git servers that use another
http server to host large blobs.
>
>> + git -C client rev-list --objects --all --missing=print >objects &&
>> + grep "$expected" objects &&
>> + git -C client repack -a -d &&
>> + expected="$(git -C server rev-parse :file1)" &&
>
> This is signalling to me that you are looking for a remote fetch
> now that you are repacking everything, and that can only happen
> if you deleted objects from the client during your first repack.
> That seems incorrect.
>
>> + git -C client rev-list --objects --all --missing=print >objects &&
>> + grep "$expected" objects
>> +'
>
> Based on my current understanding, this patch seems unnecessary (repacks
> should already be doing the right thing when in the presence of a partial
> clone) and incorrect (we should not delete existing reachable objects
> when repacking with a filter).
>
> I look forward to hearing more about your intended use of this feature so
> we can land on a better way to solve the problems you are having.
Thanks for the callouts on the big picture of this proposed change. Looking
forward to getting your thoughts on this!
>
> Thanks,
> -Stolee
On the Git mailing list, Christian Couder wrote (reply to this):
On Sat, Jan 29, 2022 at 8:14 PM John Cai <johncai86@gmail.com> wrote:
> Apologies for the lack of clarity. Indeed, I can see why this is the most important
> detail of this patch to provide enough context on, as it involves deleting
> objects from a full repository as you said.
>
> To back up a little, the goal is to be able to offload large
> blobs to a separate http server. Christian Couder has a demo [1] that shows
> this in detail.
You might have forgotten to provide a link for [1], also I am not sure
if you wanted to link to the repo:
https://gitlab.com/chriscool/partial-clone-demo/
or the demo itself in the repo:
https://gitlab.com/chriscool/partial-clone-demo/-/blob/master/http-promisor/server_demo.txt
> If we had the following:
> A. an http server to use as a generalized object store
> B. a server update hook that uploads large blobs to 1.
s/1./A./
> C. a git server
> D. a regular job that runs `git repack --filter` to remove large
> blobs from C.
>
> Clients would need to configure both C) and A) as promisor remotes to
Maybe s/C)/C./ and s/A)/A./
Also note that configuring A. as a promisor remote requires a remote helper.
> be able to get everything. When they push new large blobs, they can
> still push them to C), as B) will upload them to A), and D) will
> regularly remove those large blobs from C).
>
> This way with a little bit of client and server configuration, we can have
> a native way to support offloading large files without git LFS.
> It would be more flexible as you can easily tweak which blobs are considered large
> files by tweaking B) and D).
Yeah, that's the idea of the demo.
Thanks for working on this!
On the Git mailing list, John Cai wrote (reply to this):
Sorry forgot to include the link to Christian's demo. included below
On 29 Jan 2022, at 14:14, John Cai wrote:
> Hi Stolee,
>
> Thanks for taking the time to review this patch! I added some points of clarification
> down below.
>
> On 27 Jan 2022, at 10:03, Derrick Stolee wrote:
>
>> On 1/26/2022 8:49 PM, John Cai via GitGitGadget wrote:
>>> From: John Cai <johncai86@gmail.com>
>>>
>>> Currently, repack does not work with partial clones. When repack is run
>>> on a partially cloned repository, it grabs all missing objects from
>>> promisor remotes. This also means that when gc is run for repository
>>> maintenance on a partially cloned repository, it will end up getting
>>> missing objects, which is not what we want.
>>
>> This shouldn't be what is happening. Do you have a demonstration of
>> this happening? repack_promisor_objects() should be avoiding following
>> links outside of promisor packs so we can safely 'git gc' in a partial
>> clone without downloading all reachable blobs.
>
> You're right, sorry I was mistaken about this detail of how partial clones work.
>>
>>> In order to make repack work with partial clone, teach repack a new
>>> option --filter, which takes a <filter-spec> argument. repack will skip
>>> any objects that are matched by <filter-spec> similar to how the clone
>>> command will skip fetching certain objects.
>>
>> This is a bit misleading, since 'git clone' doesn't "skip fetching",
>> but instead requests a filter and the server can choose to write a
>> pack-file using that filter. I'm not sure if it's worth how pedantic
>> I'm being here.
>
> Thanks for the more precise description of the mechanics of partial clone.
> I'll improve the wording in the next version of this patch series.
>
>>
>> The thing that I find confusing here is that you are adding an option
>> that could be run on a _full_ repository. If I have a set of packs
>> and none of them are promisor (I have every reachable object), then
>> what is the end result after 'git repack -adf --filter=blob:none'?
>> Those existing pack-files shouldn't be deleted because they have
>> objects that are not in the newly-created pack-file.
>>
>> I'd like to see some additional clarity on this before continuing
>> to review this series.
>
> Apologies for the lack of clarity. Indeed, I can see why this is the most important
> detail of this patch to provide enough context on, as it involves deleting
> objects from a full repository as you said.
>
> To back up a little, the goal is to be able to offload large
> blobs to a separate http server. Christian Couder has a demo [1] that shows
> this in detail.
>
> If we had the following:
> A. an http server to use as a generalized object store
> B. a server update hook that uploads large blobs to 1.
> C. a git server
> D. a regular job that runs `git repack --filter` to remove large
> blobs from C.
>
> Clients would need to configure both C) and A) as promisor remotes to
> be able to get everything. When they push new large blobs, they can
> still push them to C), as B) will upload them to A), and D) will
> regularly remove those large blobs from C).
>
> This way with a little bit of client and server configuration, we can have
> a native way to support offloading large files without git LFS.
> It would be more flexible as you can easily tweak which blobs are considered large
> files by tweaking B) and D).
>
[1] https://gitlab.com/chriscool/partial-clone-demo/-/blob/master/http-promisor/server_demo.txt
>>
>>> The final goal of this feature, is to be able to store objects on a
>>> server other than the regular git server itself.
>>>
>>> There are several scripts added so we can test the process of using a
>>> remote helper to upload blobs to an http server:
>>>
>>> - t/lib-httpd/list.sh lists blobs uploaded to the http server.
>>> - t/lib-httpd/upload.sh uploads blobs to the http server.
>>> - t/t0410/git-remote-testhttpgit a remote helper that can access blobs
>>> onto from an http server. Copied over from t/t5801/git-remote-testhttpgit
>>> and modified to upload blobs to an http server.
>>> - t/t0410/lib-http-promisor.sh convenience functions for uploading
>>> blobs
>>
>> I think these changes to the tests should be extracted to a new
>> patch where this can be discussed in more detail. I didn't look
>> too closely at them because I want to focus on whether this
>> --filter option is a good direction for 'git repack'.
>>
>>> OPT_STRING_LIST(0, "keep-pack", &keep_pack_list, N_("name"),
>>> @@ -819,6 +824,11 @@ int cmd_repack(int argc, const char **argv, const char *prefix)
>>> if (line.len != the_hash_algo->hexsz)
>>> die(_("repack: Expecting full hex object ID lines only from pack-objects."));
>>> string_list_append(&names, line.buf);
>>> + if (po_args.filter) {
>>> + char *promisor_name = mkpathdup("%s-%s.promisor", packtmp,
>>> + line.buf);
>>> + write_promisor_file(promisor_name, NULL, 0);
>>
>> This code is duplicated in repack_promisor_objects(), so it would be
>> good to extract that logic into a helper method called by both places.
>
> Thanks for pointing this out. I'll incorporate this into the next version.
>>
>>> + }
>>> }
>>> fclose(out);
>>> ret = finish_command(&cmd);
>>
>>> diff --git a/t/t7700-repack.sh b/t/t7700-repack.sh
>>> index e489869dd94..78cc1858cb6 100755
>>> --- a/t/t7700-repack.sh
>>> +++ b/t/t7700-repack.sh
>>> @@ -237,6 +237,26 @@ test_expect_success 'auto-bitmaps do not complain if unavailable' '
>>> test_must_be_empty actual
>>> '
>>>
>>> +test_expect_success 'repack with filter does not fetch from remote' '
>>> + rm -rf server client &&
>>> + test_create_repo server &&
>>> + git -C server config uploadpack.allowFilter true &&
>>> + git -C server config uploadpack.allowAnySHA1InWant true &&
>>> + echo content1 >server/file1 &&
>>> + git -C server add file1 &&
>>> + git -C server commit -m initial_commit &&
>>> + expected="?$(git -C server rev-parse :file1)" &&
>>> + git clone --bare --no-local server client &&
>>
>> You could use "file:://$(pwd)/server" here instead of "server".
>
> good point, thanks
>
>>
>>> + git -C client config remote.origin.promisor true &&
>>> + git -C client -c repack.writebitmaps=false repack -a -d --filter=blob:none &&
>> This isn't testing what you want it to test, because your initial
>> clone doesn't use --filter=blob:none, so you already have all of
>> the objects in the client. You would never trigger a need for a
>> fetch from the remote.
>
> right, so this test is actually testing that repack --filter would shed objects to show
> that it can be used as D) as a regular cleanup job for git servers that utilize another
> http server to host large blobs.
>
>>
>>> + git -C client rev-list --objects --all --missing=print >objects &&
>>> + grep "$expected" objects &&
>>> + git -C client repack -a -d &&
>>> + expected="$(git -C server rev-parse :file1)" &&
>>
>> This is signalling to me that you are looking for a remote fetch
>> now that you are repacking everything, and that can only happen
>> if you deleted objects from the client during your first repack.
>> That seems incorrect.
>>
>>> + git -C client rev-list --objects --all --missing=print >objects &&
>>> + grep "$expected" objects
>>> +'
>>
>> Based on my current understanding, this patch seems unnecessary (repacks
>> should already be doing the right thing when in the presence of a partial
>> clone) and incorrect (we should not delete existing reachable objects
>> when repacking with a filter).
>>
>> I look forward to hearing more about your intended use of this feature so
>> we can land on a better way to solve the problems you are having.
>
> Thanks for the callouts on the big picture of this proposed change. Looking
> forward to getting your thoughts on this!
>>
>> Thanks,
>> -Stolee
There are issues in commit 7fec293:
Force-pushed 9dbfdea to 307deba
There are issues in commit 844dc6e:
/preview
There are issues in commit 844dc6e:
9535ce7 taught pack-objects to use filtering, but required --stdout, since a partial clone mechanism was not yet in place to handle missing objects. Since then, changes like 9e27bea and others added support to dynamically fetch objects that were missing.

Remove the --stdout requirement so that, in the next commit, repack can pass --filter to pack-objects to omit certain objects from the packfile.

Based-on-patch-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: John Cai <johncai86@gmail.com>
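The pre-existing combination this commit builds on can be sketched like so (the repo name is illustrative): pack-objects with --revs and --stdout writes a blob-filtered pack, which is the path a partial-clone server already exercises.

```shell
rm -rf fdemo
git init -q fdemo
echo data >fdemo/f
git -C fdemo add f
git -C fdemo -c user.name=t -c user.email=t@example.com commit -qm c
# --filter is accepted here because --stdout is given; the commit above
# lifts exactly this restriction for repack's benefit.
echo HEAD | git -C fdemo pack-objects --revs --stdout --filter=blob:none >filtered.pack
head -c 4 filtered.pack   # packfiles begin with the magic bytes "PACK"
```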
In order to use a separate http server as a remote to offload large blobs, imagine the following:

A. an http server to use as a generalized object store.
B. a server update hook that uploads large blobs to (A).
C. a git server.
D. a remote helper that knows how to download objects from the http server.
E. a regular job that runs `git repack --filter` to remove large blobs from (C).

Clients would need to configure both (C) and (A) as promisor remotes to be able to get everything. When they push new large blobs, they can still push them to (C), as (B) will upload them to (A), and (E) will regularly remove those large blobs from (C).

This way, with a little bit of client and server configuration, we can have a native way to support offloading large files without git LFS. It would be more flexible, as you can easily tweak which blobs are considered large files by tweaking (B) and (E).

A fuller demo can be found at http://tiny.cc/object_storage_demo

Based-on-patch-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: John Cai <johncai86@gmail.com>
When a git server (A) is being used alongside an http server (B) that stores large blobs, and a client fetches objects from both (A) and (B), we do not want (A) to fetch missing objects during object traversal.

Add a config value uploadpack.allowmissingpromisor that, when set to true, allows (A) to skip fetching missing objects.

Based-on-patch-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: John Cai <johncai86@gmail.com>
This patch adds tests covering both repack --filter in isolation (in t7700-repack.sh) and how it can be used to offload large blobs (in t0410-partial-clone.sh).

There are several scripts added so we can test the process of using a remote helper to upload blobs to an http server:

- t/lib-httpd/list.sh lists blobs uploaded to the http server.
- t/lib-httpd/upload.sh uploads blobs to the http server.
- t/t0410/git-remote-testhttpgit: a remote helper that can access blobs from an http server. Copied over from t/t5801/git-remote-testhttpgit and modified to upload blobs to an http server.
- t/t0410/lib-http-promisor.sh: convenience functions for uploading blobs.

Based-on-patch-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: John Cai <johncai86@gmail.com>
/preview
Preview email sent as pull.1206.v2.git.git.1644371988.gitgitgadget@gmail.com
/submit
Submitted as pull.1206.v2.git.git.1644372606.gitgitgadget@gmail.com
On the Git mailing list, John Cai wrote (reply to this):
/preview
Preview email sent as pull.1206.v3.git.git.1644812927.gitgitgadget@gmail.com
/preview
Preview email sent as pull.1206.v3.git.git.1644813626.gitgitgadget@gmail.com
/submit
Error: d76faa1 was already submitted
On the Git mailing list, Robert Coup wrote (reply to this):
@@ -136,6 +136,8 @@ prepare_httpd() {
install_script error-smart-http.sh
On the Git mailing list, Robert Coup wrote (reply to this):
Hi John,
Minor, but should we use oid rather than sha1 in the list.sh/upload.sh
scripts? wrt sha256 slowly coming along the pipe.
> diff --git a/t/t7700-repack.sh b/t/t7700-repack.sh
> index e489869dd94..78cc1858cb6 100755
> --- a/t/t7700-repack.sh
> +++ b/t/t7700-repack.sh
> @@ -237,6 +237,26 @@ test_expect_success 'auto-bitmaps do not complain if unavailable' '
> test_must_be_empty actual
> '
>
> +test_expect_success 'repack with filter does not fetch from remote' '
> + rm -rf server client &&
> + test_create_repo server &&
> + git -C server config uploadpack.allowFilter true &&
> + git -C server config uploadpack.allowAnySHA1InWant true &&
> + echo content1 >server/file1 &&
> + git -C server add file1 &&
> + git -C server commit -m initial_commit &&
> + expected="?$(git -C server rev-parse :file1)" &&
> + git clone --bare --no-local server client &&
> + git -C client config remote.origin.promisor true &&
> + git -C client -c repack.writebitmaps=false repack -a -d --filter=blob:none &&
Does writing bitmaps have any effect/interaction here?
> + git -C client rev-list --objects --all --missing=print >objects &&
> + grep "$expected" objects &&
This is testing the object that was cloned initially is gone after the
repack, ok.
> + git -C client repack -a -d &&
> + expected="$(git -C server rev-parse :file1)" &&
> + git -C client rev-list --objects --all --missing=print >objects &&
> + grep "$expected" objects
But I'm not sure what you're testing here? A repack wouldn't fetch
missing objects for a promisor pack anyway... and because there's no
'^' in the pattern the grep will succeed regardless of whether the
object is missing/present.
Rob :)
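Rob's grep point can be reproduced with a dummy object id (the oid below is made up):

```shell
oid=0123abcd0123abcd
printf '?%s\n' "$oid" >objects3          # rev-list marks a missing object with '?'
grep "$oid" objects3                     # unanchored: matches even though missing
grep "^$oid" objects3 || echo "missing"  # anchored: correctly reports missing
```

The unanchored grep exits 0 whether the object is present or missing, so the test cannot distinguish the two cases.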
On the Git mailing list, John Cai wrote (reply to this):
On the Git mailing list, Taylor Blau wrote (reply to this):
On the Git mailing list, John Cai wrote (reply to this):
On the Git mailing list, Taylor Blau wrote (reply to this):
On the Git mailing list, John Cai wrote (reply to this):
On the Git mailing list, Taylor Blau wrote (reply to this):
This patch series makes partial clone more useful by making it possible to run repack to remove objects from a repository, replacing them with promisor objects. This is useful when we want to offload large blobs from a git server onto another git server, or even use an http server through a remote helper.
This was originally submitted as a patch series [A] that did not have a clear explanation of the goal or motivation for the change. Based on Stolee's feedback, RFC has been added to the subject title to indicate that feedback is needed on the direction of this patch before diving into the details of the implementation.
In [B], a --refilter option on fetch and fetch-pack is being discussed where either a less restrictive or more restrictive filter can be used. In the more restrictive case, the objects that already exist will not be deleted. But, one can imagine that users might want the ability to delete objects when they apply a more restrictive filter in order to save space, and this patch series would also allow that.
There are a couple of things we need to adjust to make this possible. This patch series has three parts.
Changes since v2:
Changes since v1:
A. https://lore.kernel.org/git/a62a007f-7c61-68eb-c0e6-548dc9b6f671@gmail.com/
B. https://lore.kernel.org/git/pull.1138.git.1643730593.gitgitgadget@gmail.com/
cc: Christian Couder christian.couder@gmail.com
cc: Derrick Stolee stolee@gmail.com
cc: Robert Coup robert@coup.net.nz
cc: Robert Coup robert.coup@koordinates.com
cc: Taylor Blau me@ttaylorr.com