| Commit message | Author | Age | Files | Lines |

This adds support for passing options to the underlying command used by
the downloader. This is useful for retrieving data from servers that
check for user logins or passwords, for going through a proxy, or for
using specific options when cloning a repository via git or hg.
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
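
As a hedged sketch (variable names here are illustrative, not taken from
the commit), a backend such as the wget helper can simply splice the
caller-provided options into its command line:

    # ${dl_opts} holds whatever extra options the caller passed through
    # for this package; it is expanded unquoted on purpose, so that each
    # option becomes its own argument.
    dl_opts="--user=builder --password=s3cret"   # illustrative credentials
    wget ${verbose} ${dl_opts} -O "${output}" "${url}"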

The "--no-patch" option used by the git downloader only appeared in git
1.8.4. Systems with older git versions show an error and fall back to
the wget downloader, which is not suitable in all cases.
Signed-off-by: Enrique Ocaña González <eocanha@igalia.com>
Tested-by: Matthew Weber <matthew.weber@rockwellcollins.com>
Tested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

Now that we manually create git archives, we remove all .git-related
files. However, we also exclude empty directories.
This means that a directory which only contained a .gitignore file is
excluded from the archive.
Fixes:
http://autobuild.buildroot.org/results/2aa/2aa8954311f009988880d27b6e48af91bc74c346/
http://autobuild.buildroot.org/results/b45/b45cceea99b9860ccf1c925eeda498a823b30903/
http://autobuild.buildroot.org/results/5ae/5ae336052fd32057d9631649279e142a81f5651f/
http://autobuild.buildroot.org/results/5fc/5fc3abf4a1aea677f576e16c49253d00720a8bef/
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

Add a new package variable that packages can set to specify that they
need git submodules.
Only accept this option if the download method is git, as we cannot get
submodules via an http download (via wget).
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Aleksandar Simeonov <aleksandar@barix.com>
Tested-by: Matt Weber <matt@thewebers.ws>
Reviewed-by: Matt Weber <matt@thewebers.ws>
Tested-By: Nicolas Cavallari <nicolas.cavallari@green-communications.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

Some git repositories may be split into a master repository and
submodules. Up until now, we did not have support for submodules,
because we were using bare clones, in which it is not possible to
update the list of submodules.
Now that we are using plain clones with a working copy, we can retrieve
the submodules.
Add an option to the git download helper to trigger the update of
submodules, so that they are only fetched for those packages that
require them. Also document the existing -q option at the same time.
Submodules have a .git file at their root, which contains the path to
the real .git directory of the master repository. Since we remove that
directory, there is no point in keeping those .git files either.
Note: this is currently unused, but will be enabled by the follow-up
patch that adds the necessary parts to the pkg-generic and pkg-download
infrastructures.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Matt Weber <matt@thewebers.ws>
Reviewed-by: Matt Weber <matt@thewebers.ws>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
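
A hedged sketch of the helper-side logic described above (the ${recurse}
flag and the clean-up commands are illustrative, not lifted from the
actual script):

    # Only fetch the submodules when the package asked for them.
    if [ ${recurse} -eq 1 ]; then
        git submodule update --init --recursive
    fi
    # Drop the .git directory of the master repository, as well as the
    # .git files at the root of each submodule, before archiving.
    rm -rf .git
    find . -type f -name .git -delete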

We currently use git-archive to generate the tarball. This is all handy
and dandy, but git-archive does not support submodules. In the follow-up
patch, we're going to handle submodules, so we would no longer be able
to use git-archive.
Instead, we manually generate the archive:
- extract the tree at the requested cset,
- get the date of the commit to store in the archive,
- store only numeric owners,
- store owner and group as 0 (zero, although any arbitrary value would
  have been fine, as long as it's a constant),
- sort the files to store in the archive.
We also get rid of the .git directory, because there is no reason to
keep it in the context of Buildroot. Some people would love to keep it
so as to speed up later downloads when updating a package, but that is
not really doable. For example:
- use the current Buildroot: it needs foo-12345, so do a clone and keep
  the .git in the generated tarball;
- update Buildroot: it now needs foo-98765.
For that second clone, how could we know that we would have to first
extract foo-12345? So, the .git directory in the archive is pretty much
useless for Buildroot.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Matt Weber <matt@thewebers.ws>
Reviewed-by: Matt Weber <matt@thewebers.ws>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
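
A hedged sketch of how such a deterministic archive can be produced with
GNU tar (the find/transform expressions and file names are illustrative):

    # Date of the requested cset, reused as a constant mtime for all members.
    date="$(git log -1 --pretty=format:%cD)"
    # Sorted file list, constant numeric ownership, fixed mtime, no .git.
    find . -mindepth 1 -not -path './.git/*' -not -name .git | LC_ALL=C sort \
        | tar cf - --transform="s#^\./#${basename}/#" \
              --numeric-owner --owner=0 --group=0 --mtime="${date}" \
              --no-recursion -T - \
        | gzip -n > "${output}"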

Currently, we are using bare clones, so as to minimise the disk usage,
most notably for largeish repositories such as the one for the Linux
kernel, which can go beyond the 1GiB barrier.
However, this precludes updating (and thus using) the submodules, if
any, of the repositories, as a working copy is required to use
submodules (because we need to know the list of submodules, where to
find them, where to clone them, what cset to check out, and all of that
depends on the checked-out cset of the parent repository).
Switch to using /plain/ clones with a working copy.
This means that the extra refs used by some forges (like pull-requests
for GitHub, or changes for Gerrit...) are no longer fetched as part of
the clone, because git does not offer to do a mirror clone when there is
a working copy.
Instead, we have to fetch those special refs by hand. Since there is no
easy way to know whether the cset the user asked for is such a special
ref or not, we just always try to fetch the cset requested by the user;
if this fails, we assume that it is not a special ref (most probably, it
is a sha1) and we defer the check to the archive creation, which would
fail if the requested cset is missing anyway.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Matt Weber <matt@thewebers.ws>
Reviewed-by: Matt Weber <matt@thewebers.ws>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
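
A hedged sketch of the resulting clone-and-fetch sequence (variable
names are illustrative):

    git clone ${verbose} "${repo}" "${basename}"
    cd "${basename}"
    # Try to fetch the requested cset explicitly: this is needed when it
    # is a special ref not fetched by the clone; ignore failures, as the
    # cset is then most probably a plain sha1 already present locally.
    git fetch ${verbose} origin "${cset}:${cset}" || true
    git checkout ${verbose} "${cset}"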

The way we use it, gzip will store the current time in the header, which
leads to unreproducible archives.
Fix that by telling gzip not to store the name and date of the file it
compresses, with the -n option. Since it compresses its stdin, there was
already no filename stored; now there is no date stored either.
Note: gzip has had -n since at least 1.2.4, released in 1993, so
virtually every gzip out there nowadays has it.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
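
A small hedged illustration of the difference (names are illustrative):

    tar cf - "${basename}" | gzip -n > "${output}"   # no name, no date: reproducible
    tar cf - "${basename}" | gzip    > "${output}"   # stores the current time: not reproducible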

Allow the user to specify access methods other than :pserver:anonymous@
for CVS repositories. The method shall be defined in the <pkg>_SITE
variable.
[Thomas:
- as suggested by Yann, quote the variable expansion
- as suggested by Yann, use a regexp match
- tweak commit log]
Signed-off-by: Joao Mano <joao@datacom.ind.br>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
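
A hedged sketch of the helper-side logic (the glob match stands in for
the regexp match mentioned above; variable names are illustrative):

    # Only prepend the default anonymous pserver method when the site
    # does not already start with an access method (":ext:", ":pserver:"...).
    case "${repo}" in
    :*) ;;
    *)  repo=":pserver:anonymous@${repo}" ;;
    esac
    cvs -z3 -d"${repo}" co ${verbose} -d "${basename}" -r "${rev}" -P "${rawname}"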

In efe7f68 (support/download: generate reproducible Bazaar archives),
bzr was instructed to store files with the timestamp set to the date
they were last modified in the repository, instead of the current date,
using the --per-file-timestamps option.
However, this option was only added in bzr-2.2 (August 2010), which is
not available on older distros.
We fix that by not using --per-file-timestamps when the bzr version is
older than 2.2, in which case we just generate the archive with the
current date set on the files.
This means the archive is not reproducible, so if a hash is available
for that archive, the hash will not match, and Buildroot will try to
download from the mirror (if any) or fail (if there is no mirror).
Fixes:
http://autobuild.buildroot.org/results/51f/51f4ff5462c15a85937d411f457096224d00fdcd
http://autobuild.buildroot.org/results/b88/b8828b5fbc16128408c2f44169ac23de7e34d770
http://autobuild.buildroot.org/results/fb4/fb4b0fb2131b40c18273dbe5e51b393cb6df18ec
...
[Peter: simplify sed invocation]
Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
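
A hedged sketch of such a version gate (the sed expression and the use
of sort -V are illustrative, not the actual implementation):

    bzr_version="$(bzr --version | sed -n 's/^Bazaar (bzr) //p')"
    bzr_min_version="2.2"
    if [ "$(printf '%s\n' "${bzr_min_version}" "${bzr_version}" \
            | sort -V | head -n 1)" = "${bzr_min_version}" ]; then
        # bzr is recent enough: ask for reproducible per-file timestamps.
        timestamp_opt="--per-file-timestamps"
    fi
    # ${timestamp_opt} is then spliced into the 'bzr export' command line.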

Similarly to what was previously done for the Hg download backend,
instruct bzr to generate the archive on stdout, so that we can generate
reproducible archives.
When bzr is instructed to generate the output file by itself, it uses a
temporary file that is then fed to gzip, which in turn stores that
file's timestamp in the generated archive. When the output is generated
on stdout, there is no such timestamp, so the archive is reproducible.
Bizarrely enough, we can tell 'bazaar' not to generate a bazaar in the
archive. Cool, huh? ;-]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

When hg directly creates the output file, the hash of that file changes
every time.
However, if we just tell hg to output the archive on stdout and we do
the redirect to the file ourselves, then the archive is reproducible.
(The reason is that in the first case, a temporary file is created and
then compressed, and gzip adds the filename and its timestamp to the
gzip header, while in the second case there is no temporary file, thus
no timestamp, and thus the archive is reproducible.)
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Yegor Yefremov <yegorslists@googlemail.com>
Tested-by: Yegor Yefremov <yegorslists@googlemail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
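
A hedged sketch of the stdout-based invocation (assuming, as described
above, that hg accepts '-' as the destination to mean stdout; the option
list is illustrative):

    hg archive ${verbose} --repository "${basename}" --type tgz \
        --prefix "${basename}" --rev "${cset}" - > "${output}"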

Some users may provide custom download commands with spaces in their
arguments, like so:
    BR2_HG="hg --config foo.bar='some space-separated value'"
However, the way we currently call those commands does not account
for the extra quotes, and each space-separated part of the command is
interpreted as a separate argument.
Fix that by calling 'eval' on the commands.
Because of the eval, we must further quote our own arguments, to avoid
the eval splitting them again in case they contain spaces (even though
we do not support paths with spaces, better be clean from the outset to
avoid breakage in the future).
We change all the wrappers to use a wrapper function, even those with
a single call, so they all look alike.
Note that we do not single-quote some of the variables, like ${verbose},
because they can be empty and we really do not want to generate an
empty-string argument. That's not a problem, as ${verbose} would not
normally contain space-separated values (it could get set to something
like '-q -v', but in that case we'd still want two arguments, so that's
fine).
Reported-by: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Reviewed-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
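
A hedged sketch of the wrapper-function pattern, using the hg backend as
an example (the exact option list is illustrative):

    _hg() {
        eval ${HG} "${@}"
    }

    # Our own arguments are passed single-quoted, so that the eval inside
    # _hg expands them exactly once, even if ${HG} contains quoted spaces.
    _hg clone ${verbose} --noupdate '"${repo}"' '"${basename}"'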

When specifying BR2_LINUX_KERNEL_CUSTOM_REPO_VERSION, a user may want to
specify the SHA of a reference other than a branch or tag.
For instance, Gerrit stores the patchsets under refs/changes/xx/xxx, and
GitHub stores the pull requests under refs/pull/xxx/head.
When cloning a repository with --bare, you don't fetch these references.
This patch uses --mirror for a full clone, in order to give the user
access to all references of the Git repository.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: "Maxime Hadjinlian" <maxime.hadjinlian@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
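
A hedged illustration of the difference (the ref below is hypothetical):

    # --mirror maps *all* remote refs locally, so forge-specific refs such
    # as Gerrit changes or GitHub pull requests become resolvable:
    git clone --mirror "${repo}" "${basename}"
    git --git-dir="${basename}" rev-parse --verify refs/changes/42/4242/1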

When the version of a package is a Mercurial tag, the download fails
with:
    abort: unknown revision 'X.Y.Z'!
This is because, in Mercurial, tags are commits like any other, and
when we clone, we actively request a tag. But then, the server
"dereferences" that tag and sends us the revision pointed to by that
tag. Of course, since the tag is added by a commit that comes after the
revision we got, we do not have the revision adding the tag.
So, we just have to download the full repository to be sure we have
the tags in our local clone.
Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

In 50c8b7e (support/download: support -q in all download backends), the
backends were made to respect the quietness of the main Makefile, when
-s is passed on the 'make' command line. In doing so, they were all made
to be verbose by default.
However, the verbosity of some of the tools, like scp, is very high, and
is in fact intended for debugging purposes.
Drop being verbose by default and just use whatever each tool deems
normal output; only honour the quietness requested by the user.
Reported-by: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

This is useful when a tag is not available.
Also fix support for Fedora, where the command "cvs -r :<version>"
doesn't work.
Signed-off-by: Fabio Porcedda <fabio.porcedda@gmail.com>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

Following commit 95a572282e87 (pkg-infra: move the git download helper to a
script, 2014-07-02), move the comment describing the shallow clone trickery as
well. Merge this comment with the existing helper comment that was added in
7e40a1103a91 (support/download: convert git to use the wrapper, 2014-08-03).
Rename $($(PKG)_DL_VERSION) to ${cset} to match the helper code context.
Cc: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

Now that custom external toolchains that are to be downloaded properly
instruct the infrastructure not to fail on a missing hash, restore the
mandatory hash check for everything else.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Acked-by: Gustavo Zacarias <gustavo@zacarias.com.ar>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

In very constrained cases, it may be necessary not to fail if a hash is
missing. This is notably the case for custom external toolchains to be
downloaded, because we do have a .hash file for external toolchains,
but we obviously can not have hashes for all existing custom toolchains
(heh, "custom"!).
So, add a way to avoid failing in that case.
From the Makefile, we export the list of files for which not to check
the hash. Then, from the check-hash script, if no check was done, and
the file we were trying to match is in this exclusion list, we just exit
without error.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Acked-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Tested-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
changes v6 -> v7:
- /beautify/ the pattern in the case clause
Changed v5 -> v6: (Arnout)
- fix the pattern in the case clause
Changes v4 -> v5:
- micro-optimisation, use case-esac instead of a for-loop (Arnout)
- typoes (Arnout)
Changes v3 -> v4:
- drop the magic value, use a list of excluded files (Arnout)
Changes v1 -> v2:
- fix typoes in commit log
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Tested-by: Gustavo Zacarias <gustavo@zacarias.com.ar>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
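
A hedged sketch of the exclusion check in the check-hash script (the
environment variable name is an assumption):

    # BR_NO_CHECK_HASH_FOR holds a space-separated list of base names for
    # which a missing hash is not an error.
    case " ${BR_NO_CHECK_HASH_FOR} " in
    *" ${base} "*)
        # File explicitly excluded from hash checking.
        exit 0
        ;;
    esac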

[Thomas: fix issues noticed by Arnout:
- Rewrap the linux/Config.in paragraph
- Revert the "is a toolchain dependency" -> "has a toolchain
dependency" change from pkg-generic.mk, as the original was
correct.]
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

When downloading from a repository, we explicitly pass no hash file,
because we can't check hashes in that case.
However, we're still printing a message that there is a missing hash
file.
Besides being a bit annoying (since we can't do anything about it), it
may also be wrong, especially for packages for which we support multiple
versions, with some being downloaded via a git clone and others as
tarballs.
Just don't print the warning when the path to the hash file is empty.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

When the user selects a custom toolchain to be downloaded, there's no
hash for that toolchain, so the download fails, now that hashes are
mandatory.
Fix that by simply exiting as if there was no error, until we have a
better fix...
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

Instead of silently accepting a missing .hash file, print a warning.
This can be grepped from a build log, to find packages that still have
no hash, with the long-term goal of adding hashes for all packages.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

At the time we introduced hashes, we did not want to be too harsh in the
beginning, and wanted to give people some time to adapt to and accept
the hashes. So we so far only whined^Wwarned about a missing hash (when
the .hash file exists).
Some time has passed now, and people still forget to update hashes when
bumping packages.
Let's make that warning a little bit more annoying...
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

When the hash check reports no hash for a file, and this is treated as
an error (now: because BR2_ENFORCE_CHECK_HASH is set; later: because
that will be the new and only behaviour), exit promptly with an error.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

Return different exit codes depending on the error that occurred:
  0: no error (hash file missing, or all hashes match)
  1: unknown option
  2: hash file exists, but at least one hash is in error
  3: hash file exists, but there is no hash for the file to check
  4: hash file exists, but at least one hash type is unknown
This will be used in a later patch to decide whether the downloaded file
should be kept or removed.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Samuel Martin <s.martin49@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
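
A hedged sketch of how a caller could act on these exit codes (the
script path and variable names are illustrative):

    if support/download/check-hash "${h_file}" "${tmp_dl}" "${base}"; then
        :   # all good: keep the downloaded file
    elif [ ${?} -eq 2 ]; then
        # At least one hash mismatched: the download is corrupted, drop it.
        rm -f "${tmp_dl}"
        exit 1
    fi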

Add support to explicitly state that an archive has no hash.
This can be used for archives downloaded from a repository, like a
git clone or a subversion checkout, or using the github helper.
This will come in handy when we eventually make hashes mandatory as
soon as a .hash file exists: for some packages, like gcc, some versions
are downloaded as archives from upstream, while other versions may come
from a GitHub repository (via the github helper).
In this case, a .hash file would exist, that contains hashes for the
downloaded tarballs, but archives downloaded from the repository would
not have a hash (since it is currently not possible to make such
archives reproducible). So, we need a way to explicitly state that
there is, on purpose, no hash for those archives.
So, add 'none' as a new type of hash.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
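
A hedged example of what such a .hash entry could look like (the file
name is illustrative; the hash value field is a placeholder, since no
actual hash exists for the 'none' type):

    # No hash possible for an archive generated from a git clone
    none  xxx  libfoo-0123456789abcdef.tar.gz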

Currently, specifying a hash file for our download wrapper is mandatory.
However, when we download a git, svn, bzr, hg or cvs tree, there is by
design no hash to check the download against.
Since hash checking is about to become mandatory whenever a hash file
exists, this would break those downloads from a repository.
So, make specifying a hash file optional when calling our download
wrapper, and bail out early from the check-hash script if no hash file
is specified.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

We expressly call printf in the git helper, calls which were not
addressed in the previous silent-build patchset.
Just redirect stdout to oblivion when being silent.
Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Fabio Porcedda <fabio.porcedda@gmail.com>
Tested-by: Fabio Porcedda <fabio.porcedda@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

If doing a silent build (make -s -> QUIET=-q), silence all downloads by
passing the -q flag down to the backends as well as to check-hash.
Change a printf to use the trace functions.
Signed-off-by: Fabio Porcedda <fabio.porcedda@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

Add an option flag to all backends, as well as to the check-hash script,
so as to silence the download helpers when the user wants a silent
build.
Additionally, make the default be verbose.
Inspired by Fabio's patch on git/svn.
[Thomas: fix a typo "Environemnt" -> "Environment"]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Fabio Porcedda <fabio.porcedda@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

In some cases, upstream just updates its releases in-place, without
renaming them. When such a package is updated in Buildroot, a new hash
matching the new upstream release is included in the corresponding .hash
file.
As a consequence, users who previously downloaded that package's tarball
with an older version of Buildroot will get stuck with an old archive
for that package, and after updating their Buildroot copy will be
greeted with a failed download, due to the local file not matching the
new hashes.
Also, an upstream will sometimes serve us HTML garbage instead of the
actual tarball we requested, like SourceForge does from time to time for
as-yet unknown reasons.
So, to avoid this situation, check the hashes prior to doing the
download. If the hashes match, consider the locally cached file genuine,
and do not download it again. However, if the locally cached file does
not match the known hashes we have for it, it is promptly removed, and a
download is re-attempted.
Note: this does not add any overhead compared to the previous situation,
because we were already checking the hashes of locally cached files. It
just changes the order in which we do the checks. For the record, here
is the overhead of hashing a 231MiB file
(qt-everywhere-opensource-src-4.8.6.tar.gz) on a core-i5 @2.5GHz:
             cache-cold   cache-hot
    sha1     1.914s       0.762s
    sha256   2.109s       1.270s
But again, this overhead already existed before this patch.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
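
A hedged sketch of the check-before-download logic (paths and variable
names are illustrative):

    if [ -e "${output}" ]; then
        if support/download/check-hash ${quiet} "${h_file}" "${output}" "${base}"; then
            exit 0          # cached file is genuine: nothing to download
        fi
        rm -f "${output}"   # cached file does not match: re-download it
    fi
    # ...then proceed with the download, and check the hashes again on the result.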

Instead of repeating the check in our download rules, delegate the check
of the hashes to the download wrapper.
This needs three different changes:
- add a new argument to the download wrapper, that is the full path to
the hash file; if the hash file does not exist, that does not change
the current behaviour, as the existence of the hash file is checked
for in the check-hash script;
- add a third argument to the check-hash script, to be the basename of
the file to check; this is required because we no longer check the
final file with the final filename, but an intermediate file with a
temporary filename;
- do the actual call to the check-hash script from within the download
wrapper.
This further paves the way to doing pre-download checks of the hashes
for the locally cached files.
Note: this patch removes the check for hashes for already downloaded
files, since the wrapper script exits early. The behaviour to check
locally cached files will be restored and enhanced in the following
patch.
[Thomas: fix minor typo in comment.]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

Instead of repeating the same test again and again in all our download
rules, just delegate the check for an already downloaded file to the
download wrapper.
This clears the path for doing the hash checks on a cached file
before the download.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

Instead of relying on argument ordering, use actual options in the
download wrapper.
Download backends (bzr, cp, hg...) are left as-is, because it does not
make sense to complicate them: they are very trivial shell scripts, and
adding option parsing to them would be overkill.
This commit also renames the script to dl-wrapper, so that it looks
better in the traces and is not confused with another wrapper.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
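
A hedged sketch of such option parsing in the dl-wrapper (option letters
and variable names are illustrative, not necessarily the ones the script
uses):

    while getopts ":ho:H:q" OPT; do
        case "${OPT}" in
        o)  output="${OPTARG}";;
        H)  h_file="${OPTARG}";;
        q)  quiet="-q";;
        h)  printf 'usage: dl-wrapper [-q] [-H hash-file] -o output URL...\n'; exit 0;;
        :)  printf "option '%s' expects an argument\n" "${OPTARG}" >&2; exit 1;;
        \?) printf "unknown option '%s'\n" "${OPTARG}" >&2; exit 1;;
        esac
    done
    shift $((OPTIND - 1))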

The arguments are correctly used, but incorrectly documented.
Invert the comments to match the actual usage.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

Not all systems have /bin/bash (e.g. NixOS[1] doesn't). Buildroot
already uses /usr/bin/env shebangs for other interpreters (perl,
python), so why not bash?
This changes only the shebangs used by Buildroot itself; stuff installed
to the target system is left unchanged.
With this applied I can run Buildroot unmodified on NixOS.
[1]: http://nixos.org/
Signed-off-by: Bjørn Forsman <bjorn.forsman@gmail.com>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

The git helper uses gzip to compress the intermediate tarball. But gzip
removes the source file and creates a new file, named by appending .gz
to the original file name.
Thus, we end up with output.gz, while the download wrapper expects just
output, and thus believes the download failed.
Fix that by storing the tar from git in a temporary file, then piping
this file to gzip's stdin and redirecting gzip's stdout to the output
file.
Reported-by: Graham Newton <gnewton@peavey-eu.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Peter Seiderer <ps.report@gmx.net>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
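
A hedged sketch of the fix (file names and the git-archive option list
are illustrative):

    git archive --format=tar --prefix="${basename}/" "${cset}" > "${output}.tmp"
    gzip -c < "${output}.tmp" > "${output}"
    rm -f "${output}.tmp"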

Re-add the git_done variable (lost in commit [1]).
Fixes download problem reported by Rohit Kumar [2].
[1] http://git.buildroot.net/buildroot/commit/?id=7e40a1103a919a8177f00ddca2b46b4439953511
[2] http://lists.busybox.net/pipermail/buildroot/2014-August/103733.html
Signed-off-by: Peter Seiderer <ps.report@gmx.net>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

This drastically simplifies the wget helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by running 'make busybox-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

This drastically simplifies the svn helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by running 'make open2300-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

This drastically simplifies the scp helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by setting a primary site to 'scp://localhost:/tmp' and
running 'make vim-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

This drastically simplifies the hg helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by running 'make vim-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

This drastically simplifies the git helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by running 'make fmc-fsl-sdk-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

This drastically simplifies the cvs helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

This drastically simplifies the localfiles helper, as it no longer has
to deal with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by setting BUSYBOX_SITE = file:///tmp and running 'make busybox-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

This drastically simplifies the bzr helper, as it no longer has to
deal with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

The download wrapper is responsible for ensuring the atomicity
of saving into $(BR2_DL_DIR).
It calls the appropriate download helper, telling it to save the
downloaded content to a temporary file in $(BUILD_DIR) (so it does
not clutter $(BR2_DL_DIR) with partial, failed downloads).
Then, only if the download helper was successful, the wrapper saves the
downloaded content to the final location, still under a temporary name,
and finally atomically renames it to the final output file.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
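
A hedged sketch of that atomic-save dance (paths, the backend invocation
and variable names are illustrative):

    tmp_dl="$(mktemp "${BUILD_DIR}/.download.XXXXXX")"
    if ! support/download/"${backend}" ${quiet} -o "${tmp_dl}" "${@}"; then
        rm -f "${tmp_dl}"
        exit 1
    fi
    # Copy into the download directory under a temporary name, then rename:
    # the rename is atomic because it stays within the same filesystem.
    tmp_output="$(mktemp "${output}.XXXXXX")"
    cp "${tmp_dl}" "${tmp_output}"
    mv "${tmp_output}" "${output}"
    rm -f "${tmp_dl}"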

When switching the git helper over to a shell script, a special case was
not carried over: in case the remote has the required reference, we
attempt a shallow clone, using --depth 1. However, this is not supported
when the remote is accessed over the http protocol.
Therefore, the download fails.
What happened before the conversion to a shell script was that the
helper in the Makefile would fall back to doing a full clone.
This is the behaviour that was lost in the conversion.
To avoid making the script too complex, we only attempt a full clone if
needed. We consider a full clone to be needed by default; we decide it
is unnecessary only if the remote has the needed reference *and* the
shallow clone was successful.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
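
A hedged sketch of the fallback logic (variable names and messages are
illustrative):

    git_done=0
    if [ -n "$(git ls-remote "${repo}" "${cset}" 2>/dev/null)" ]; then
        printf "Doing a shallow clone\n"
        if git clone ${verbose} --depth 1 -b "${cset}" --bare "${repo}" "${basename}"; then
            git_done=1
        else
            printf "Shallow clone failed, falling back to a full clone\n"
        fi
    fi
    if [ ${git_done} -eq 0 ]; then
        printf "Doing a full clone\n"
        git clone ${verbose} --bare "${repo}" "${basename}"
    fi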