path: root/support/download
Commit log, most recent first. Each entry shows the commit message, author, date, and diffstat (files changed, lines -removed/+added).

* dl-wrapper: Fix support for URIs containing '+' (Robert Beckett, 2018-06-04; 1 file, -1/+1)

  '+' is a valid character in a URL. The current dl-wrapper gets the URI scheme by dropping everything after the last '+' character, with the intention of finding 'git' from e.g. 'git+https://uri'. If a URI has a '+' anywhere else in it, too much of the string is used as the scheme, and the handler fails to match properly.

  An example of where this form of URI is used is when using deploy tokens in gitlab. It uses a form like https://<username>:<password>@gitlab.com/<group>/<repo.git>, where the username for a deploy token is of the form 'gitlab+deploy-token-<number>'.

  Use the '%%' operator instead, which drops everything from the first '+' to the end of the string, since the first '+' is the one that separates the scheme.

  Signed-off-by: Robert Beckett <bbeckett@netvu.org.uk>
  Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

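  A minimal shell sketch of the difference between the two expansions (the variable name is illustrative, not necessarily the one used in dl-wrapper):

      uri='git+https://gitlab+deploy-token-1:secret@gitlab.com/group/repo.git'
      echo "${uri%+*}"    # shortest suffix match: cuts at the LAST '+', keeps far too much as the scheme
      echo "${uri%%+*}"   # longest suffix match: cuts at the FIRST '+', prints 'git'
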
* download/cvs: add a 10 minute timeout (Arnout Vandecappelle (Essensium/Mind), 2018-05-31; 1 file, -2/+6)

  Apparently, CVS servers can be deadlocked, and in that case clients will retry connecting to them indefinitely. Cfr. http://autobuild.buildroot.net/results/23d/23d1034b33d0354de15de2ec4a8ccd0603e8db78/build-end.log

  Apparently, the sf.net CVS server got in such a deadlock on 2018-05-18, and almost 2 weeks later it is still not fixed. Instead of just hanging, we should fall back on BR2_SECONDARY_SITE. To achieve this, it's sufficient to add a timeout to the CVS command.

  The timeout value is of course arbitrary. However, we can assume that nobody will be putting large projects under CVS any more. So if the download takes more than 5 minutes, it's probably broken. Let's put the timeout at 10 minutes then.

  Fixes:
  http://autobuild.buildroot.net/results/db3/db33d4fa507fb3b4132423cd0a7e25a1fe6e4105
  http://autobuild.buildroot.net/results/b6d/b6d927dcc73ac8d754422577dacefff4ff918a5c
  http://autobuild.buildroot.net/results/23d/23d1034b33d0354de15de2ec4a8ccd0603e8db78
  http://autobuild.buildroot.net/results/127/1272a3aa3077e434c9805ec3034f35e6fcc330d4

  Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

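  A hedged sketch of bounding the download with coreutils timeout(1); the exact cvs arguments are an assumption, not necessarily the helper's real command line:

      # give up (and let the wrapper fall back to another site) after 10 minutes
      timeout 600 cvs -z3 -d"${repo}" co ${verbose} -d "${basename}" -r "${rev}" -P "${rawname}"
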
* support/download/file: remove set -x (Angelo Compagnucci, 2018-05-13; 1 file, -1/+0)

  Signed-off-by: Angelo Compagnucci <angelo@amarulasolutions.com>
  Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* support/download/file: fix file:// protocol handling (Angelo Compagnucci, 2018-05-13; 1 file, -1/+1)

  Since the rework of the download infrastructure, the "file" download helper gets passed a URL that starts with file://, but forgets to strip it before passing it to "cp", causing a failure as the "cp" program isn't prepared for file paths starting with file://. This is fixed by stripping the file:// at the beginning of the URL.

  In addition, the path passed to cp lacked a slash between the directory path and the filename part of the URL. This is fixed by adding a slash at the appropriate places.

  Fixes the following build failure when the "file" download method is used:

      cp: cannot stat 'file:///home/angelo/DEV/TOOLCHAINSarmv7-eabihf--glibc--bleeding-edge-2017.11-1.tar.bz2': No such file or directory

  Signed-off-by: Angelo Compagnucci <angelo@amarulasolutions.com>
  Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

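  A small sketch of the two fixes combined (variable names are illustrative):

      path="${uri#file://}"                 # strip the scheme prefix before handing the path to cp
      cp "${path}/${filename}" "${output}"  # explicit '/' between directory and file name
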
* download/git: always do full-clone (Yann E. MORIN, 2018-05-01; 1 file, -21/+3)

  We currently attempt a shallow clone, as a way to save bandwidth and download time. However, now that we keep the git tree as a cache, it may happen that we need to checkout an earlier commit, and that would not be present with a shallow clone.

  Furthermore, the shallow fetch is already really broken, and just happens to work by chance. Consider the following actions, which are basically what happens today:

      mkdir git
      git init git
      cd git
      git remote add origin https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
      git fetch origin --depth 1 v4.17-rc1
      if ! git fetch origin v4.17-rc1:v4.17-rc1 ; then
          echo "warning"
      fi
      git checkout v4.17-rc1

  The checkout succeeds just because of the git-fetch in the if-condition, which is initially there to fetch the special refs from github PRs, or gerrit reviews. That fails, but we just print a warning. If we were to ever remove support for special refs, then the checkout would fail.

  The whole purpose of the git cache is to actually save bandwidth and download time, but in the long run. For one-offs, people would preferably use a wget download (e.g. with the github macro) instead of a git clone.

  We switch to always doing a full clone. It is more correct, and pays off in the long run...

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* download/git: ensure we can checkout repos with submodule conversions (Yann E. MORIN, 2018-05-01; 1 file, -1/+28)

  When a git tree has had sub-dir <-> sub-module conversions, or has had submodules added or removed over the course of time, checking out a changeset across those conversions/additions/removals may leave untracked files, or may fail because of a conflict of type.

  So, before we checkout the new changeset, we forcibly remove the submodules. The new set of submodules, if any, will be restored later.

  Ideally, we would use a native git command: git submodule deinit --all. However, that was only introduced in git 1.8.3 which, while not being recent by modern standards, is still too old for some enterprise-grade distributions (RHEL6 only has git-1.7.1). So, instead, we just use git submodule foreach, to rm -rf the submodules directory. Again, we would ideally use 'cd $toplevel && rm -rf $path', but $toplevel was only introduced in git 1.7.2. $path has always been there. So, instead, we just cd back one level, and remove the basename of the directory.

  Eventually, we need to get rid of now-empty and untracked directories that were parents of a removed submodule. For example, ./foo/bar/ was a submodule, so ./foo/bar/ was removed, which left ./foo/ around. Yet again, recent-ish git versions would have removed it during the forced checkout, but old-ish versions (e.g. 1.7.1) do not remove it with the forced checkout. Instead we rely on the already used forced-forced clean of directories, untracked, and ignored content, to really get rid of extra stuff we are not interested in.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Tested-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

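  A hedged sketch of the "cd back one level and remove the basename" fallback described above (not necessarily the exact command used by the helper):

      # the command runs with each submodule directory as cwd; $path is set by git submodule foreach
      git submodule foreach 'cd .. && rm -rf "$(basename "${path}")"'
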
* download/git: ensure we checkout to a clean state (Yann E. MORIN, 2018-05-01; 1 file, -1/+5)

  Force the checkout to ignore and throw away any local changes. This allows recovering from a previous partial checkout (e.g. killed by the user, or by a CI job...)

  git checkout -f has been supported since the inception of git, so we can use it without any second thought.

  Also do a forced-forced clean, to really get rid of all untracked stuff.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Tested-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

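  A short sketch of the two commands this describes; the cset variable and the exact clean flags are assumptions here:

      git checkout -f -q "${cset}"   # discard any local changes from a previous partial checkout
      git clean -ffdx                # forced-forced clean: drop untracked and ignored content too
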
* download/git: try to recover from utterly-broken repositories (Yann E. MORIN, 2018-05-01; 1 file, -3/+34)

  In some cases, the repository may be in a state we can't automatically recover from, especially since we must still support oldish git versions that do not provide the necessary commands or options thereof.

  As a last-ditch recovery, delete the repository and recreate the cache from scratch.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* download/git: quickly exit when the cset does not exist (Yann E. MORIN, 2018-05-01; 1 file, -0/+7)

  Check that the given cset is indeed something we can checkout. If not, then exit early.

  This will be useful when a later commit will trap any failing git command to try to recover the repository by doing a clone from scratch: when the cset is not a commit, it does not mean the repository is broken or what, and re-cloning from scratch would not help, so no need to trash a good cache.

  Reported-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Tested-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

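  One way such an early check can be expressed; this is a hedged sketch, not necessarily the exact test the backend performs:

      # exit early if the requested cset does not resolve to a commit we could checkout
      if ! git rev-parse --quiet --verify "${cset}^{commit}" >/dev/null; then
          exit 1
      fi
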
* download/git: run all git commands in the current directory (Yann E. MORIN, 2018-05-01; 1 file, -4/+4)

  That way, we can pushd earlier, which will help with last-ditch recovery in a followup commit.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Tested-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* download/git: add warning not to use our git cache (Yann E. MORIN, 2018-05-01; 1 file, -0/+18)

  We really want the user not to use our git cache manually, or their changes (committed or not) may eventually get lost. So, add a warning file, not unlike the one we put in the target/ directory, to warn the user not to use the git tree.

  Ideally, we would have carried this file in support/misc/, but the git backend does not have access to it: the working directory is somewhere unknown, and TOPDIR is not exported in the environment. So, we have to carry it in-line in the backend instead.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Tested-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* download/git: fix transform regexp for older tar versions (Yann E. MORIN, 2018-04-20; 1 file, -2/+2)

  Older versions of tar (e.g. 1.27.1) incorrectly interpret the escaping of the regexp separator, and generate broken tarballs. For example, given the following transform expression:

      --transform="s/^\.\//squashfs-e38956b92f738518c29734399629e7cdb33072d3\//"

  the resulting paths in the generated tarball would be:

      squashfs-e38956b92f738518c29734399629e7cdb33072d3\/

  i.e. a directory whose last character is indeed a '\'.

  We fix that by using a separator which is very unlikely to occur in a filename.

  Fixes:
  http://autobuild.buildroot.org/results/742/7427f34e5c9f6d043b0fe6ad2c66cc0f31d2b24f/
  and probably a slew of others as well...

  Take this opportunity to fix indentation on the following line (leading spaces, not TABs).

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

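  A hedged illustration of using a different sed separator so the '/' in the replacement needs no escaping; the ',' separator and the tar invocation are examples, not necessarily what the backend settled on:

      tar cf package.tar --transform="s,^\./,squashfs-e38956b92f738518c29734399629e7cdb33072d3/," .
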
* download/git: be sure we do fetch something (Yann E. MORIN, 2018-04-19; 1 file, -0/+1)

  The different versions of git will behave in different ways when fetching remote references, as summarised by the table below:

                    | ancient git              | new git
      --------------+--------------------------+---------------------------
      git fetch     | fetch all refs but tags  | fetches all refs but tags
      git fetch -t  | fetches only tags        | fetch all refs and tags

  (git-fetch may still fetch tags, but only if reachable from a branch)

  So, to cover all the bases, we do a simple fetch, to be sure we have branches, followed by the existing fetch -t, to get extra tags.

  Fixes:
  http://autobuild.buildroot.net/results/0a2/0a238a7f55ea56c33b639ad03ed5796143426889/build-end.log

  Reported-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* download/git: ensure we have a sane repository (Yann E. MORIN, 2018-04-19; 1 file, -8/+10)

  There are cases where a repository might be broken, e.g. when a previous operation was killed or otherwise failed unexpectedly.

  We fix that by always initialising the repository, as suggested by Ricardo. git-init is safe on an otherwise-healthy repository:

      Running git init in an existing repository is safe. It will not
      overwrite things that are already there. [...]

  Using git-init will just ensure that we have the strictly required files to form a sane tree. Any blob that is still missing would get fetched later on.

  Reported-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Reported-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Acked-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Tested-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* download/git: ensure we always work in the expected repository (Yann E. MORIN, 2018-04-19; 1 file, -5/+8)

  git always looks up the directory hierarchy until it finds a repository. In case the git cache is broken, it may no longer be identified as a repository, and git will look higher in the directories until it finds one.

  In the default conditions, this would be Buildroot's own git tree (because DL_DIR is a subdir of Buildroot), but in some situations it may very well be any repository the user has Buildroot in, like a br2-external tree...

  So, we force git to use our git cache and never look elsewhere, as suggested by Ricardo.

  Use GIT_DIR, as it has been there for ages now, while --git-dir was only introduced later (even if most distros ship a later version), as suggested by Arnout.

  Also fix the one call to git that was not using the wrapper.

  Reported-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Acked-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Tested-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

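  A hedged sketch of the idea; the cache path and whether GIT_DIR points at the working copy's .git directory are assumptions here, not the backend's exact layout:

      # pin git to the download cache so it never walks up into an unrelated repository
      export GIT_DIR="${git_cache}/.git"
      git fetch origin    # now always operates on the cache, regardless of the current directory
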
* support/download/dl-wrapper: pass the correct -N option (Thomas Petazzoni, 2018-04-12; 1 file, -1/+1)

  ${raw_name} is never defined in dl-wrapper, and therefore the value passed to the -N option is always empty.

  This causes a problem for the 'cvs' backend, which uses the value of this option as the CVS module to be downloaded. If the name of the CVS module is omitted, all the CVS modules from that CVS repository are downloaded, which creates a tarball with a lot more contents, and the actual useful contents in a sub-directory, obviously breaking patches that should be applied, and the entire build process that follows.

  Fixes:
  http://autobuild.buildroot.net/results/fcee0e3d7eeeb373313b1794092c729b1b052348/

  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Tested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* support/download/dl-wrapper: fix passing remaining options to helper scripts (Thomas Petazzoni, 2018-04-12; 1 file, -1/+1)

  When calling the backend-specific helper scripts, the remaining options are in ${@}. However, in order to let the helper script know that those remaining options should not be parsed, but instead passed as-is to the download tool, they must be separated from the main options by "--".

  Without this, packages that use <pkg>_DL_OPTS, such as the amd-catalyst package, cannot download their tarball anymore.

  Fixes:
  http://autobuild.buildroot.net/results/de818f6e4c8e63d5e8a49c445d10c34eccc40410/

  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Tested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* download/git: more resilient in case of kill (Yann E. MORIN, 2018-04-10; 1 file, -3/+5)

  In case the git backend gets killed right after it has finished initialising the repository, but before it could add the remote, we'd end up with a repository without the 'origin' remote, so we would not be able to change its URL.

  Another case that may happen (like in the build failure, below), is that the repository was initialised with a previous version of Buildroot, before the commit e17719264b (download/git: don't require too-recent git) was applied, and that repository was still lying around...

  Fixes:
  http://autobuild.buildroot.org/results/25a/25aae054634368fadb265b97ebe4dda809deff6f/

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Peter Korsgaard <peter@korsgaard.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>

* download/git: don't require too-recent git (Yann E. MORIN, 2018-04-08; 1 file, -1/+3)

  git has supported -C only since 1.8.5, and some distros have not yet caught up after more than 4 years... Fall back to entering the directory.

  Fixes:
  http://autobuild.buildroot.net/results/35f9f7a4adc6c2cad741079e4afdf1408c94703b

  Reported-by: André Hentschel <nerv@dawncrow.de>
  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: André Hentschel <nerv@dawncrow.de>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* download/git: fix transform-name (Yann E. MORIN, 2018-04-08; 1 file, -1/+1)

  When a package contains a relative symlink whose first component is '..' (thus pointing one directory higher), for example package 'meh' contains this symlink:

      foo/bar -> ../buz

  then it would be stored as 'meh-version./buz' because of the transform-name pattern replacement.

  Fix it to only match the leading './'.

  Reported-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Peter Korsgaard <peter@korsgaard.com>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Tested-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* core/download: fix local backend (Yann E. MORIN, 2018-04-06; 1 file, -3/+6)

  Since c8ef0c03b0b (download: put most of the infra in dl-wrapper), the backend for local files is now named after the scheme, which is 'file' for a local file.

  From the same commit on, the directory part and the basename are now passed separately, to let the backend reconstruct the full path when it needs to do so, which is the case for the 'file' backend too.

  Finally, ff559846fdc1 (support/download: Add support to pass options directly to downloaders) introduced a nasty error, as it made use of "${@}" when calling its internal function.

  Revert that mess now...

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Peter Korsgaard <peter@korsgaard.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* download/git: fix basename for files inside tarballs (Ricardo Martincoski, 2018-04-05; 1 file, -1/+1)

  Commit "6d938bcb52 download: git: introduce cache feature" introduced a typo that makes the tarball contain files without the package basename:

      $ tar -tvf good-a238b1dfcd825d47d834af3c5223417c8411d90d.tar.gz
      -rw-r--r-- 0/0    8 2017-10-14 02:10 ./file

  Historically, all tarballs are generated with the basename:

      $ tar -tvf good-a238b1dfcd825d47d834af3c5223417c8411d90d.tar.gz
      -rw-r--r-- 0/0    8 2017-10-14 02:10 good-a238b1dfcd825d47d834af3c5223417c8411d90d/file

  The hashes in the tree were calculated with the basename. In the most common scenario, after the download ends the tarball is generated, the hash mismatches and the download mechanism falls back to using the tarball from http://sources.buildroot.net .

  The problem can be reproduced by forcing the download of any git package PKG that has a hash file to check against:

      $ make defconfig
      $ ./utils/config --set-str BR2_BACKUP_SITE ""
      $ BR2_DL_DIR=$(mktemp -d) make PKG-dirclean PKG-source

  Fix the typo so the basename is really added to the files, which was clearly the intention of the code.

  Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Peter Korsgaard <peter@korsgaard.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Cc: Yann E. MORIN <yann.morin.1998@free.fr>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* core/download: fix when the BR2_DL_DIR does not accept hardlinks (Yann E. MORIN, 2018-04-03; 1 file, -1/+6)

  When the BR2_DL_DIR is a mountpoint (presumably shared between various machines, or mounted from the local host when running in a VM), it is possible that it does not support hardlinks (e.g. samba, or the VMWare VMFS, etc...).

  If the hardlink fails, fall back to copying the file. As a last resort, if that also fails, eventually fall back to doing the download.

  Note: this means that the dl-wrapper is no longer atomic-safe: the code suffers from a TOCTTOU condition: the file may be created in-between the check and the moment we try to ln/cp it. Fortunately, the dl-wrapper is now run under an flock, so we're still safe. If we eventually go for a more fine-grained implementation, we'll have to be careful then.

  Reported-by: Arnout Vandecappelle <arnout@mind.be>
  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Arnout Vandecappelle <arnout@mind.be>
  Cc: Peter Korsgaard <peter@korsgaard.com>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

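  A hedged sketch of the fallback chain described above (paths and the final download step are illustrative placeholders, not the wrapper's real code):

      ln "${global_dl_dir}/${filename}" "${dest}" 2>/dev/null \
          || cp "${global_dl_dir}/${filename}" "${dest}" 2>/dev/null \
          || really_download "${dest}"    # hypothetical stand-in for the actual download call
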
* download: git: introduce cache feature (Maxime Hadjinlian, 2018-04-02; 1 file, -24/+41)

  Now we keep the git clone that we download and generate our tarball from there.

  The main goal here is that if you change the version of a package (say Linux), instead of cloning all over again, you will simply 'git fetch' the missing objects from the repo, then generate the tarball again. This should speed up the 'source' part of the build significantly.

  The drawback is that the DL_DIR will grow much larger; but time is more important than disk space nowadays.

  Signed-off-by: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* core/download: look for archives in the global download dir first (Yann E. MORIN, 2018-04-02; 1 file, -1/+9)

  For existing setups, the global download directory may have a lot of the required archives, so look in there before attempting a download.

  We simply hard-link them if found there and not in the new per-package location. Then we resume the existing procedure (which means the new hardlink will get removed if it happened to not match the hash).

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Peter Korsgaard <peter@korsgaard.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* download: add missing '-d' option (Maxime Hadjinlian, 2018-04-02; 1 file, -2/+4)

  The infrastructure needs to give the 'dl_dir' to the dl-wrapper, which in its turn needs to give it to the helper. It will only be used by the 'git' helper as of now.

  Signed-off-by: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* download: put most of the infra in dl-wrapper (Maxime Hadjinlian, 2018-04-02; 3 files, -57/+102)

  The goal here is to simplify the infrastructure by putting most of the code in the dl-wrapper, as it is easier to implement and to read.

  Most of the functions were common already; this patch finalizes it by making pkg-download.mk pass all the parameters needed to the dl-wrapper, which in turn will pass everything to every backend. The backend will then cherry-pick what it needs from these arguments and act accordingly.

  It eases the transition to the addition of a sub-directory per package in the DL_DIR, and later on, a git cache.

  [Peter: drop ';' in BR_NO_CHECK_HASH_FOR in DOWNLOAD macro and swap cd/rm -rf as mentioned by Yann, fix typos]
  Signed-off-by: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Tested-by: Luca Ceresoli <luca@lucaceresoli.net>
  Reviewed-by: Luca Ceresoli <luca@lucaceresoli.net>
  Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* core/pkg-download: change all helpers to use common options (Yann E. MORIN, 2018-04-02; 9 files, -90/+112)

  Currently, all download helpers accept the local output file, the remote locations, the changesets and so on... as positional arguments.

  This was well and nice when that was all we needed. But then we added an option to quiesce their verbosity, and that was shoehorned with a trivial getopts, still keeping all the existing positional arguments as... positional arguments.

  Adding yet more options while keeping positional arguments will not be very easy, even if we do not envision any new option in the foreseeable future (but 640K ought to be enough for everyone, remember? ;-) ).

  Change all helpers to accept a set of generic options (-q for quiet and -o for the output file) as well as helper-specific options (like -r for the repository, -c for a changeset...).

  Maxime:
  - Changed -R to -r for recurse (only for the git backend)
  - Changed -r to -u for URI (for all backends)
  - Changed -R to -c for cset (for the CVS and SVN backends)
  - Added the export of BR_BACKEND_DL_GETOPTS so all the backend wrappers can use the same options easily

  Now all the backends use the same common options.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
  Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Reviewed-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download: keep files downloaded without hash (Gaël PORTAY, 2018-04-01; 1 file, -4/+13)

  In the situation where the hash is missing from the hash file, the dl-wrapper downloads the file again and again until the developer specifies the hash to complete the download step. To avoid this situation, the freshly-downloaded file is not removed anymore after a successful download.

  After this change, the behaviour is as follows:
  - Hash file doesn't exist, or file is in BR_NO_CHECK_HASH_FOR
    => always succeeds.
  - Hash file exists, but file is not present
    => file is NOT removed, build is terminated immediately (i.e. secondary site is not tried).
  - Hash file exists, file is present, but hash mismatch
    => file is removed, secondary site is tried.
  => If all primary/secondary site downloads or hash checks fail, the build is terminated.

  Signed-off-by: Gaël PORTAY <gael.portay@savoirfairelinux.com>
  [Arnout: extend commit log]
  Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>

* support/download: svn non-interactive in BR2_SVN (Sam Voss, 2017-11-26; 1 file, -1/+1)

  Instead of overriding the _svn command and injecting --non-interactive, change the default value of BR2_SVN to include this flag, so the end user can choose not to use the flag.

  This change helps users behind corporate system rules which may not allow them to locally cache credentials and require interactive mode.

  Signed-off-by: Sam Voss <sam.voss@rockwellcollins.com>
  [Originally implemented by]
  CC: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

* support/download: force svn to be non-interactive (Yann E. MORIN, 2017-11-05; 1 file, -1/+1)

  Fixes:
  http://autobuild.buildroot.org/results/2af/2af7412846c576089f8596857ab8c81ac31c1bed/

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  Cc: André Hentschel <nerv@dawncrow.de>
  Reviewed-by: André Hentschel <nerv@dawncrow.de>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download: fix check_one_hash description (Gaël PORTAY, 2017-09-19; 1 file, -2/+3)

  Function check_one_hash takes three arguments:
  - algo hash
  - known hash
  - file to hash

  Signed-off-by: Gaël PORTAY <gael.portay@savoirfairelinux.com>
  Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>

* download/git: force gzip compression level 6 (Petr Kulhavy, 2017-09-12; 1 file, -1/+1)

  Force gzip compression level 6 when calculating the hash of a downloaded git repo, to make sure the tar->gzip->checksum chain always provides a consistent result.

  The script was relying on the default compression level, which is not necessarily consistent among different gzip versions. Level 6 is gzip's current default compression level.

  Signed-off-by: Petr Kulhavy <brain@jikos.cz>
  Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

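  A hedged sketch of such a pipeline with the compression level pinned; the tar invocation and file names are placeholders, and '-n' matches the reproducibility change listed further down this log:

      tar cf - sources/ | gzip -6 -n > package-version.tar.gz
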
* download/git: clarify why .git is removed (Ricardo Martincoski, 2017-04-20; 1 file, -1/+4)

  The removal of the .git dir before creating the tarball is no longer just an optimization. It is necessary to make the tarball reproducible. Also, without the removal, large tarballs (gigabytes) would be created for some linux trees.

  Update the comment accordingly.

  Reported-by: Baruch Siach <baruch@tkos.co.il>
  Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

* download/git: create GNU format tar files (Arnout Vandecappelle, 2017-03-21; 1 file, -1/+3)

  On most distros, the tar format defaults to GNU. However, at build time the default format may be changed to posix. Also, future versions of tar will default to posix.

  Since we want the tarballs created by the git download method to be reproducible (so their hash can be checked), we should explicitly specify the format. Since existing tarballs on sources.buildroot.org use the GNU format, and also the existing hashes in the *.hash files are based on GNU format tarballs, we use the GNU format.

  In addition, the Posix format encodes atime and ctime as well as mtime, but tar offers no option like --mtime to override them. In the GNU format, atime and ctime are only encoded if the --incremental option is given.

  Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
  Cc: Peter Seiderer <ps.report@gmx.net>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

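  A minimal illustration of pinning the format instead of relying on the build-time default (the file names and date are placeholders):

      tar cf output.tar --format=gnu --mtime="2017-03-21 00:00:00" sources/
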
* support/download: make the git wrapper more robust (Yann E. MORIN, 2016-10-25; 1 file, -2/+4)

  Currently, there are two failure paths in the wrapper:
  - if the tar fails, then the error is ignored because it is on the left-hand-side of a pipe;
  - if the find fails, then the error is ignored because it is a process substitution (and there is a pipe, too).

  While the former could be fixed with "set -o pipefail", the latter can not be fixed thusly, and we must use an intermediate file for it.

  So, fix both issues by using intermediate files, both to generate the list of files to include in the archive, and to generate the archive in a temporary tarball.

  Fixes the following build issue, where the find is failing for whatever unknown reason:
  http://autobuild.buildroot.net/results/20f/20fd76d2256eee81837f7e9bbaefbe79d7645ae9/

  And this one, where the process substitution failed, also for an unknown reason:
  http://autobuild.buildroot.org/results/018/018971ea9227b386fe25d3c264c7e80b843a9f68/

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

* support/download: Add support to pass options directly to downloaders (Romain Perier, 2016-08-23; 8 files, -9/+25)

  This adds support to pass options to the underlying command that is used by the downloader. Useful for retrieving data with server-side checking for user login or passwords, using a proxy, or using specific options for cloning a repository via git and hg.

  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

* support/download/git: Fix compatibility issue with git older than 1.8.4 (Enrique Ocaña González, 2016-07-28; 1 file, -1/+1)

  The "--no-patch" option used by the git downloader appeared in git 1.8.4. Systems with older git versions show an error and fall back to the wget downloader, which isn't suitable for all the cases.

  Signed-off-by: Enrique Ocaña González <eocanha@igalia.com>
  Tested-by: Matthew Weber <matthew.weber@rockwellcollins.com>
  Tested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download: don't over-remove files from git archives (Yann E. MORIN, 2016-07-04; 1 file, -3/+3)

  Now that we manually create git archives, we remove all .git-related files. However, we also exclude empty directories. This means that a directory which only had a .gitignore file is excluded from the archive.

  Fixes:
  http://autobuild.buildroot.org/results/2aa/2aa8954311f009988880d27b6e48af91bc74c346/
  http://autobuild.buildroot.org/results/b45/b45cceea99b9860ccf1c925eeda498a823b30903/
  http://autobuild.buildroot.org/results/5ae/5ae336052fd32057d9631649279e142a81f5651f/
  http://autobuild.buildroot.org/results/5fc/5fc3abf4a1aea677f576e16c49253d00720a8bef/

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* core/pkg-infra: download git submodules if the package wants them (Yann E. MORIN, 2016-07-02; 1 file, -3/+4)

  Add a new package variable that packages can set to specify that they need git submodules.

  Only accept this option if the download method is git, as we can not get submodules via an http download (via wget).

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Aleksandar Simeonov <aleksandar@barix.com>
  Tested-by: Matt Weber <matt@thewebers.ws>
  Reviewed-by: Matt Weber <matt@thewebers.ws>
  Tested-By: Nicolas Cavallari <nicolas.cavallari@green-communications.fr>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download/git: add support for submodules (Yann E. MORIN, 2016-07-02; 1 file, -3/+14)

  Some git repositories may be split into a master repository and submodules. Up until now, we did not have support for submodules, because we were using bare clones, in which it is not possible to update the list of submodules. Now that we are using plain clones with a working copy, we can retrieve the submodules.

  Add an option to the git download helper to kick off the update of submodules, so that they are only fetched for those packages that require them. Also document the existing -q option at the same time.

  Submodules have a .git file at their root, which contains the path to the real .git directory of the master repository. Since we remove it, there is no point in keeping those .git files either.

  Note: this is currently unused, but will be enabled with the follow-up patch that adds the necessary parts in the pkg-generic and pkg-download infrastructures.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Tested-by: Matt Weber <matt@thewebers.ws>
  Reviewed-by: Matt Weber <matt@thewebers.ws>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download/git: do not use git archive, handle it manually (Yann E. MORIN, 2016-07-02; 1 file, -3/+14)

  We currently use git-archive to generate the tarball. This is all handy and dandy, but git-archive does not support submodules. In the follow-up patch, we're going to handle submodules, so we would not be able to use git-archive.

  Instead, we manually generate the archive:
  - extract the tree to the requested cset,
  - get the date of the commit to store in the archive,
  - store only numeric owners,
  - store owner and group as 0 (zero, although any arbitrary value would have been fine, as long as it's a constant),
  - sort the files to store in the archive.

  We also get rid of the .git directory, because there is no reason to keep it in the context of Buildroot. Some people would love to keep it so as to speed up later downloads when updating a package, but that is not really doable. For example:
  - use current Buildroot
  - it would need foo-12345, so do a clone and keep the .git in the generated tarball
  - update Buildroot
  - it would need foo-98765

  For that second clone, how could we know we would have to first extract foo-12345? So, the .git in the archive is pretty much useless for Buildroot.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Tested-by: Matt Weber <matt@thewebers.ws>
  Reviewed-by: Matt Weber <matt@thewebers.ws>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

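  A hedged sketch of a manual archive built along those lines; the file-list filtering, variable names and date format are assumptions, not the helper's exact code:

      # sorted file list, constant 0/0 numeric ownership, and the commit date as mtime
      find . -not -path './.git/*' -not -name .git | sort > ../filelist.txt
      tar cf ../output.tar --numeric-owner --owner=0 --group=0 \
          --mtime="$(git log -1 --format=%ci)" \
          --no-recursion -T ../filelist.txt
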
* support/download/git: do not use bare clones (Yann E. MORIN, 2016-07-02; 1 file, -3/+18)

  Currently, we are using bare clones, so as to minimise the disk usage, most notably for largeish repositories such as the one for the Linux kernel, which can go beyond the 1GiB barrier.

  However, this precludes updating (and thus using) the submodules, if any, of the repositories, as a working copy is required to use submodules (because we need to know the list of submodules, where to find them, where to clone them, what cset to checkout, and all of those depend on the checked-out cset of the parent repository).

  Switch to using /plain/ clones with a working copy. This means that the extra refs used by some forges (like pull-requests for Github, or changes for gerrit...) are no longer fetched as part of the clone, because git does not offer to do a mirror clone when there is a working copy. Instead, we have to fetch those special refs by hand.

  Since there is no easy solution to know whether the cset the user asked for is such a special ref or not, we just try to always fetch the cset requested by the user; if this fails, we assume that this is not a special ref (most probably, it is a sha1) and we defer the check to the archive creation, which would fail if the requested cset is missing anyway.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Tested-by: Matt Weber <matt@thewebers.ws>
  Reviewed-by: Matt Weber <matt@thewebers.ws>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download: really, really make git archives reproducible (Yann E. MORIN, 2016-02-27; 1 file, -1/+1)

  The way we use it, gzip will store the current time in the header, which leads to unreproducible archives.

  Fix that by telling gzip to not store the name and date of the file it compresses, with the -n option. Since it compresses its stdin, there was already no filename stored; now there's even no date stored.

  Note: gzip has had -n since at least 1.2.4, released in 1993, so virtually every gzip out there nowadays has it.

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download: alternative access methods to CVS (Joao Mano, 2016-01-20; 1 file, -1/+7)

  Allows the user to specify access methods other than :pserver:anonymous@ on CVS repositories. This shall be defined in the <pkg>_SITE variable.

  [Thomas:
  - as suggested by Yann, quote the variable expansion
  - as suggested by Yann, use a regexp match
  - tweak commit log]

  Signed-off-by: Joao Mano <joao@datacom.ind.br>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

* support/download: support older bazaar versions (Yann E. MORIN, 2016-01-18; 1 file, -1/+17)

  In efe7f68 (support/download: generate reproducible Bazaar archives), bzr was instructed to store files with the timestamp set to the date they were last modified in the repository, instead of the current date, using the --per-file-timestamp option.

  However, this option was only added in bzr-2.2 (August 2010), which is not available on older distros.

  We fix that by not using --per-file-timestamp when the bzr version is older than 2.2, in which case we just generate the archive with the current date set on the files. This means the archive is thus non-reproducible, and if a hash is available for that archive, the hash will not match, and Buildroot will try to download from the mirror (if any) or fail (if no mirror).

  Fixes:
  http://autobuild.buildroot.org/results/51f/51f4ff5462c15a85937d411f457096224d00fdcd
  http://autobuild.buildroot.org/results/b88/b8828b5fbc16128408c2f44169ac23de7e34d770
  http://autobuild.buildroot.org/results/fb4/fb4b0fb2131b40c18273dbe5e51b393cb6df18ec
  ...

  [Peter: simplify sed invocation]
  Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download: generate reproducible Bazaar archives (Yann E. MORIN, 2016-01-03; 1 file, -1/+3)

  Similarly to what has previously been done for the Hg download backend, instruct bzr to generate the archive on stdout, so that we can generate reproducible archives.

  When instructing bzr to generate the output file by itself, it uses a temporary file that is then fed to gzip, which in turn stores the timestamp of that file in the generated archive, whereas when the output is generated on stdout, there is no timestamp, so the archive is then reproducible.

  Bizarrely enough, we can tell 'bazaar' not to generate a bazaar in the archive. Cool, uh? ;-]

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download: generate reproducible Hg archives (Yann E. MORIN, 2016-01-03; 1 file, -1/+1)

  When hg directly creates the output file, the hash for that file changes every time. However, if we just tell hg to output the archive on stdout and we do the redirect to the file ourselves, then the archive is reproducible.

  (The reason is that in the first case, a temporary file is created and then compressed, and gzip adds the filename and its timestamp to the gzip header, while in the second case, there is no temporary file, and thus no timestamp, and thus it is reproducible.)

  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Yegor Yefremov <yegorslists@googlemail.com>
  Tested-by: Yegor Yefremov <yegorslists@googlemail.com>
  Signed-off-by: Peter Korsgaard <peter@korsgaard.com>

* support/download: protect from custom commands with spaces in args (Yann E. MORIN, 2015-12-12; 8 files, -15/+63)

  Some users may provide custom download commands with spaces in their arguments, like so:

      BR2_HG="hg --config foo.bar='some space-separated value'"

  However, the way we currently call those commands does not account for the extra quotes, and each space-separated part of the command is interpreted as a separate argument. Fix that by calling 'eval' on the commands.

  Because of the eval, we must further quote our own arguments, to avoid the eval further splitting them in case there are spaces (even though we do not support paths with spaces, better be clean from the onset to avoid breakage in the future).

  We change all the wrappers to use a wrapper-function, even those with a single call, so they all look alike.

  Note that we do not single-quote some of the variables, like ${verbose}, because it can be empty and we really do not want to generate an empty-string argument. That's not a problem, as ${verbose} would not normally contain space-separated values (it could get set to something like '-q -v', but in that case we'd still want two arguments, so that's fine).

  Reported-by: Thomas De Schampheleire <patrickdepinguin@gmail.com>
  Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
  Cc: Thomas De Schampheleire <patrickdepinguin@gmail.com>
  Reviewed-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
  Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

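  A hedged sketch of the wrapper-function pattern this describes; the function name and arguments are illustrative, not the helpers' real code:

      _hg() {
          eval ${HG} "${@}"        # eval lets the quotes inside ${HG} survive word-splitting
      }
      # our own fixed arguments get an extra layer of quoting so the eval keeps them whole
      _hg clone ${verbose} "'${uri}'" "'${output}'"
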
* support/download: fetch all refs on full git clone (Vivien Didelot, 2015-11-29; 1 file, -1/+1)

  When specifying BR2_LINUX_KERNEL_CUSTOM_REPO_VERSION, a user may want to specify the SHA of a reference different from a branch or tag. For instance, Gerrit stores the patchsets under refs/changes/xx/xxx, and Github stores the pull requests under refs/pull/xxx/head.

  When cloning a repository with --bare, you don't fetch these references. This patch uses --mirror for a full clone, in order to give the user access to all references of the Git repository.

  Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
  Reviewed-by: "Maxime Hadjinlian" <maxime.hadjinlian@gmail.com>
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>

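  For reference, a minimal sketch of the difference (the URL variable and target directory are placeholders):

      git clone --bare   "${uri}" pkg.git   # branches and tags only
      git clone --mirror "${uri}" pkg.git   # all refs, including refs/changes/* and refs/pull/*
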