author    Xavier Chantry <shiningxc@gmail.com>  2008-08-03 20:03:05 +0200
committer Dan McGee <dan@archlinux.org>         2008-08-05 16:58:52 +0200
commit    4476598e4e128f4595d5383ecb51a9576a447b5b (patch)
tree      3cfeaa535b8731f67c1265512f210566bbe78139
parent    9bc799ec7b1718e6d90ceedd5e08672068e26e10 (diff)
download  pacman-4476598e4e128f4595d5383ecb51a9576a447b5b.tar.gz
          pacman-4476598e4e128f4595d5383ecb51a9576a447b5b.tar.xz
dload.c : drop the specific handling of file: url.
Before commit fc48dc31, file:/// urls forced the use of the internal downloader (libdownload), because the default XferCommand, wget, does not handle them. We tried to move away from forcing the use of libdownload, so commit fc48dc31 implemented the handling of file:/// urls manually. However, that implementation is far too basic. It does not handle the progress bar, so nothing at all appears in pacman's output when a file: repo is synchronized or when a file is downloaded from a sync repo. It is also unable to detect when the repo is already up-to-date. When libdownload was used, both were handled.

It seems better to just drop this implementation for now. All users of libdownload will get the much better file:// handling back. For the users of XferCommand it will be more problematic, but they have several options:

1) Switch to a downloader that handles file:// (wget doesn't, but curl does, for example); a sample XferCommand line is sketched below.
2) Drop the file:// repo and set up a light http or ftp server instead. Going that way also makes the repo available to the whole local network, which can be useful.
3) Switch back to libdownload, which works perfectly for many users.

Signed-off-by: Xavier Chantry <shiningxc@gmail.com>
Signed-off-by: Dan McGee <dan@archlinux.org>
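For option 1, a minimal sketch of what such a line in pacman.conf might look like when curl is used instead of wget (the exact flags are a suggestion, not part of this commit; %u and %o are pacman's placeholders for the download URL and the output file):

    XferCommand = /usr/bin/curl -L -C - -f -o %o %u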
-rw-r--r--  lib/libalpm/dload.c | 15
1 file changed, 0 insertions, 15 deletions
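As context for the removed hunk below, here is a small standalone C sketch (illustrative only, not pacman code) of the idea the deleted block relied on: a file:// URL carries an absolute path immediately after the scheme, so skipping the "file://" prefix yields the source path that was then handed to _alpm_copyfile().

    #include <stdio.h>
    #include <string.h>

    /* Return a pointer to the local path inside a file:// URL, or NULL
     * if the URL does not use the file scheme. */
    static const char *file_url_to_path(const char *url)
    {
    	const char *proto = "file://";
    	size_t len = strlen(proto);
    	if(strncmp(url, proto, len) == 0) {
    		return url + len; /* path starts at the root '/' */
    	}
    	return NULL;
    }

    int main(void)
    {
    	const char *url = "file:///srv/repo/core/core.db.tar.gz";
    	const char *path = file_url_to_path(url);
    	printf("%s -> %s\n", url, path ? path : "(not a file:// url)");
    	return 0;
    }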
diff --git a/lib/libalpm/dload.c b/lib/libalpm/dload.c
index b5f0b876..ef12646e 100644
--- a/lib/libalpm/dload.c
+++ b/lib/libalpm/dload.c
@@ -340,21 +340,6 @@ cleanup:
static int download(const char *url, const char *localpath,
time_t mtimeold, time_t *mtimenew) {
int ret;
- const char *proto = "file://";
- int len = strlen(proto);
- if(strncmp(url, proto, len) == 0) {
- /* we can simply grab an absolute path from the file:// url by starting
- * our path at the char following the proto (the root '/')
- */
- const char *sourcefile = url + len;
- const char *filename = get_filename(url);
- char *destfile = get_destfile(localpath, filename);
-
- ret = _alpm_copyfile(sourcefile, destfile);
- FREE(destfile);
- /* copyfile returns 1 on failure, we want to return -1 on failure */
- return(ret ? -1 : 0);
- }
/* We have a few things to take into account here.
* 1. If we have both internal/external available, choose based on