Use test URLs that are less likely to disappear
author    Debian Science Team <debian-science-maintainers@lists.alioth.debian.org>
          Mon, 21 Feb 2022 07:35:51 +0000 (07:35 +0000)
committer Rebecca N. Palmer <rebecca_palmer@zoho.com>
          Mon, 21 Feb 2022 07:35:51 +0000 (07:35 +0000)
Avoid 404 errors in stable when upstream reorganizes the test data
(this happened to two of these URLs in the 0.25 -> 1.0 transition).

It is _not_ necessary to update the tag version on every package release;
only do so if these tests fail because they expect moved or changed data.
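
For reference, each hunk below applies the same pattern: replace the moving
"master" branch in the raw-data URL with a fixed release tag. A minimal sketch
of that pattern, using a hypothetical helper name that does not appear in the
patch itself (the tests simply inline the resulting URL):

    # Illustration only, not part of the patch.  pinned_raw_url is a
    # hypothetical helper; the patched tests inline the pinned URL directly.
    def pinned_raw_url(path, tag="v1.0.3"):
        """Build a raw.githubusercontent.com URL pinned to a fixed pandas tag."""
        return f"https://raw.githubusercontent.com/pandas-dev/pandas/{tag}/{path}"

    # pinned_raw_url("pandas/tests/io/data/html/spam.html")
    # -> "https://raw.githubusercontent.com/pandas-dev/pandas/v1.0.3/pandas/tests/io/data/html/spam.html"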

Author: Rebecca N. Palmer <rebecca_palmer@zoho.com>
Forwarded: not-needed

Gbp-Pq: Name stable_test_urls.patch

pandas/tests/io/excel/test_readers.py
pandas/tests/io/parser/test_network.py
pandas/tests/io/test_html.py

diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index 1f081ee2dbe33213c24a6c5ef94aa91f9c087f1a..123a03b694a71fe4385af4d5c62007773128354e 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -771,7 +771,7 @@ class TestReaders:
     @tm.network
     def test_read_from_http_url(self, read_ext):
         url = (
-            "https://raw.githubusercontent.com/pandas-dev/pandas/master/"
+            "https://raw.githubusercontent.com/pandas-dev/pandas/v1.0.3/"
             "pandas/tests/io/data/excel/test1" + read_ext
         )
         url_table = pd.read_excel(url)
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index 497dd74d2a9a4752b6d0c2d578586dc71fb848cd..f2392c4263f385c0bdcd986b8453e7317b94d5cf 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -36,7 +36,7 @@ def check_compressed_urls(salaries_table, compression, extension, mode, engine):
     # test reading compressed urls with various engines and
     # extension inference
     base_url = (
-        "https://github.com/pandas-dev/pandas/raw/master/"
+        "https://github.com/pandas-dev/pandas/raw/v1.0.3/"
         "pandas/tests/io/parser/data/salaries.csv"
     )
 
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index f842e4cd58863fc3ee49a5a7d1037d01926a98ef..1de02679704ac98f331e6cde816947f7ae2d6a5d 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -162,7 +162,7 @@ class TestReadHtml:
     @tm.network
     def test_spam_url(self):
         url = (
-            "https://raw.githubusercontent.com/pandas-dev/pandas/master/"
+            "https://raw.githubusercontent.com/pandas-dev/pandas/v1.0.3/"
             "pandas/tests/io/data/html/spam.html"
         )
         df1 = self.read_html(url, match=".*Water.*")
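
A quick way to check whether the pinned URLs still resolve, without running the
full test suite, is a stand-alone script along these lines (a sketch, assuming
only the Python 3 standard library; the .xlsx extension for the excel file is
an assumption, since the test exercises several extensions via read_ext):

    # Sanity check, not part of the patch: detect 404s on the pinned URLs
    # before deciding whether the tag version needs bumping.
    import urllib.request

    PINNED_URLS = [
        "https://raw.githubusercontent.com/pandas-dev/pandas/v1.0.3/"
        "pandas/tests/io/data/excel/test1.xlsx",  # extension chosen for illustration
        "https://github.com/pandas-dev/pandas/raw/v1.0.3/"
        "pandas/tests/io/parser/data/salaries.csv",
        "https://raw.githubusercontent.com/pandas-dev/pandas/v1.0.3/"
        "pandas/tests/io/data/html/spam.html",
    ]

    for url in PINNED_URLS:
        # A HEAD request is enough to detect a missing file without downloading it.
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request) as response:
            print(response.status, url)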