status | scraper | last success | last run | error
D> | ca | 2025-10-15 04:40:07 | 2026-01-30 04:16:14 | IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca/people.py", line 54, in scrape
yield from self.scrape_people(rows, gender)
File "/app/scrapers/ca/people.py", line 67, in scrape_people
url = row.xpath('.//a[@class="ce-mip-mp-tile"]/@href')[0]
IndexError: list index out of range
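This IndexError pattern recurs throughout the report: an XPath query returns an empty list and the scraper indexes `[0]` unconditionally. A minimal guard, sketched as a hypothetical helper (not the scraper's actual code), looks like:

```python
def first_or_none(matches):
    """Return the first XPath match, or None when the selector found nothing.

    Calls like row.xpath('.//a[@class="ce-mip-mp-tile"]/@href') return a
    list; indexing [0] on an empty list raises IndexError.
    """
    return matches[0] if matches else None

# An empty result no longer raises; the caller can skip or log the row.
url = first_or_none([])               # None instead of IndexError
href = first_or_none(["/member/1"])   # "/member/1"
```

The caller can then decide whether a missing link means "skip this row" or "the page layout changed, abort loudly".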
C | ca_ab | 2026-01-30 04:24:16 | 2026-01-30 04:24:16 |
C | ca_ab_calgary | 2026-01-30 04:19:57 | 2026-01-30 04:19:57 |
C | ca_ab_edmonton | 2026-01-30 04:07:46 | 2026-01-30 04:07:46 |
D> | ca_ab_grande_prairie | 2025-10-27 04:21:42 | 2026-01-30 04:07:09 | IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_ab_grande_prairie/people.py", line 23, in scrape
image = councillor.xpath(".//img/@src")[0]
IndexError: list index out of range
D> | ca_ab_grande_prairie_county_no_1 | 2025-07-15 04:33:22 | 2026-01-30 04:22:11 | scrapelib.HTTPError: 404 while retrieving https://www.countygp.ab.ca/en/county-government/council.aspx
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_ab_grande_prairie_county_no_1/people.py", line 9, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.countygp.ab.ca/en/county-government/council.aspx
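The 404s in this report mean a council page URL has moved; scrapelib raises `HTTPError` on non-2xx responses, which aborts the run. A sketch of handling that case, using the stdlib `urllib.error.HTTPError` as a stand-in for scrapelib's exception (an assumption; the real scrapers catch scrapelib's own class):

```python
from urllib.error import HTTPError

def fetch_council_page(url, opener):
    """Fetch a council page via `opener`, returning None on a 404 so the
    run can report 'page moved' instead of crashing mid-scrape."""
    try:
        return opener(url)
    except HTTPError as exc:
        if exc.code == 404:
            return None  # page moved or removed; caller decides what to do
        raise            # other HTTP errors still abort the run

def gone(url):
    # Stand-in opener simulating the 404 the scraper hit.
    raise HTTPError(url, 404, "Not Found", hdrs=None, fp=None)

page = fetch_council_page(
    "https://www.countygp.ab.ca/en/county-government/council.aspx", gone
)
# page is None; a real scraper would flag the jurisdiction as broken.
```

Whether to swallow a 404 or fail loudly is a policy choice: failing loudly (the current behaviour) at least surfaces the moved URL in this report.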
D> | ca_ab_lethbridge | 2025-10-31 04:14:18 | 2026-01-30 04:34:54 | Value 'Blaine spent' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commissioner|Cons…
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 175, in validate
validator.validate(self.as_dict(), schema)
File "/app/.heroku/python/lib/python3.10/site-packages/validictory/validator.py", line 616, in validate
raise MultipleValidationError(self._errors)
validictory.validator.MultipleValidationError: 1 validation errors:
Value 'Blaine spent' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commissioner|Conseiller|Councillor|Deputy|Dr|M|Maire|Mayor|Miss|Mme|Mr|Mrs|Ms|Regional|Warden)\\b)(?:(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)(?:\'|-| - | ))+(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)\\Z', flags=regex.V0)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 104, in do_scrape
self.save_object(obj)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 89, in save_object
raise ve
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 85, in save_object
obj.validate()
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 177, in validate
raise ScrapeValueError('validation of {} {} failed: {}'.format(
pupa.exceptions.ScrapeValueError: validation of CanadianPerson 0680a0b2-fd95-11f0-b045-72d1a67e60f8 failed: 1 validation errors:
Value 'Blaine spent' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commissioner|Conseiller|Councillor|Deputy|Dr|M|Maire|Mayor|Miss|Mme|Mr|Mrs|Ms|Regional|Warden)\\b)(?:(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)(?:\'|-| - | ))+(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)\\Z', flags=regex.V0)'
C | ca_ab_strathcona_county | 2026-01-30 04:35:00 | 2026-01-30 04:35:00 |
D> | ca_ab_wood_buffalo | 2025-09-30 04:10:18 | 2026-01-30 04:24:42 | scrapelib.HTTPError: 404 while retrieving https://www.rmwb.ca/en/mayor-council-and-administration/councillors.aspx
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_ab_wood_buffalo/people.py", line 27, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.rmwb.ca/en/mayor-council-and-administration/councillors.aspx
C | ca_bc | 2026-01-30 04:16:42 | 2026-01-30 04:16:43 |
C | ca_bc_abbotsford | 2026-01-30 04:33:59 | 2026-01-30 04:33:59 |
C | ca_bc_burnaby | 2026-01-30 04:26:46 | 2026-01-30 04:26:46 |
C | ca_bc_coquitlam | 2026-01-30 04:25:21 | 2026-01-30 04:25:21 |
C | ca_bc_kelowna | 2026-01-30 04:38:25 | 2026-01-30 04:38:25 |
C | ca_bc_langley | 2026-01-30 04:19:37 | 2026-01-30 04:19:37 |
C | ca_bc_langley_city | 2026-01-30 04:16:07 | 2026-01-30 04:16:07 |
C | ca_bc_new_westminster | 2026-01-30 04:07:04 | 2026-01-30 04:07:04 |
C | ca_bc_richmond | 2026-01-30 04:21:44 | 2026-01-30 04:21:44 |
C | ca_bc_saanich | 2026-01-30 04:26:57 | 2026-01-30 04:26:57 |
C | ca_bc_surrey | 2026-01-30 04:22:30 | 2026-01-30 04:22:30 |
C | ca_bc_vancouver | 2026-01-30 04:19:43 | 2026-01-30 04:19:43 |
C | ca_bc_victoria | 2026-01-30 04:34:26 | 2026-01-30 04:34:26 |
04:29:20 ERROR ca_candidates.people:
04:29:23 ERROR ca_candidates.people:
04:29:23 ERROR ca_candidates.people:
04:29:23 ERROR ca_candidates.people:
04:29:23 ERROR ca_candidates.people:
04:29:23 ERROR ca_candidates.people:
04:29:24 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/2 (https://www.greenparty.ca/en/candidates/page/2)
04:29:25 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/3 (https://www.greenparty.ca/en/candidates/page/3)
04:29:26 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/4 (https://www.greenparty.ca/en/candidates/page/4)
04:29:27 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/5 (https://www.greenparty.ca/en/candidates/page/5)
04:29:28 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/6 (https://www.greenparty.ca/en/candidates/page/6)
04:29:29 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/7 (https://www.greenparty.ca/en/candidates/page/7)
04:29:30 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/8 (https://www.greenparty.ca/en/candidates/page/8)
04:29:31 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/9 (https://www.greenparty.ca/en/candidates/page/9)
04:29:32 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/10 (https://www.greenparty.ca/en/candidates/page/10)
04:29:33 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/11 (https://www.greenparty.ca/en/candidates/page/11)
04:29:34 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/12 (https://www.greenparty.ca/en/candidates/page/12)
04:29:35 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/13 (https://www.greenparty.ca/en/candidates/page/13)
04:29:36 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/14 (https://www.greenparty.ca/en/candidates/page/14)
04:29:37 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/15 (https://www.greenparty.ca/en/candidates/page/15)
04:29:38 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/16 (https://www.greenparty.ca/en/candidates/page/16)
04:29:39 WARNING ca_candidates.people: 404 while retrieving https://www.greenparty.ca/en/candidates/page/17 (https://www.greenparty.ca/en/candidates/page/17)
D> | ca_candidates | 2025-04-29 04:39:17 | 2026-01-30 04:29:41 | scrapelib.HTTPError: 404 while retrieving https://www.conservative.ca/candidates
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_candidates/people.py", line 162, in scrape
for p in getattr(self, f"scrape_{party}")():
File "/app/scrapers/ca_candidates/people.py", line 425, in scrape_conservative
page = self.lxmlize(start_url)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.conservative.ca/candidates
C | ca_mb | 2026-01-30 04:17:48 | 2026-01-30 04:17:48 |
D> | ca_mb_winnipeg | 2025-10-30 04:29:23 | 2026-01-30 04:33:04 | AssertionError: No councillors found on website
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_mb_winnipeg/people.py", line 18, in scrape
assert len(councillors), "No councillors found on website"
AssertionError: No councillors found on website
C | ca_nb | 2026-01-30 04:31:03 | 2026-01-30 04:31:04 |
C | ca_nb_fredericton | 2026-01-30 04:34:49 | 2026-01-30 04:34:49 |
C | ca_nb_moncton | 2026-01-30 04:07:51 | 2026-01-30 04:07:52 |
C | ca_nb_saint_john | 2026-01-30 04:18:00 | 2026-01-30 04:18:00 |
D> | ca_nl | 2025-09-15 04:33:31 | 2026-01-30 04:27:16 | Value 'Loyola O"Driscoll' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commissioner…
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 175, in validate
validator.validate(self.as_dict(), schema)
File "/app/.heroku/python/lib/python3.10/site-packages/validictory/validator.py", line 616, in validate
raise MultipleValidationError(self._errors)
validictory.validator.MultipleValidationError: 1 validation errors:
Value 'Loyola O"Driscoll' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commissioner|Conseiller|Councillor|Deputy|Dr|M|Maire|Mayor|Miss|Mme|Mr|Mrs|Ms|Regional|Warden)\\b)(?:(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)(?:\'|-| - | ))+(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)\\Z', flags=regex.V0)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 104, in do_scrape
self.save_object(obj)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 89, in save_object
raise ve
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 85, in save_object
obj.validate()
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 177, in validate
raise ScrapeValueError('validation of {} {} failed: {}'.format(
pupa.exceptions.ScrapeValueError: validation of CanadianPerson f57f4d3c-fd93-11f0-b045-72d1a67e60f8 failed: 1 validation errors:
Value 'Loyola O"Driscoll' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commissioner|Conseiller|Councillor|Deputy|Dr|M|Maire|Mayor|Miss|Mme|Mr|Mrs|Ms|Regional|Warden)\\b)(?:(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)(?:\'|-| - | ))+(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)\\Z', flags=regex.V0)'
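The failing value here is a source-page typo: a double quote where an apostrophe belongs (`O"Driscoll`). A one-line cleanup applied before validation would pass the name regex; this is a hypothetical fix sketch, not the scraper's existing code:

```python
def clean_name(raw):
    """Fix a common source-page typo: a double quote used as an
    apostrophe inside a surname (O"Driscoll -> O'Driscoll)."""
    return raw.replace('"', "'")

fixed = clean_name('Loyola O"Driscoll')
# fixed == "Loyola O'Driscoll", which the name pattern's O' prefix accepts
```

A blanket replace is safe only if names never legitimately contain double quotes; the validation regex above does allow quoted nicknames, so a narrower rule (replace `"` only between letters) may be preferable.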
C | ca_nl_st_john_s | 2026-01-30 04:34:20 | 2026-01-30 04:34:21 |
C | ca_ns | 2026-01-30 04:32:42 | 2026-01-30 04:32:43 |
C | ca_ns_cape_breton | 2026-01-30 04:38:20 | 2026-01-30 04:38:21 |
C | ca_ns_halifax | 2026-01-30 04:33:52 | 2026-01-30 04:33:52 |
C | ca_nt | 2026-01-30 04:30:07 | 2026-01-30 04:30:07 |
D> | ca_nu | 2025-10-27 04:20:12 | 2026-01-30 04:25:09 | Value '266-' for field '' does not match regular expression '\A1 \d{3} \d{3}-\d{4}(?: x\d+)?\Z'
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 175, in validate
validator.validate(self.as_dict(), schema)
File "/app/.heroku/python/lib/python3.10/site-packages/validictory/validator.py", line 616, in validate
raise MultipleValidationError(self._errors)
validictory.validator.MultipleValidationError: 1 validation errors:
Value '266-' for field '' does not match regular expression '\A1 \d{3} \d{3}-\d{4}(?: x\d+)?\Z'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 104, in do_scrape
self.save_object(obj)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 93, in save_object
self.save_object(obj)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 89, in save_object
raise ve
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 85, in save_object
obj.validate()
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 177, in validate
raise ScrapeValueError('validation of {} {} failed: {}'.format(
pupa.exceptions.ScrapeValueError: validation of Membership a9c79a16-fd93-11f0-b045-72d1a67e60f8 failed: 1 validation errors:
Value '266-' for field '' does not match regular expression '\A1 \d{3} \d{3}-\d{4}(?: x\d+)?\Z'
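The Membership schema expects phone numbers shaped like `1 NNN NNN-NNNN` (optionally with ` xNNN`), and the scraped value `266-` has too few digits. A sketch of a normalizer that emits the schema's format or rejects short values (a hypothetical helper, not pupa's own code):

```python
import re

# The validation pattern quoted in the error above.
PHONE_RE = re.compile(r"\A1 \d{3} \d{3}-\d{4}(?: x\d+)?\Z")

def normalize_phone(raw):
    """Reduce `raw` to digits and reformat as '1 NNN NNN-NNNN'.

    Returns None when there are not enough digits (e.g. the '266-'
    value above), so the caller can drop the contact detail instead
    of failing validation for the whole person.
    """
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 10:  # assume a North American number without the 1
        digits = "1" + digits
    if len(digits) != 11 or not digits.startswith("1"):
        return None
    return f"1 {digits[1:4]} {digits[4:7]}-{digits[7:]}"
```

For example, `normalize_phone("(867) 975-5000")` yields a string matching `PHONE_RE`, while `normalize_phone("266-")` yields `None`.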
C | ca_on | 2026-01-30 04:03:31 | 2026-01-30 04:03:32 |
C | ca_on_ajax | 2026-01-30 04:35:14 | 2026-01-30 04:35:14 |
C | ca_on_belleville | 2026-01-30 04:37:59 | 2026-01-30 04:37:59 |
C | ca_on_brampton | 2026-01-30 04:20:24 | 2026-01-30 04:20:24 |
C | ca_on_brantford | 2026-01-30 04:05:31 | 2026-01-30 04:05:31 |
C | ca_on_burlington | 2026-01-30 04:25:58 | 2026-01-30 04:25:58 |
C | ca_on_caledon | 2026-01-30 04:31:26 | 2026-01-30 04:31:26 |
C | ca_on_cambridge | 2026-01-30 04:33:31 | 2026-01-30 04:33:31 |
C | ca_on_chatham_kent | 2026-01-30 04:25:46 | 2026-01-30 04:25:46 |
D> | ca_on_clarington | 2026-01-28 04:17:30 | 2026-01-30 04:24:26 | scrapelib.HTTPError: 404 while retrieving https://www.clarington.net/en/town-hall/Meet-Your-Councillors.aspx
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_clarington/people.py", line 12, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.clarington.net/en/town-hall/Meet-Your-Councillors.aspx
C | ca_on_fort_erie | 2026-01-30 04:06:57 | 2026-01-30 04:06:57 |
C | ca_on_georgina | 2026-01-30 04:24:36 | 2026-01-30 04:24:36 |
C | ca_on_greater_sudbury | 2026-01-30 04:27:04 | 2026-01-30 04:27:04 |
C | ca_on_grimsby | 2026-01-30 04:19:53 | 2026-01-30 04:19:53 |
C | ca_on_guelph | 2026-01-30 04:06:12 | 2026-01-30 04:06:12 |
C | ca_on_haldimand_county | 2026-01-30 04:20:59 | 2026-01-30 04:20:59 |
04:18:04 WARNING scrapelib: sleeping for 10 seconds before retry
04:18:15 WARNING scrapelib: sleeping for 20 seconds before retry
04:18:35 WARNING scrapelib: sleeping for 40 seconds before retry
D> | ca_on_hamilton | 2025-07-02 04:38:31 | 2026-01-30 04:19:15 | scrapelib.HTTPError: 403 while retrieving https://www.hamilton.ca/city-council/council-committee/city-council-members/mayors…
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_hamilton/people.py", line 12, in scrape
yield self.mayor_data(MAYOR_PAGE)
File "/app/scrapers/ca_on_hamilton/people.py", line 44, in mayor_data
page = self.lxmlize(url)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 403 while retrieving https://www.hamilton.ca/city-council/council-committee/city-council-members/mayors-office
|
|
C
|
ca_on_huron
|
2026-01-30 04:15:36
|
2026-01-30 04:15:36
|
|
|
D>
|
ca_on_kawartha_lakes
|
2025-09-25 04:24:22
|
2026-01-30 04:32:57
|
scrapelib.HTTPError: 404 while retrieving https://www.kawarthalakes.ca/en/municipal-services/contact-a-council-member.aspx
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_kawartha_lakes/people.py", line 11, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.kawarthalakes.ca/en/municipal-services/contact-a-council-member.aspx
|
|
C
|
ca_on_king
|
2026-01-30 04:19:23
|
2026-01-30 04:19:23
|
|
|
C
|
ca_on_kingston
|
2026-01-30 04:06:02
|
2026-01-30 04:06:02
|
|
|
C
|
ca_on_kitchener
|
2026-01-30 04:17:52
|
2026-01-30 04:17:52
|
|
|
C
|
ca_on_lambton
|
2026-01-30 04:37:56
|
2026-01-30 04:37:56
|
|
|
C
|
ca_on_lasalle
|
2026-01-30 04:19:19
|
2026-01-30 04:19:19
|
|
|
C
|
ca_on_lincoln
|
2026-01-30 04:20:50
|
2026-01-30 04:20:50
|
|
|
C
|
ca_on_london
|
2026-01-30 04:21:07
|
2026-01-30 04:21:07
|
|
|
D>
|
ca_on_markham
|
2025-07-04 04:13:21
|
2026-01-30 04:19:55
|
IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 101, in do_scrape
for iterobj in obj:
File "/app/scrapers/ca_on_markham/people.py", line 97, in scrape_mayor
p.image = page.xpath('.//div[@class="align-right media--image"]/div/img/@src')[0]
IndexError: list index out of range
|
|
C
|
ca_on_milton
|
2026-01-30 04:21:20
|
2026-01-30 04:21:20
|
|
|
D>
|
ca_on_mississauga
|
2025-12-17 04:02:23
|
2026-01-30 04:26:18
|
IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_mississauga/people.py", line 22, in scrape
yield self.mayor_data(mayor_url.attrib["href"])
File "/app/scrapers/ca_on_mississauga/people.py", line 47, in mayor_data
photo = page.xpath('//*[@id="65a01af8598b7"]/p[1]/img/@src')[0]
IndexError: list index out of range
|
|
C
|
ca_on_newmarket
|
2026-01-30 04:35:05
|
2026-01-30 04:35:06
|
|
|
C
|
ca_on_niagara
|
2026-01-30 04:16:19
|
2026-01-30 04:16:20
|
|
|
C
|
ca_on_niagara_on_the_lake
|
2026-01-30 04:15:58
|
2026-01-30 04:15:58
|
|
|
D>
|
ca_on_north_dumfries
|
2025-06-19 04:19:39
|
2026-01-30 04:20:40
|
scrapelib.HTTPError: 404 while retrieving https://www.northdumfries.ca/en/township-services/mayor-and-council.aspx
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_north_dumfries/people.py", line 11, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.northdumfries.ca/en/township-services/mayor-and-council.aspx
|
|
C
|
ca_on_oakville
|
2026-01-30 04:34:32
|
2026-01-30 04:34:32
|
|
|
D>
|
ca_on_oshawa
|
2025-09-18 04:52:15
|
2026-01-30 04:20:34
|
scrapelib.HTTPError: 404 while retrieving https://www.oshawa.ca/en/city-hall/council-members.aspx
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_oshawa/people.py", line 12, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.oshawa.ca/en/city-hall/council-members.aspx
|
D>
04:35:28 WARNING scrapelib: sleeping for 10 seconds before retry
04:35:38 WARNING scrapelib: sleeping for 20 seconds before retry
04:35:58 WARNING scrapelib: sleeping for 40 seconds before retry
|
ca_on_ottawa
|
2025-06-26 04:31:47
|
2026-01-30 04:36:38
|
scrapelib.HTTPError: 400 while retrieving https://www.arcgis.com/sharing/rest/content/items/a5e9dc2425274bb796d3ded47b0d7b00…
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/utils.py", line 386, in scrape
binary = BytesIO(self.get(self.csv_url).content)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 400 while retrieving https://www.arcgis.com/sharing/rest/content/items/a5e9dc2425274bb796d3ded47b0d7b00/data
|
|
C
|
ca_on_peel
|
2026-01-30 04:22:37
|
2026-01-30 04:22:37
|
|
|
C
|
ca_on_pickering
|
2026-01-30 07:03:31
|
2026-01-30 07:03:31
|
|
|
C
|
ca_on_richmond_hill
|
2026-01-30 04:20:08
|
2026-01-30 04:20:08
|
|
|
D>
|
ca_on_sault_ste_marie
|
2025-09-24 04:33:36
|
2026-01-30 04:09:10
|
AssertionError: No councillors found
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_sault_ste_marie/people.py", line 16, in scrape
assert len(councillors), "No councillors found"
AssertionError: No councillors found
|
D>
04:27:39 WARNING scrapelib: sleeping for 10 seconds before retry
04:27:49 WARNING scrapelib: sleeping for 20 seconds before retry
04:28:10 WARNING scrapelib: sleeping for 40 seconds before retry
|
ca_on_st_catharines
|
2025-10-09 04:06:20
|
2026-01-30 04:28:50
|
scrapelib.HTTPError: 502 while retrieving https://niagaraopendata.ca/dataset/ccb9c7f1-d3b0-4049-9c08-e4f7b048722c/resource/1…
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/utils.py", line 408, in scrape
reader = self.csv_reader(
File "/app/scrapers/utils.py", line 251, in csv_reader
response = self.get(url, **kwargs)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 502 while retrieving https://niagaraopendata.ca/dataset/ccb9c7f1-d3b0-4049-9c08-e4f7b048722c/resource/128a39f0-8234-4708-b69b-9c73f7a55475/download/stcathcounsilors.csv
|
|
C
|
ca_on_thunder_bay
|
2026-01-30 04:22:06
|
2026-01-30 04:22:06
|
|
|
C
|
ca_on_toronto
|
2026-01-30 04:21:11
|
2026-01-30 04:21:11
|
|
|
C
|
ca_on_uxbridge
|
2026-01-30 04:15:55
|
2026-01-30 04:15:55
|
|
D>
04:03:42 WARNING scrapelib: sleeping for 10 seconds before retry
04:03:52 WARNING scrapelib: sleeping for 20 seconds before retry
04:04:12 WARNING scrapelib: sleeping for 40 seconds before retry
|
ca_on_vaughan
|
2025-11-24 04:34:40
|
2026-01-30 04:04:52
|
scrapelib.HTTPError: 403 while retrieving https://www.vaughan.ca/council
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_vaughan/people.py", line 11, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 403 while retrieving https://www.vaughan.ca/council
|
|
C
|
ca_on_waterloo
|
2026-01-30 04:33:27
|
2026-01-30 04:33:27
|
|
|
C
|
ca_on_waterloo_region
|
2026-01-30 04:05:17
|
2026-01-30 04:05:17
|
|
|
C
|
ca_on_welland
|
2026-01-30 04:24:22
|
2026-01-30 04:24:22
|
|
|
D>
|
ca_on_wellesley
|
2025-09-11 04:47:48
|
2026-01-30 04:20:43
|
scrapelib.HTTPError: 404 while retrieving https://www.wellesley.ca/council/councillors/?q=council/councillors
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_wellesley/people.py", line 15, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.wellesley.ca/council/councillors/?q=council/councillors
|
|
D>
|
ca_on_whitby
|
2025-09-12 04:03:37
|
2026-01-30 04:34:02
|
scrapelib.HTTPError: 404 while retrieving https://www.whitby.ca/en/town-hall/mayor-and-council.aspx
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_whitby/people.py", line 10, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.whitby.ca/en/town-hall/mayor-and-council.aspx
|
|
C
|
ca_on_whitchurch_stouffville
|
2026-01-30 04:35:11
|
2026-01-30 04:35:11
|
|
|
D>
|
ca_on_wilmot
|
2025-09-30 04:25:16
|
2026-01-30 04:27:09
|
scrapelib.HTTPError: 404 while retrieving https://www.wilmot.ca/en/township-office/council.aspx
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_wilmot/people.py", line 9, in scrape
page = self.lxmlize(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://www.wilmot.ca/en/township-office/council.aspx
|
|
D>
|
ca_on_windsor
|
2025-11-17 04:18:28
|
2026-01-30 04:27:32
|
Exception: No phone pattern in Not Found | City of Windsor…
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_on_windsor/people.py", line 48, in scrape
phone = self.get_phone(page)
File "/app/scrapers/utils.py", line 186, in get_phone
raise Exception(f"No phone pattern in {node.text_content()}")
Exception: No phone pattern in
Not Found | City of Windsor
Sorry the page or item is not available. Please check your link.
|
|
C
|
ca_on_woolwich
|
2026-01-30 09:26:12
|
2026-01-30 09:26:13
|
|
|
D>
|
ca_pe
|
2025-11-14 04:34:41
|
2026-01-30 04:20:56
|
IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_pe/people.py", line 38, in scrape
p.image = details.xpath('//div[contains(@class, "member-portrait")]//img')[0].get("src")
IndexError: list index out of range
|
|
C
|
ca_pe_charlottetown
|
2026-01-30 04:33:11
|
2026-01-30 04:33:11
|
|
|
D>
|
ca_pe_stratford
|
2025-11-26 04:22:54
|
2026-01-30 04:21:02
|
scrapelib.HTTPError: 404 while retrieving https://townofstratford.ca/government/about_our_government/mayor_council
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_pe_stratford/people.py", line 14, in scrape
page = self.lxmlize(COUNCIL_PAGE, user_agent="Mozilla/5.0")
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://townofstratford.ca/government/about_our_government/mayor_council
|
|
C
|
ca_pe_summerside
|
2026-01-30 04:31:35
|
2026-01-30 04:31:35
|
|
|
D>
|
ca_qc
|
2026-01-09 04:10:23
|
2026-01-30 04:15:32
|
pupa.exceptions.SameNameError: multiple people with same name "Eric Girard" in Jurisdiction - must provide birth_date to dis…
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 307, in do_handle
report['import'] = self.do_import(juris, args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 207, in do_import
report.update(person_importer.import_directory(datadir))
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 190, in import_directory
return self.import_data(json_stream())
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 226, in import_data
for json_id, data in self._prepare_imports(data_items):
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/people.py", line 33, in _prepare_imports
raise SameNameError(name)
pupa.exceptions.SameNameError: multiple people with same name "Eric Girard" in Jurisdiction - must provide birth_date to disambiguate
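Note: the `SameNameError` above fires at import time when two scraped people share a name and no `birth_date` is available to tell them apart. A minimal sketch of the pre-import check that would surface the collision earlier (names here are illustrative, not the actual Québec roster):

```python
from collections import Counter

# Hypothetical scraped roster; "Eric Girard" appears twice, as in the error.
scraped_names = ["Eric Girard", "Marie Tremblay", "Eric Girard"]

# Any name with count > 1 will trip SameNameError unless disambiguated.
duplicates = [name for name, count in Counter(scraped_names).items() if count > 1]
print(duplicates)  # ['Eric Girard']
```
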
|
|
C
|
ca_qc_beaconsfield
|
2026-01-30 04:27:24
|
2026-01-30 04:27:24
|
|
|
D>
|
ca_qc_brossard
|
2025-09-25 04:11:31
|
2026-01-30 04:16:02
|
KeyError: 'district 1 - secteurs c-e-l'
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_brossard/people.py", line 43, in scrape
district = secteurs_to_districts[secteur]
KeyError: 'district 1 - secteurs c-e-l'
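Note: the `KeyError` above suggests the site's secteur labels drifted (the scraped string now carries a `district 1 - ` prefix the lookup table doesn't expect). A sketch of normalizing the label before the lookup; the mapping below is assumed for illustration, not the scraper's real table:

```python
import re

# Assumed mapping keyed on the bare secteur portion only.
secteurs_to_districts = {
    "secteurs c-e-l": "District 1",
}

def district_for(secteur):
    # Strip a leading "district N - " prefix and lowercase before the lookup,
    # so cosmetic label changes don't raise KeyError.
    key = re.sub(r"^district \d+ - ", "", secteur.strip().lower())
    return secteurs_to_districts.get(key)

print(district_for("District 1 - Secteurs C-E-L"))  # District 1
```
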
|
|
C
|
ca_qc_cote_saint_luc
|
2026-01-30 04:32:51
|
2026-01-30 04:32:51
|
|
|
D>
|
ca_qc_dollard_des_ormeaux
|
2025-11-07 04:30:36
|
2026-01-30 04:19:49
|
Exception: No email node in b'<div class="elementor-widget-wrap elementor-element-populated">\n\t\t\t\t<div class="elementor…
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_dollard_des_ormeaux/people.py", line 22, in scrape
email = self.get_email(councillor)
File "/app/scrapers/utils.py", line 140, in get_email
raise Exception(f"No email node in {etree.tostring(node)}")
Exception: No email node in b'<div class="elementor-widget-wrap elementor-element-populated">\n\t\t\t\t<div class="elementor-element elementor-element-d003c4e e-flex e-con-boxed e-con e-parent" data-id="d003c4e" data-element_type="container">\n\t\t\t\t\t<div class="e-con-inner">\n\t\t\t\t<div class="elementor-element elementor-element-58b8f28 elementor-position-top elementor-widget elementor-widget-image-box" data-id="58b8f28" data-element_type="widget" data-widget_type="image-box.default">\n\t\t\t\t<div class="elementor-widget-container">\n\t\t\t\t\t<div class="elementor-image-box-wrapper"><figure class="elementor-image-box-img"><img decoding="async" width="819" height="1024" data-src="https://ville.ddo.qc.ca/wp-content/uploads/2025/11/SJesion-1.jpg" class="attachment-full size-full wp-image-111737 lazyload" alt="" data-srcset="https://ville.ddo.qc.ca/wp-content/uploads/2025/11/SJesion-1.jpg 819w, https://ville.ddo.qc.ca/wp-content/smush-webp/2025/11/SJesion-1-240x300.jpg.webp 240w, https://ville.ddo.qc.ca/wp-content/smush-webp/2025/11/SJesion-1-768x960.jpg.webp 768w, https://ville.ddo.qc.ca/wp-content/smush-webp/2025/11/SJesion-1-120x150.jpg.webp 120w" data-sizes="(max-width: 819px) 100vw, 819px" src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==" style="--smush-placeholder-width: 819px; --smush-placeholder-aspect-ratio: 819/1024;"/></figure><div class="elementor-image-box-content"><h3 class="elementor-image-box-title">District 3</h3></div></div>\t\t\t\t</div>\n\t\t\t\t</div>\n\t\t\t\t\t</div>\n\t\t\t\t</div>\n\t\t\t\t<div class="elementor-element elementor-element-590953a elementor-widget elementor-widget-heading" data-id="590953a" data-element_type="widget" data-widget_type="heading.default">\n\t\t\t\t<div class="elementor-widget-container">\n\t\t\t\t\t<h4 class="elementor-heading-title elementor-size-default">Jesion, Sandy</h4>\t\t\t\t</div>\n\t\t\t\t</div>\n\t\t\t\t<div class="elementor-element elementor-element-5648f2a elementor-align-center elementor-icon-list--layout-traditional elementor-list-item-link-full_width elementor-widget elementor-widget-icon-list" data-id="5648f2a" data-element_type="widget" data-widget_type="icon-list.default">\n\t\t\t\t<div class="elementor-widget-container">\n\t\t\t\t\t\t\t<ul class="elementor-icon-list-items">\n\t\t\t\t\t\t\t<li class="elementor-icon-list-item">\n\t\t\t\t\t\t\t\t\t\t\t<span class="elementor-icon-list-icon">\n\t\t\t\t\t\t\t<i aria-hidden="true" class="fas fa-envelope"/>\t\t\t\t\t\t</span>\n\t\t\t\t\t\t\t\t\t\t<span class="elementor-icon-list-text">To come</span>\n\t\t\t\t\t\t\t\t\t</li>\n\t\t\t\t\t\t</ul>\n\t\t\t\t\t\t</div>\n\t\t\t\t</div>\n\t\t\t\t\t</div>\n\t\t'
|
|
C
|
ca_qc_dorval
|
2026-01-30 04:07:16
|
2026-01-30 04:07:16
|
|
|
D>
|
ca_qc_gatineau
|
2025-11-25 04:15:12
|
2026-01-30 04:16:26
|
AssertionError: No councillors found
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_gatineau/people.py", line 21, in scrape
assert councillors, "No councillors found"
AssertionError: No councillors found
|
|
D>
|
ca_qc_kirkland
|
2025-11-10 04:49:56
|
2026-01-30 04:20:53
|
IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_kirkland/people.py", line 24, in scrape
name = councillor.xpath(".//strong/text()")[0]
IndexError: list index out of range
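Note: this `IndexError` pattern (an empty XPath result indexed with `[0]` after a site redesign) recurs across several records in this report. A defensive sketch using stdlib `ElementTree` for illustration; the scrapers themselves use lxml's `.xpath()`, whose result is likewise a plain list:

```python
import xml.etree.ElementTree as ET

def first(node, path, default=None):
    # Return the text of the first match, or a default, instead of raising
    # IndexError when the result list is empty.
    matches = node.findall(path)
    return matches[0].text if matches else default

root = ET.fromstring("<div><strong>Jane Doe</strong></div>")
print(first(root, ".//strong"))             # Jane Doe
print(first(root, ".//em", "unknown"))      # unknown
```

Returning a default (or raising with a descriptive message, as the Gatineau scraper's `assert councillors` does) turns a bare `IndexError` into an actionable report line.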
|
|
C
|
ca_qc_laval
|
2026-01-30 04:19:45
|
2026-01-30 04:19:45
|
|
|
D>
|
ca_qc_levis
|
2025-11-21 04:08:26
|
2026-01-30 04:33:19
|
pupa.exceptions.UnresolvedIdError: cannot resolve pseudo id to Post: ~{"label": "2", "organization__classification": "legisl…
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 307, in do_handle
report['import'] = self.do_import(juris, args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 211, in do_import
report.update(membership_importer.import_directory(datadir))
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 190, in import_directory
return self.import_data(json_stream())
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 227, in import_data
obj_id, what = self.import_item(data)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 247, in import_item
data = self.prepare_for_db(data)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/memberships.py", line 50, in prepare_for_db
data['post_id'] = self.post_importer.resolve_json_id(data['post_id'])
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 165, in resolve_json_id
raise UnresolvedIdError(errmsg)
pupa.exceptions.UnresolvedIdError: cannot resolve pseudo id to Post: ~{"label": "2", "organization__classification": "legislature", "role": "Conseill\u00e8re District"}
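Note: an `UnresolvedIdError` like the one above means the membership's pseudo post id (a `~`-prefixed JSON filter) matched no existing Post, typically because the scraped role string drifted. A sketch of the matching logic; the stored Post data below is assumed for illustration:

```python
import json

# Hypothetical stored Posts; note the role differs from the pseudo id's role.
posts = [
    {"label": "2", "organization__classification": "legislature", "role": "Conseillère"},
]

pseudo_id = '~{"label": "2", "organization__classification": "legislature", "role": "Conseillère District"}'

# Strip the "~" and treat the JSON object as an exact-match filter.
spec = json.loads(pseudo_id[1:])
matches = [p for p in posts if all(p.get(k) == v for k, v in spec.items())]
print(len(matches))  # 0 -> unresolved
```
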
|
|
D>
|
ca_qc_longueuil
|
2025-11-03 04:03:52
|
2026-01-30 04:03:40
|
IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_longueuil/people.py", line 21, in scrape
district = tr.xpath('.//p[contains(./strong, "District")]/a/text()')[0]
IndexError: list index out of range
|
|
C
|
ca_qc_mercier
|
2026-01-30 04:31:06
|
2026-01-30 04:31:06
|
|
|
D>
|
ca_qc_montreal
|
2025-12-15 04:29:05
|
2026-01-30 04:35:22
|
scrapelib.HTTPError: 404 while retrieving https://donnees.montreal.ca/dataset/381d74ca-dadd-459f-95c9-db255b5f4480/resource/…
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/utils.py", line 408, in scrape
reader = self.csv_reader(
File "/app/scrapers/utils.py", line 251, in csv_reader
response = self.get(url, **kwargs)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 404 while retrieving https://donnees.montreal.ca/dataset/381d74ca-dadd-459f-95c9-db255b5f4480/resource/ce1315a3-50ee-48d0-a0f0-9bcc15f65643/download/liste_elus_montreal.csv
|
|
C
|
ca_qc_montreal_est
|
2026-01-30 04:21:26
|
2026-01-30 04:21:26
|
|
|
D>
|
ca_qc_pointe_claire
|
2025-11-17 04:34:17
|
2026-01-30 04:34:05
|
IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_pointe_claire/people.py", line 25, in scrape
p.image = councillor.xpath('.//div[@class="member-photo"]/img/@src')[0]
IndexError: list index out of range
|
|
D>
|
ca_qc_quebec
|
2025-11-14 04:11:02
|
2026-01-30 04:19:50
|
IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_quebec/people.py", line 29, in scrape
district = councillor.xpath('./p[@itemprop="jobTitle"]/a/text()')[0]
IndexError: list index out of range
|
|
D>
|
ca_qc_saguenay
|
2025-08-27 04:01:52
|
2026-01-30 04:19:41
|
IndexError: list index out of range
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_saguenay/people.py", line 15, in scrape
name = mayor_page.xpath('//a[contains(., "maire")]/span/text()')[0]
IndexError: list index out of range
|
D>
04:06:07 WARNING pupa: validation of CanadianPerson 00d46ff8-fd91-11f0-b045-72d1a67e60f8 failed: 1 validation errors:
Value 'mboudreault@sadb.qc.ca' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commissioner|Conseiller|Councillor|Deputy|Dr|M|Maire|Mayor|Miss|Mme|Mr|Mrs|Ms|Regional|Warden)\\b)(?:(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)(?:\'|-| - | ))+(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)\\Z', flags=regex.V0)'
|
ca_qc_sainte_anne_de_bellevue
|
2025-11-17 04:23:59
|
2026-01-30 04:06:07
|
Value 'mboudreault@sadb.qc.ca' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commiss…
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 175, in validate
validator.validate(self.as_dict(), schema)
File "/app/.heroku/python/lib/python3.10/site-packages/validictory/validator.py", line 616, in validate
raise MultipleValidationError(self._errors)
validictory.validator.MultipleValidationError: 1 validation errors:
Value 'mboudreault@sadb.qc.ca' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commissioner|Conseiller|Councillor|Deputy|Dr|M|Maire|Mayor|Miss|Mme|Mr|Mrs|Ms|Regional|Warden)\\b)(?:(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)(?:\'|-| - | ))+(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)\\Z', flags=regex.V0)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 104, in do_scrape
self.save_object(obj)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 89, in save_object
raise ve
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 85, in save_object
obj.validate()
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 177, in validate
raise ScrapeValueError('validation of {} {} failed: {}'.format(
pupa.exceptions.ScrapeValueError: validation of CanadianPerson 00d46ff8-fd91-11f0-b045-72d1a67e60f8 failed: 1 validation errors:
Value 'mboudreault@sadb.qc.ca' for field '<obj>.name' does not match regular expression 'regex.Regex('\\A(?!(?:Chair|Commissioner|Conseiller|Councillor|Deputy|Dr|M|Maire|Mayor|Miss|Mme|Mr|Mrs|Ms|Regional|Warden)\\b)(?:(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)(?:\'|-| - | ))+(?:(?:\\p{Lu}\\.)+|\\p{Lu}+|(?:Jr|Rev|Sr|St)\\.|da|de|den|der|la|van|von|[("](?:\\p{Lu}+|\\p{Lu}\\p{Ll}*(?:-\\p{Lu}\\p{Ll}*)*)[)"]|(?:D\'|d\'|De|de|Des|Di|Du|L\'|La|Le|Mac|Mc|O\'|San|St\\.|Van|Vander?|van|vanden)?\\p{Lu}\\p{Ll}+|\\p{Lu}\\p{Ll}+Anne?|Marie\\p{Lu}\\p{Ll}+|[ᐁᐃᐄᐅᐆᐊᐋᐯᐱᐲᐳᐴᐸᐹᑉᑊᑌᑎᑏᑐᑑᑕᑖᑦᑫᑭᑮᑯᑰᑲᑳᒃᒉᒋᒌᒍᒎᒐᒑᒡᒣᒥᒦᒧᒨᒪᒫᒻᓀᓂᓃᓄᓅᓇᓈᓐᓓᓕᓖᓗᓘᓚᓛᓪᓭᓯᓰᓱᓲᓴᓵᔅᔦᔨᔩᔪᔫᔭᔮᔾᕂᕆᕇᕈᕉᕋᕌᕐᕓᕕᕖᕗᕘᕙᕚᕝᕴᕵᕶᕷᕸᕹᕺᕻᕼᕿᖀᖁᖂᖃᖄᖅᖏᖐᖑᖒᖓᖔᖕᖖᖠᖡᖢᖣᖤᖥᖦᖨᖩᖪᖫᖬᖭᖮᖯᙯᙰᙱᙲᙳᙴᙵᙶ\U00011ab0\U00011ab1\U00011ab2\U00011ab3\U00011ab4\U00011ab5\U00011ab6\U00011ab7\U00011ab8\U00011ab9\U00011aba\U00011abb]+|Á\'a:líya|A\'aliya|Ch\'ng|Prud\'homme|Qwulti\'stunaat|Ya\'ara|D!ONNE|ChiefCalf|IsaBelle)\\Z', flags=regex.V0)'
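Note: the validation failure above is an email address (`mboudreault@sadb.qc.ca`) landing in the `name` field, i.e. the scraper picked the wrong node after a page change; the name regex then correctly rejects it. A cheap guard that would catch this class of drift before save (a sketch, not the validator's actual logic):

```python
import re

def looks_like_email(value):
    # A rough email shape check, enough to flag an email scraped into a
    # name field; not a full RFC 5322 validator.
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value))

print(looks_like_email("mboudreault@sadb.qc.ca"))  # True
print(looks_like_email("Marc Boudreault"))         # False
```
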
|
|
D>
|
ca_qc_saint_jean_sur_richelieu
|
2025-11-11 04:06:55
|
2026-01-30 04:34:28
|
AssertionError: No councillors found
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_saint_jean_sur_richelieu/people.py", line 16, in scrape
assert len(councillors), "No councillors found"
AssertionError: No councillors found
|
|
C
|
ca_qc_saint_jerome
|
2026-01-30 04:19:26
|
2026-01-30 04:19:26
|
|
|
D>
|
ca_qc_senneville
|
2025-10-02 04:03:34
|
2026-01-30 04:24:47
|
pupa.exceptions.UnresolvedIdError: cannot resolve pseudo id to Post: ~{"label": "District 6", "organization__classification"…
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 307, in do_handle
report['import'] = self.do_import(juris, args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 211, in do_import
report.update(membership_importer.import_directory(datadir))
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 190, in import_directory
return self.import_data(json_stream())
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 227, in import_data
obj_id, what = self.import_item(data)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 247, in import_item
data = self.prepare_for_db(data)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/memberships.py", line 50, in prepare_for_db
data['post_id'] = self.post_importer.resolve_json_id(data['post_id'])
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/importers/base.py", line 165, in resolve_json_id
raise UnresolvedIdError(errmsg)
pupa.exceptions.UnresolvedIdError: cannot resolve pseudo id to Post: ~{"label": "District 6", "organization__classification": "legislature", "role": "Conseill\u00e8re"}
|
|
C
|
ca_qc_sherbrooke
|
2026-01-30 04:05:56
|
2026-01-30 04:05:56
|
|
D>
04:36:42 WARNING scrapelib: sleeping for 10 seconds before retry
04:36:52 WARNING scrapelib: sleeping for 20 seconds before retry
04:37:12 WARNING scrapelib: sleeping for 40 seconds before retry
|
ca_qc_terrebonne
|
|
2026-01-30 04:37:52
|
scrapelib.HTTPError: 403 while retrieving https://terrebonne.ca/membres-du-conseil-municipal/
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_terrebonne/people.py", line 9, in scrape
page = self.lxmlize(COUNCIL_PAGE, user_agent=CUSTOM_USER_AGENT)
File "/app/scrapers/utils.py", line 217, in lxmlize
response = self.get(url, cookies=cookies, verify=verify)
File "/app/scrapers/utils.py", line 198, in get
return super().get(*args, verify=kwargs.pop("verify", SSL_VERIFY), **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/app/.heroku/python/lib/python3.10/site-packages/scrapelib/__init__.py", line 619, in request
raise HTTPError(resp)
scrapelib.HTTPError: 403 while retrieving https://terrebonne.ca/membres-du-conseil-municipal/
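Note: the `sleeping for 10/20/40 seconds` warnings logged before this record are scrapelib's doubling retry backoff; all three retries still returned 403, and the scraper's `CUSTOM_USER_AGENT` did not clear the block, which points at a server-side bot challenge rather than a transient error. A minimal reproduction of the delay schedule seen in the log:

```python
def backoff_delays(base=10, retries=3):
    # Doubling backoff: base, 2*base, 4*base, ... matching the logged
    # 10s/20s/40s sequence.
    return [base * 2 ** i for i in range(retries)]

print(backoff_delays())  # [10, 20, 40]
```
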
|
|
D>
|
ca_qc_trois_rivieres
|
2025-11-03 04:22:51
|
2026-01-30 04:20:47
|
AssertionError: No councillors found
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_qc_trois_rivieres/people.py", line 15, in scrape
assert len(members), "No councillors found"
AssertionError: No councillors found
|
|
C
|
ca_qc_westmount
|
2026-01-30 04:38:04
|
2026-01-30 04:38:04
|
|
|
C
|
ca_sk
|
2026-01-30 04:09:04
|
2026-01-30 04:09:05
|
|
|
C
|
ca_sk_regina
|
2026-01-30 04:15:51
|
2026-01-30 04:15:51
|
|
|
C
|
ca_sk_saskatoon
|
2026-01-30 04:06:53
|
2026-01-30 04:06:53
|
|
|
D>
|
ca_yt
|
|
2026-01-30 04:33:55
|
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://yukonassembly.ca/mlas
Traceback (most recent call last):
File "/app/reports/utils.py", line 73, in scrape_people
report.report = subcommand.handle(args, other)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 305, in do_handle
report['scrape'] = self.do_scrape(juris, args, scrapers)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/cli/commands/update.py", line 173, in do_scrape
report[scraper_name] = scraper.do_scrape(**scrape_args)
File "/app/.heroku/python/lib/python3.10/site-packages/pupa/scrape/base.py", line 99, in do_scrape
for obj in self.scrape(**kwargs) or []:
File "/app/scrapers/ca_yt/people.py", line 11, in scrape
page = self.cloudscrape(COUNCIL_PAGE)
File "/app/scrapers/utils.py", line 205, in cloudscrape
response.raise_for_status()
File "/app/.heroku/python/lib/python3.10/site-packages/requests/models.py", line 1026, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://yukonassembly.ca/mlas
|