Last answered: 05 May 2024
Posted on: 20 May 2020

Chromium download error

In the scraping JavaScript lecture in the web scraping section, when I ran await r.html.arender(), I received an error: Chromium couldn't download. Here is the error message:
MaxRetryError: HTTPSConnectionPool(host='storage.googleapis.com', port=443): Max retries exceeded with url: /chromium-browser-snapshots/Win_x64/588429/chrome-win32.zip (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
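For reference, a minimal sketch of the kind of call that triggers this (assuming the course's requests-html setup; the URL below is only a placeholder). The first arender() call is what makes pyppeteer try to download Chromium, which is the step that fails:

from requests_html import AsyncHTMLSession

asession = AsyncHTMLSession()

async def render_page():
    # Placeholder URL -- any JavaScript-heavy page from the lecture would do.
    r = await asession.get("https://example.com")
    # The first arender() call downloads Chromium, which is where the SSL error appears.
    await r.html.arender()
    return r.html.html

# In a Jupyter notebook this can simply be awaited: await render_page()
# In a plain script: asession.run(render_page)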
4 answers (0 marked as helpful)
Instructor
Posted on: 21 May 2020
Dear Mark,

You can try the solution outlined here: https://github.com/miyakogi/pyppeteer/issues/258

Best,
365 Team
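For anyone who cannot open the link: the issue is about the certificate check failing while pyppeteer downloads Chromium. A sketch of the kind of workaround discussed there is to monkeypatch pyppeteer's download_zip so the archive is fetched over HTTPS with certifi's CA bundle. The exact code in the issue may differ, and pyppeteer internals change between versions, so treat this as an illustration only; run it before the first arender() call:

import certifi
import urllib3
from io import BytesIO

import pyppeteer.chromium_downloader as chromium_downloader


def patched_download_zip(url):
    # Download the Chromium archive while verifying the server certificate
    # against certifi's CA bundle instead of failing the handshake.
    print('[WARNING] Start patched secure https Chromium download from URL:\n'
          f'{url}\nDownload may take a few minutes.')
    with urllib3.PoolManager(cert_reqs='CERT_REQUIRED',
                             ca_certs=certifi.where()) as http:
        response = http.request('GET', url, preload_content=False)
        try:
            return BytesIO(response.read())
        finally:
            response.release_conn()


# Apply the patch before requests-html / pyppeteer triggers the download.
chromium_downloader.download_zip = patched_download_zip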
Posted on: 21 May 2020
It worked. Thanks a lot!
Posted on: 20 Nov 2023

Hello. I also tried this trick, but it raised another error. Could you please help me?

Posted on: 05 May 2024

Hello! I tried the fix mentioned by the instructor (Nikola Pulev) above and ran into the same issue as Hamidreza Ghobadi: "BadZipFile: File is not a zip file".


My guess is that the zip file itself is missing from the location the patch is trying to download from: https://storage.googleapis.com/chromium-browser-snapshots/Win_x64/1181205/chrome-win.zip

So far I can't find any solutions online. My guess would be to find the updated URL and repoint the patch to it, but I have no idea how that can be done. Would the 365DS team or anyone else know how this can be fixed? Thanks.



[WARNING] Start patched secure https Chromium download from URL:
https://storage.googleapis.com/chromium-browser-snapshots/Win_x64/1181205/chrome-win.zip
Download may take a few minutes.
100%|██████████| 219/219 [00:00<?, ?it/s]
[WARNING] 
chromium download done.
[INFO] Beginning extraction


BadZipFile: File is not a zip file
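On repointing the download: the tiny 219-unit "download" in the log above is almost certainly an error page rather than the archive, which is why extraction fails with BadZipFile. One possible approach, assuming the installed pyppeteer reads the PYPPETEER_CHROMIUM_REVISION and PYPPETEER_DOWNLOAD_HOST environment variables (recent releases do, but this is worth checking against your version), is to find a snapshot revision whose chrome-win.zip actually exists and point pyppeteer at it before requests-html is imported. A sketch:

import os
import requests

def snapshot_exists(revision):
    # Check whether the Windows 64-bit snapshot zip for this revision exists.
    url = ("https://storage.googleapis.com/chromium-browser-snapshots/"
           f"Win_x64/{revision}/chrome-win.zip")
    return requests.head(url).status_code == 200, url

candidate = "1181205"   # example revision; replace with one the check reports as found
ok, url = snapshot_exists(candidate)
print(url, "->", "found" if ok else "missing")

if ok:
    # pyppeteer reads this at import time, so set it at the very top of the
    # notebook, before requests_html (which imports pyppeteer) is imported.
    # If requests_html is already imported, restart the kernel and set the
    # variable first, then re-run the lecture code.
    os.environ["PYPPETEER_CHROMIUM_REVISION"] = candidate
    # Optionally override the host too (defaults to storage.googleapis.com):
    # os.environ["PYPPETEER_DOWNLOAD_HOST"] = "https://storage.googleapis.com"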
