
I am trying to download images from Flickr, but when I wrote and ran the following code, an error occurred partway through and the download failed.
I would like to know how to deal with it.

~\AppData\Roaming\Python\Python37\site-packages\requests\adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
    496
    497         except (ProtocolError, socket.error) as err:
--> 498             raise ConnectionError(err, request=request)
    499
    500         except MaxRetryError as e:

ConnectionError: ('Connection aborted.', OSError("(10054, 'WSAECONNRESET')"))

Corresponding source code
# Search and download photos with the Flickr API --- (*3)
from flickrapi import FlickrAPI
from urllib.request import urlretrieve
from pprint import pprint
import os, time, sys

# Specify the API key and secret (please rewrite below) --- (*1)
key = "private just in case"
secret = "private just in case"
wait_time = 1  # waiting seconds between downloads (1 or more recommended)

# Download by specifying a keyword and a directory name --- (*2)
def main():
    go_download('tuna sushi', 'sushi')
    go_download('salad', 'salad')
    go_download('Mapo tofu', 'tofu')

def go_download(keyword, dir):
    # Determine the image storage path
    savedir = "./image/" + dir
    if not os.path.exists(savedir):
        os.makedirs(savedir)
    # Download using the API --- (*4)
    flickr = FlickrAPI(key, secret, format='parsed-json')
    res = flickr.photos.search(
        text=keyword,          # search term
        per_page=300,          # number of results to fetch
        media='photos',        # find photos only
        sort='relevance',      # sort by relevance to the search term
        safe_search=1,         # safe search
        extras='url_q,license')
    # Check the search results
    photos = res['photos']
    pprint(photos)
    try:
        # Download the images one by one --- (*5)
        for i, photo in enumerate(photos['photo']):
            url_q = photo['url_q']
            filepath = savedir + '/' + photo['id'] + '.jpg'
            if os.path.exists(filepath):
                continue
            print(str(i + 1) + ": download =", url_q)
            urlretrieve(url_q, filepath)
            time.sleep(wait_time)
    except:
        import traceback
        traceback.print_exc()

if __name__ == '__main__':
    main()
What I tried

I tried searching the web for this error.

Supplementary information (FW/tool version, etc.)


  • Answer #1

    Since this is a MaxRetryError, the client is retrying many times. Most likely the combination of per_page = 300 and wait_time = 1 (fetching 300 images with only one second of waiting between them) is unbalanced.

    The questioner confirmed that adjusting wait_time improved the situation, so it behaved as expected.
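    As a concrete sketch of that suggestion: besides increasing wait_time, wrapping each download in a small retry loop with backoff usually survives occasional connection resets. This is a minimal illustration assuming the questioner's code structure; download_with_retry and its parameters are hypothetical names I chose here, not part of flickrapi.

    import time
    from urllib.request import urlretrieve

    wait_time = 2  # pause between downloads, longer than the original 1 second

    def download_with_retry(url, filepath, retries=3, backoff=5):
        # Try urlretrieve up to `retries` times, sleeping longer after each failure.
        for attempt in range(retries):
            try:
                urlretrieve(url, filepath)
                return True
            except OSError:  # ConnectionResetError (WSAECONNRESET) is a subclass of OSError
                time.sleep(backoff * (attempt + 1))  # back off before retrying
        return False

    # Inside the loop of go_download, replace the bare urlretrieve call with:
    #     if not download_with_retry(url_q, filepath):
    #         print("giving up on", url_q)
    #     time.sleep(wait_time)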
