raise KeyError from _find_no_duplicates in the Python requests library


This code used to work (back when I wrote it in May 2015, so it may be my Python/requests version that's causing the problem). I'm currently on requests version 2.9.1, and my code now throws the following error:

Traceback (most recent call last):
  File "./add.py", line 98, in <module>
    'SESSID':term.cookies['SESSID'],
  File "/usr/local/lib/python2.7/site-packages/requests/cookies.py", line 276, in __getitem__
    return self._find_no_duplicates(name)
  File "/usr/local/lib/python2.7/site-packages/requests/cookies.py", line 331, in _find_no_duplicates
    raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path))
KeyError: "name='SESSID', domain=None, path=None"

Line 98 of add.py refers to the following:

ad_cookie = {
    'SESSID':term.cookies['SESSID'],
    'IDMSESSID':username
}
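
For what it's worth, requests' RequestsCookieJar raises KeyError from __getitem__ (via _find_no_duplicates) whenever no cookie with that name exists in the jar, so the traceback just means term.cookies contains no SESSID. A minimal defensive sketch of the same lookup, using .get(), which returns None instead of raising:

# Defensive variant (sketch): RequestsCookieJar.get() returns None instead of raising
sessid = term.cookies.get('SESSID')
if sessid is None:
    print('SESSID is not in term.cookies')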

To make sure that particular cookie hadn't been removed or gone missing from the page, I checked the cURL request when I hit that page in the browser, and sure enough, SESSID (and the other cookie) are both there in the cURL request.
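
To compare like with like on the requests side, a quick dump of the cookie jars at that point shows whether SESSID ever made it into requests at all (a debugging sketch, assuming the s session and term response from the code added in the edit below):

# Debugging sketch: what does requests actually hold at this point?
print(s.cookies.get_dict())      # everything the session has accumulated so far
print(term.cookies.get_dict())   # only the cookies set by the final response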

Any idea what the problem is?

Edit:

I've added some more of the code:

import requests
from lxml import html                 # assuming lxml, given html.fromstring() below
from fake_useragent import UserAgent  # assuming the fake_useragent package, given ua.random below

# `info` (credentials/term dict) is assumed to be loaded earlier in add.py
username = info['username']
password = info['password']
term_in = info['term_in']

login_url="https://school.edu/cas/login?service=https%3A%2F%2Fschool.edu%2Fc%2Fportal%2Flogin"

# Fake random user agent generator                                     
ua = UserAgent()                                                       

# Persist a session                                                    
with requests.Session() as s:                                          

    # Call this URL to get our initial hidden parameter variables      
    # In particular, the `lt` variable changes every time [I'm not     
    # sure how often, but it's necessary every time you login]         
    page = s.get('https://school.edu/web/university')              

    # Convert page to string for easy scraping                         
    tree = html.fromstring(page.text)                                  

    # Grab our unique variables from these particular XPaths           
    # Yes, the XPaths are a MESS, but that's because School's websites 
    # are a mess.                                                      
    lt = tree.xpath('//*[@id="fm1"]/div[4]/input[1]/@value')           
    execution = tree.xpath('//*[@id="fm1"]/div[4]/input[2]/@value')    
    eventId = tree.xpath('//*[@id="fm1"]/div[4]/input[3]/@value')      
    submit = tree.xpath('//*[@id="fm1"]/div[4]/input[4]/@value') 

    l_payload = {                                                      
        'username':username,                                           
        'password':password,                                           
        'lt':lt,                                                       
        'execution':execution,                                         
        '_eventId':eventId,                                            
        'submit':submit                                                
    }                                                                  

    # Login page cookies                                               
    l_cookies = {                                                      
        'JSESSIONID':page.cookies['JSESSIONID'],                       
        'IDMSESSID':username                                           
    }                                                                  

    # Login page headers with a fake user-agent generator              
    l_headers = {                                                      
        "Referer":"https://login.school.edu/cas/login?service=https%3A%2F%2Fschool.edu%2Fc%2Fportal%2Flogin",
        "User-Agent":ua.random                                         
    }                                                                  

    # Now we login to School Connect, School's single-sign-on
    # for all its web apps like School Learn, School One, etc.
    # For more info on this:
    # https://www.school.edu/irt/help/a-z/schoolConnect/
    s.post(                                                            
        login_url,                                                     
        data = l_payload,                                              
        cookies = l_cookies,                                           
        headers = l_headers,                                           
        allow_redirects = True                                         
    )                                                                  

    # Go to the "Add/Drop Class" link within School One                
    # to grab the SESSID cookie which will allow us to                 
    # add and/or drop classes                                          
    term = s.get('https://bannersso.school.edu/ssomanager/c/SSB?pkg=bwszkfrag.P_DisplayFinResponsibility%3Fi_url%3Dbwskfreg.P_AltPin')

    #print html.fromstring(term.text)                                  
    #sys.exit(0)                                                       

    # Grab the Add/Drop Class page cookies                             
    ad_cookie = {                                                      
        'SESSID':term.cookies['SESSID'],                               
        'IDMSESSID':username                                           
    }
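
One detail worth noting about the code above: cookie dicts passed per request via the cookies= parameter are merged into that single request but are not persisted into the session jar, whereas the Session itself does persist every cookie the server sets, including cookies set on intermediate redirects. A simplified sketch under that assumption (indented to sit inside the same with requests.Session() as s: block, reusing s, login_url, l_payload, l_headers and username; IDMSESSID and the school.edu domain come from the original code):

    # Sketch: let the Session jar carry the server-set cookies (JSESSIONID, SESSID, ...)
    # and only inject the one value the server never sets for us
    s.cookies.set('IDMSESSID', username, domain='school.edu')

    # JSESSIONID from the initial s.get() is already in s.cookies, so no manual dict is needed
    s.post(login_url, data=l_payload, headers=l_headers, allow_redirects=True)

    term = s.get('https://bannersso.school.edu/ssomanager/c/SSB?pkg=bwszkfrag.P_DisplayFinResponsibility%3Fi_url%3Dbwskfreg.P_AltPin')

    # If the SSO chain handed out a SESSID anywhere along the way, it is in the jar now
    print(s.cookies.get('SESSID'))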

Edit 2:

I printed term.headers and here's what came back:

{'content-length': '8624', 'content-language': 'en', 'set-cookie': 'TESTID=set, SESSID=;expires=Mon, 01-Jan-1990 08:00:00 GMT, PROXY_HASH=;expires=Mon, 01-Jan-1990 08:00:00 GMT', 'keep-alive': 'timeout=5, max=100', 'server': 'Oracle-Application-Server-11g', 'connection': 'Keep-Alive', 'date': 'Thu, 31 Mar 2016 16:00:30 GMT', 'content-type': 'text/html; charset=UTF-8'}

It looks like there's no cookie there at all.
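
For reference, SESSID=; combined with an expiry of Mon, 01-Jan-1990 in that Set-Cookie header is the standard way a server deletes a cookie (empty value plus a date in the past), so the final response is actively expiring SESSID rather than issuing one. A quick sanity check along those lines, reusing the term response above:

# Sanity-check sketch: an empty value with a 1990 expiry means "delete this cookie"
raw = term.headers.get('set-cookie', '')
if 'SESSID=;expires=' in raw:
    print('the final response is expiring SESSID, not setting it')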

