
Website HTML truncated in Java when using Jsoup with Android Studio

I am using the Jsoup library to extract data from this site: Tom's hardware benchmarks

I use this code to connect to the site and extract the data:

      protected Void doInBackground(Object[] params) {
          try {
              // Jsoup.connect(...).get() throws IOException, so it must be inside the try block
              doc = Jsoup.connect(url)
                      .maxBodySize(Integer.MAX_VALUE)
                      .header("Accept-Encoding", "gzip")
                      .userAgent("Dalvik")
                      .method(Connection.Method.GET)
                      .timeout(Integer.MAX_VALUE)
                      .get();

              if (doc != null) {
                  css_text = doc.select("div[class=clLeft] label[for]");

                  for (int i = 0; i < css_text.size(); i++)
                      elem1[i] = css_text.eq(i).text();

                  css_text = doc.select("ul[style=margin-left:0px;] span");
                  css_score = doc.select("div[class=clRight clearfix]");

                  for (int j = 0; j < css_text.size(); j++) {
                      elem2[j] = css_text.eq(j).text();
                      score[j] = css_score.eq(j).text();

                      processori_score_arraylist.add(elem1[j] + "\n" + elem2[j] + "   " + score[j]);
                  }
              }
          } catch (IOException e) {
              e.printStackTrace();
          }

          return null;
      }

      @Override
      protected void onPostExecute(Void aVoid) {
          super.onPostExecute(aVoid);

          processori_score_listview.setAdapter(adapter);
      }
  }
}

I read that Jsoup has a default 1 MB body-size limit that can truncate web pages. This page does not look like it is over 1 MB, but I raised the limit anyway, and it still does not always work. For a reason I do not understand, when I inspect the `doc` Document variable in debug mode, the page is sometimes downloaded completely and sometimes not. I then tried setting `maxBodySize` to 0 and then to `Integer.MAX_VALUE`, and changed the `timeout` value after reading other posts and searching the internet, but that did not solve the problem. Can anyone tell me the cause of the problem or a way to fix it? I hope it is clear what the problem is; if not, I can explain further.

Other posts I found about this problem:

jsoup don't get full data

JSOUP not downloading complete html if the webpage is big in size. Any alternatives to this or any workarounds?

Here is what the truncated HTML page looks like:

 <!doctype html>
    <html>
     <head> 
      <meta name="ROBOTS" content="NOINDEX, NOFOLLOW"> 
      <meta http-equiv="cache-control" content="max-age=0"> 
      <meta http-equiv="cache-control" content="no-cache"> 
      <meta http-equiv="expires" content="0"> 
      <meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT"> 
      <meta http-equiv="pragma" content="no-cache"> 
      <meta http-equiv="refresh" content="10; url=/distil_r_captcha.html?Ref=/charts/cpu-charts-2015/-01-CinebenchR15,Marque_fbrandx14,3693.html&amp;distil_RID=1CB642F0-76B5-11E5-9B22-93799C16BE3F&amp;distil_TID=20151019225954"> 
      <script type="text/javascript">
        (function(window){
            try {
                if (typeof sessionStorage !== 'undefined'){
                    sessionStorage.setItem('distil_referrer', document.referrer);
                }
            } catch (e){}
        })(window);
    </script> 
      <script type="text/javascript" src="/destilar-fbxcdbtcwcebrsxtw.js" defer></script>
      <style type="text/css">#d__fFH{position:absolute;top:-5000px;left:-5000px}#d__fF{font-family:serif;font-size:200px;visibility:hidden}#ssxfwzexyqctzdfy{display:none!important}</style>
     </head> 
     <body> 
      <div id="distil_ident_block">
       &nbsp;
      </div>   
     </body>
    </html>
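Note that the dump above is not a truncated copy of the benchmark page at all: the `meta http-equiv="refresh"` tag redirects to `/distil_r_captcha.html`, i.e. the response is a Distil bot-protection interstitial page. A minimal sketch for spotting this case in code (the marker strings are taken directly from the dump above; the class and method names are illustrative) could be:

```java
public class DistilCheck {

    /** Returns true when the fetched HTML is the Distil bot-protection
     *  interstitial rather than the real benchmark page. Both marker
     *  strings appear in the truncated dump shown above. */
    public static boolean isBotProtectionPage(String html) {
        return html.contains("distil_ident_block")
                || html.contains("distil_r_captcha");
    }

    public static void main(String[] args) {
        String blocked = "<body><div id=\"distil_ident_block\">&nbsp;</div></body>";
        String normal = "<body><p>CPU Charts 2015</p></body>";
        System.out.println(isBotProtectionPage(blocked)); // true
        System.out.println(isBotProtectionPage(normal));  // false
    }
}
```

Checking for this marker after each fetch would distinguish "the server blocked me" from "the response was truncated", which are two very different problems.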

1 Answer

  1. # Answer 1

    I see a few possible causes:

    1) Code readiness

    Don't forget to clean up your code. Your code looks messy, and this kind of odd behaviour can easily hide in there.

    2) Random downtime

    The server may suffer some random downtime. In your case, I would strengthen the error handling:

    Document doc=null;
    
    try {
        doc = Jsoup.connect(url) //
               .timeout(0) // Relax the server by according it infinite time...
               .maxBodySize(0) // We don't know the size of the server response...
               .header("Accept-Encoding", "gzip") //
               .userAgent("Dalvik") //
               .get();
    
        // * Extract data from doc
        // If something is missing, raise an exception
        // or write code that can accommodate the missing data
    
    } catch(Throwable t) {
       // Using Throwable may seem extreme here, however you'll quickly see what's going on
    
       // Carefully log what happened
       log.error("Something BAD happened...", t);
    
       // Ultimately, if something is present in the document, dump it for later investigation
       if (doc!=null) {
          dump(doc.outerHtml());
       }
    }
    

3) Website protection

Some websites have clever anti-scraping protection, so take it slowly when fetching URLs. The code should pause for a random interval between 3000 and 5000 milliseconds between fetches; that looks more human. You can also use proxies to change your IP address.
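The random pause suggested above could be sketched like this (the 3000-5000 ms bounds come from the answer; the class and method names are illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;

public class PoliteFetcher {

    /** A random pause between 3000 and 5000 ms (inclusive), as suggested
     *  above, so repeated fetches look more like a human browsing. */
    public static long randomPauseMillis() {
        // nextLong(origin, bound) returns a value in [origin, bound)
        return ThreadLocalRandom.current().nextLong(3000, 5001);
    }

    /** Call this between two consecutive Jsoup.connect(...).get() calls. */
    public static void politePause() throws InterruptedException {
        Thread.sleep(randomPauseMillis());
    }
}
```

On Android this pause would run inside `doInBackground`, never on the UI thread.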