Django Haystack and Elasticsearch error when running rebuild_index

Posted 2024-07-05 09:50:51


I want to use Django Haystack with the Elasticsearch backend for search. I was able to install every module and package successfully.

I run this command

sudo service elasticsearch start

and it prints the following:

* Starting ElasticSearch Server                                         [ OK ]
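
To double-check that the server is actually reachable before indexing, I can query its HTTP port (9200, matching the log below). A minimal sketch using Python 3's urllib, assuming Elasticsearch is listening on localhost:

import json
import urllib.request

# Sanity check: ask the Elasticsearch root endpoint for its version info.
resp = urllib.request.urlopen("http://localhost:9200/")
info = json.loads(resp.read().decode("utf-8"))
print(info["version"]["number"])  # the log below reports 0.90.9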

After that, I run

python manage.py rebuild_index

and it gives this error

^{pr2}$
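
For context, rebuild_index is essentially clear_index followed by update_index, and the mapping PUT that fails in the Elasticsearch log below happens while Haystack recreates the index. A rough equivalent using Django's management API (just a sketch of what the command does):

from django.core.management import call_command

# Roughly what rebuild_index does: drop the search index, then re-index everything.
call_command("clear_index", interactive=False)
call_command("update_index")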

Elasticsearch log file:

[2013-12-31 17:17:40,635][INFO ][node                     ] [Garrison Kane] version[0.90.9], pid[17314], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 17:17:40,635][INFO ][node                     ] [Garrison Kane] initializing ...
[2013-12-31 17:17:40,645][INFO ][plugins                  ] [Garrison Kane] loaded [], sites []
[2013-12-31 17:17:44,058][INFO ][node                     ] [Garrison Kane] initialized
[2013-12-31 17:17:44,059][INFO ][node                     ] [Garrison Kane] starting ...
[2013-12-31 17:17:44,195][INFO ][transport                ] [Garrison Kane] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 17:17:47,255][INFO ][cluster.service          ] [Garrison Kane] new_master [Garrison Kane][467vjQt7RTyOg8IEHSMKBg][inet[/192.241.129.232:9300]], reason: ze$
[2013-12-31 17:17:47,303][INFO ][discovery                ] [Garrison Kane] elasticsearch/467vjQt7RTyOg8IEHSMKBg
[2013-12-31 17:17:47,342][INFO ][http                     ] [Garrison Kane] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 17:17:47,343][INFO ][node                     ] [Garrison Kane] started
[2013-12-31 17:17:47,372][INFO ][gateway                  ] [Garrison Kane] recovered [0] indices into cluster_state
[2013-12-31 17:17:59,480][INFO ][cluster.metadata         ] [Garrison Kane] [haystack] creating index, cause [api], shards [5]/[1], mappings []
[2013-12-31 17:18:00,194][DEBUG][action.admin.indices.mapping.put] [Garrison Kane] failed to put mappings on indices [[haystack]], type [modelresult]
org.elasticsearch.ElasticSearchIllegalArgumentException: bool field can't be tokenized
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:93)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:76)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseIndex(TypeParsers.java:185)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseField(TypeParsers.java:75)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$TypeParser.parse(BooleanFieldMapper.java:108)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:262)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:218)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:201)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:183)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:322)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:318)
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$5.execute(MetaDataMappingService.java:533)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:300)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
[2013-12-31 17:23:36,774][INFO ][node                     ] [Rock Python] version[0.90.9], pid[17565], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 17:23:36,775][INFO ][node                     ] [Rock Python] initializing ...
[2013-12-31 17:23:36,783][INFO ][plugins                  ] [Rock Python] loaded [], sites []
[2013-12-31 17:23:40,156][INFO ][node                     ] [Rock Python] initialized
[2013-12-31 17:23:40,156][INFO ][node                     ] [Rock Python] starting ...
[2013-12-31 17:23:40,310][INFO ][transport                ] [Rock Python] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 17:23:43,390][INFO ][cluster.service          ] [Rock Python] new_master [Rock Python][mWPUd96mQyqnlBriAgy-9Q][inet[/192.241.129.232:9300]], reason: zen-di$
[2013-12-31 17:23:43,430][INFO ][discovery                ] [Rock Python] elasticsearch/mWPUd96mQyqnlBriAgy-9Q
[2013-12-31 17:23:43,457][INFO ][http                     ] [Rock Python] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 17:23:43,458][INFO ][node                     ] [Rock Python] started
[2013-12-31 17:23:44,424][INFO ][gateway                  ] [Rock Python] recovered [1] indices into cluster_state
[2013-12-31 17:35:55,614][INFO ][node                     ] [Wraith] version[0.90.9], pid[755], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 17:35:55,618][INFO ][node                     ] [Wraith] initializing ...
[2013-12-31 17:35:55,638][INFO ][plugins                  ] [Wraith] loaded [], sites []
[2013-12-31 17:35:59,536][INFO ][node                     ] [Wraith] initialized
[2013-12-31 17:35:59,537][INFO ][node                     ] [Wraith] starting ...
[2013-12-31 17:35:59,708][INFO ][transport                ] [Wraith] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 17:36:02,785][INFO ][cluster.service          ] [Wraith] new_master [Wraith][5Jgys5vjRcah6LIbcPPecQ][inet[/192.241.129.232:9300]], reason: zen-disco-join ($
[2013-12-31 17:36:02,829][INFO ][discovery                ] [Wraith] elasticsearch/5Jgys5vjRcah6LIbcPPecQ
[2013-12-31 17:36:02,861][INFO ][http                     ] [Wraith] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 17:36:02,862][INFO ][node                     ] [Wraith] started
[2013-12-31 17:36:04,072][INFO ][gateway                  ] [Wraith] recovered [1] indices into cluster_state
[2013-12-31 17:37:23,469][INFO ][cluster.metadata         ] [Wraith] [haystack] deleting index
[2013-12-31 17:37:23,726][INFO ][cluster.metadata         ] [Wraith] [haystack] creating index, cause [api], shards [5]/[1], mappings []
[2013-12-31 17:37:24,200][DEBUG][action.admin.indices.mapping.put] [Wraith] failed to put mappings on indices [[haystack]], type [modelresult]
org.elasticsearch.ElasticSearchIllegalArgumentException: bool field can't be tokenized
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:93)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:76)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseIndex(TypeParsers.java:185)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseField(TypeParsers.java:75)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$TypeParser.parse(BooleanFieldMapper.java:108)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:262)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:218)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:201)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:183)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:322)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:318)
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$5.execute(MetaDataMappingService.java:533)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:300)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
[2013-12-31 17:37:26,856][INFO ][cluster.metadata         ] [Wraith] [haystack] update_mapping [modelresult] (dynamic)
[2013-12-31 17:50:56,446][INFO ][node                     ] [Nikki] version[0.90.9], pid[754], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 17:50:56,446][INFO ][node                     ] [Nikki] initializing ...
[2013-12-31 17:50:56,462][INFO ][plugins                  ] [Nikki] loaded [], sites []
[2013-12-31 17:50:59,849][INFO ][node                     ] [Nikki] initialized
[2013-12-31 17:50:59,850][INFO ][node                     ] [Nikki] starting ...
[2013-12-31 17:50:59,977][INFO ][transport                ] [Nikki] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 17:51:03,174][INFO ][cluster.service          ] [Nikki] new_master [Nikki][e-voUaukTnKHaj50uQDsrA][inet[/192.241.129.232:9300]], reason: zen-disco-join (el$
[2013-12-31 17:51:03,227][INFO ][discovery                ] [Nikki] elasticsearch/e-voUaukTnKHaj50uQDsrA
[2013-12-31 17:51:03,264][INFO ][http                     ] [Nikki] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 17:51:03,265][INFO ][node                     ] [Nikki] started
[2013-12-31 17:51:04,622][INFO ][gateway                  ] [Nikki] recovered [1] indices into cluster_state
[2013-12-31 17:52:20,253][INFO ][cluster.metadata         ] [Nikki] [haystack] deleting index
[2013-12-31 17:52:20,496][INFO ][cluster.metadata         ] [Nikki] [haystack] creating index, cause [api], shards [5]/[1], mappings []
[2013-12-31 17:52:20,973][DEBUG][action.admin.indices.mapping.put] [Nikki] failed to put mappings on indices [[haystack]], type [modelresult]
org.elasticsearch.ElasticSearchIllegalArgumentException: bool field can't be tokenized
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:93)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$Builder.tokenized(BooleanFieldMapper.java:76)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseIndex(TypeParsers.java:185)
        at org.elasticsearch.index.mapper.core.TypeParsers.parseField(TypeParsers.java:75)
        at org.elasticsearch.index.mapper.core.BooleanFieldMapper$TypeParser.parse(BooleanFieldMapper.java:108)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:262)
        at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:218)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:201)
        at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:183)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:322)
        at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:318)
        at org.elasticsearch.cluster.metadata.MetaDataMappingService$5.execute(MetaDataMappingService.java:533)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:300)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)
[2013-12-31 17:52:23,817][INFO ][cluster.metadata         ] [Nikki] [haystack] update_mapping [modelresult] (dynamic)
[2013-12-31 21:14:55,585][INFO ][node                     ] [Wingfoot, Wyatt] version[0.90.9], pid[753], build[a968646/2013-12-23T10:35:28Z]
[2013-12-31 21:14:55,588][INFO ][node                     ] [Wingfoot, Wyatt] initializing ...
[2013-12-31 21:14:55,604][INFO ][plugins                  ] [Wingfoot, Wyatt] loaded [], sites []
[2013-12-31 21:14:59,147][INFO ][node                     ] [Wingfoot, Wyatt] initialized
[2013-12-31 21:14:59,148][INFO ][node                     ] [Wingfoot, Wyatt] starting ...
[2013-12-31 21:14:59,275][INFO ][transport                ] [Wingfoot, Wyatt] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.241.129.232:9300]}
[2013-12-31 21:15:02,599][INFO ][cluster.service          ] [Wingfoot, Wyatt] new_master [Wingfoot, Wyatt][lRhJ4RD0Q9uLoHbaYCPFzA][inet[/192.241.129.232:9300]], reason$
[2013-12-31 21:15:02,648][INFO ][discovery                ] [Wingfoot, Wyatt] elasticsearch/lRhJ4RD0Q9uLoHbaYCPFzA
[2013-12-31 21:15:02,682][INFO ][http                     ] [Wingfoot, Wyatt] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.241.129.232:9200]}
[2013-12-31 21:15:02,683][INFO ][node                     ] [Wingfoot, Wyatt] started
[2013-12-31 21:15:04,150][INFO ][gateway                  ] [Wingfoot, Wyatt] recovered [1] indices into cluster_state
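
Since the log shows the mapping PUT for the [haystack] index (type [modelresult]) is what fails, one way to see whatever mapping did get created is to query the index directly. A sketch, assuming the same host and port as the log above:

import json
import urllib.request

# Fetch the current mapping (if any) for the 'haystack' index.
resp = urllib.request.urlopen("http://localhost:9200/haystack/_mapping")
print(json.dumps(json.loads(resp.read().decode("utf-8")), indent=2))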

search_indexes.py:

import datetime

from haystack import indexes

from finhall.models import Finhall  # import path assumed; adjust to your app


class FinhallIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    name = indexes.CharField(model_attr='name')
    address = indexes.CharField(model_attr='address')

    def get_model(self):
        return Finhall

    def index_queryset(self, using=None):
        # Only index objects published up to now.
        return self.get_model().objects.filter(pub_date__lte=datetime.datetime.now())
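
For reference, the relevant settings.py entry for the Elasticsearch backend looks roughly like this; the index name 'haystack' and port 9200 match the log above, while the URL is an assumption for a local setup:

# settings.py (sketch): point Haystack at the local Elasticsearch node.
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'haystack',
    },
}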

I installed Elasticsearch from the .deb package.

What am I missing?

