March 2020

Original doc: UiSelector. Since Google says it is deprecated, I got worried and hurried to copy it down.
New doc: UiSelector
After comparing them, the two documents are identical as of today (2020-03-09), so it was a false alarm.

UiSelector

This package is part of the Android support library which is no longer maintained. The support library has been superseded by AndroidX which is part of Jetpack. We recommend using the AndroidX libraries in all new projects. You should also consider migrating existing projects to AndroidX.

public class UiSelector
extends Object
java.lang.Object
   ↳    android.support.test.uiautomator.UiSelector

Specifies the elements in the layout hierarchy for tests to target, filtered by properties such as text value, content-description, class name, and state information. You can also target an element by its location in a layout hierarchy.

Summary

Public constructors

UiSelector()

Public methods

UiSelector checkable(boolean val)
Set the search criteria to match widgets that are checkable.

UiSelector checked(boolean val)
Set the search criteria to match widgets that are currently checked (usually for checkboxes).

UiSelector childSelector(UiSelector selector)
Adds a child UiSelector criteria to this selector.

UiSelector className(String className)
Set the search criteria to match the class property for a widget (for example, "android.widget.Button").

UiSelector className(Class type)
Set the search criteria to match the class property for a widget (for example, "android.widget.Button").

UiSelector classNameMatches(String regex)
Set the search criteria to match the class property for a widget, using a regular expression.

UiSelector clickable(boolean val)
Set the search criteria to match widgets that are clickable.

UiSelector description(String desc)
Set the search criteria to match the content-description property for a widget.

UiSelector descriptionContains(String desc)
Set the search criteria to match the content-description property for a widget.

UiSelector descriptionMatches(String regex)
Set the search criteria to match the content-description property for a widget.

UiSelector descriptionStartsWith(String desc)
Set the search criteria to match the content-description property for a widget.

UiSelector enabled(boolean val)
Set the search criteria to match widgets that are enabled.

UiSelector focusable(boolean val)
Set the search criteria to match widgets that are focusable.

UiSelector focused(boolean val)
Set the search criteria to match widgets that have focus.

UiSelector fromParent(UiSelector selector)
Adds a child UiSelector criteria to this selector which is used to start search from the parent widget.

UiSelector index(int index)
Set the search criteria to match the widget by its node index in the layout hierarchy.

UiSelector instance(int instance)
Set the search criteria to match the widget by its instance number.

UiSelector longClickable(boolean val)
Set the search criteria to match widgets that are long-clickable.

UiSelector packageName(String name)
Set the search criteria to match the package name of the application that contains the widget.

UiSelector packageNameMatches(String regex)
Set the search criteria to match the package name of the application that contains the widget.

UiSelector resourceId(String id)
Set the search criteria to match the given resource ID.

UiSelector resourceIdMatches(String regex)
Set the search criteria to match the resource ID of the widget, using a regular expression.

UiSelector scrollable(boolean val)
Set the search criteria to match widgets that are scrollable.

UiSelector selected(boolean val)
Set the search criteria to match widgets that are currently selected.

UiSelector text(String text)
Set the search criteria to match the visible text displayed in a widget (for example, the text label to launch an app).

UiSelector textContains(String text)
Set the search criteria to match the visible text in a widget where the visible text must contain the string in your input argument.

UiSelector textMatches(String regex)
Set the search criteria to match the visible text displayed in a layout element, using a regular expression.

UiSelector textStartsWith(String text)
Set the search criteria to match visible text in a widget that is prefixed by the text parameter.

String toString()

Protected methods

UiSelector cloneSelector()

Inherited methods

From class java.lang.Object

Object  clone()
boolean equals(Object arg0)
void    finalize()
final Class<?>  getClass()
int hashCode()
final void  notify()
final void  notifyAll()
String  toString()
final void  wait(long arg0, int arg1)
final void  wait(long arg0)
final void  wait()

Public constructors

UiSelector
UiSelector ()
Public methods
checkable
UiSelector checkable (boolean val)
Set the search criteria to match widgets that are checkable. Typically, using this search criteria alone is not useful. You should also include additional criteria, such as text, content-description, or the class name for a widget. If no other search criteria is specified, and there is more than one matching widget, the first widget in the tree is selected.

Parameters
val boolean: Value to match
Returns
UiSelector UiSelector with the specified search criteria
checked
UiSelector checked (boolean val)
Set the search criteria to match widgets that are currently checked (usually for checkboxes). Typically, using this search criteria alone is not useful. You should also include additional criteria, such as text, content-description, or the class name for a widget. If no other search criteria is specified, and there is more than one matching widget, the first widget in the tree is selected.

Parameters
val boolean: Value to match
Returns
UiSelector UiSelector with the specified search criteria
childSelector
UiSelector childSelector (UiSelector selector)
Adds a child UiSelector criteria to this selector. Use this selector to narrow the search scope to child widgets under a specific parent widget.

Returns
UiSelector UiSelector with this added search criterion
className
UiSelector className (String className)
Set the search criteria to match the class property for a widget (for example, "android.widget.Button").

Parameters
className String: Value to match
Returns
UiSelector UiSelector with the specified search criteria
className
UiSelector className (Class type)
Set the search criteria to match the class property for a widget (for example, "android.widget.Button").

Parameters
type Class: type
Returns
UiSelector UiSelector with the specified search criteria
classNameMatches
UiSelector classNameMatches (String regex)
Set the search criteria to match the class property for a widget, using a regular expression.

Parameters
regex String: a regular expression
Returns
UiSelector UiSelector with the specified search criteria
clickable
UiSelector clickable (boolean val)
Set the search criteria to match widgets that are clickable. Typically, using this search criteria alone is not useful. You should also include additional criteria, such as text, content-description, or the class name for a widget. If no other search criteria is specified, and there is more than one matching widget, the first widget in the tree is selected.

Parameters
val boolean: Value to match
Returns
UiSelector UiSelector with the specified search criteria
description
UiSelector description (String desc)
Set the search criteria to match the content-description property for a widget. The content-description is typically used by the Android Accessibility framework to provide an audio prompt for the widget when the widget is selected. The content-description for the widget must match exactly with the string in your input argument. Matching is case-sensitive.

Parameters
desc String: Value to match
Returns
UiSelector UiSelector with the specified search criteria
descriptionContains
UiSelector descriptionContains (String desc)
Set the search criteria to match the content-description property for a widget. The content-description is typically used by the Android Accessibility framework to provide an audio prompt for the widget when the widget is selected. The content-description for the widget must contain the string in your input argument. Matching is case-insensitive.

Parameters
desc String: Value to match
Returns
UiSelector UiSelector with the specified search criteria
descriptionMatches
UiSelector descriptionMatches (String regex)
Set the search criteria to match the content-description property for a widget. The content-description is typically used by the Android Accessibility framework to provide an audio prompt for the widget when the widget is selected. The content-description for the widget must match exactly with the string in your input argument.

Parameters
regex String: a regular expression
Returns
UiSelector UiSelector with the specified search criteria
descriptionStartsWith
UiSelector descriptionStartsWith (String desc)
Set the search criteria to match the content-description property for a widget. The content-description is typically used by the Android Accessibility framework to provide an audio prompt for the widget when the widget is selected. The content-description for the widget must start with the string in your input argument. Matching is case-insensitive.

Parameters
desc String: Value to match
Returns
UiSelector UiSelector with the specified search criteria
enabled
UiSelector enabled (boolean val)
Set the search criteria to match widgets that are enabled. Typically, using this search criteria alone is not useful. You should also include additional criteria, such as text, content-description, or the class name for a widget. If no other search criteria is specified, and there is more than one matching widget, the first widget in the tree is selected.

Parameters
val boolean: Value to match
Returns
UiSelector UiSelector with the specified search criteria
focusable
UiSelector focusable (boolean val)
Set the search criteria to match widgets that are focusable. Typically, using this search criteria alone is not useful. You should also include additional criteria, such as text, content-description, or the class name for a widget. If no other search criteria is specified, and there is more than one matching widget, the first widget in the tree is selected.

Parameters
val boolean: Value to match
Returns
UiSelector UiSelector with the specified search criteria
focused
UiSelector focused (boolean val)
Set the search criteria to match widgets that have focus. Typically, using this search criteria alone is not useful. You should also include additional criteria, such as text, content-description, or the class name for a widget. If no other search criteria is specified, and there is more than one matching widget, the first widget in the tree is selected.

Parameters
val boolean: Value to match
Returns
UiSelector UiSelector with the specified search criteria
fromParent
UiSelector fromParent (UiSelector selector)
Adds a child UiSelector criteria to this selector which is used to start the search from the parent widget. Use this selector to narrow the search scope to sibling widgets as well as all child widgets under a parent.

Returns
UiSelector UiSelector with this added search criterion
index
UiSelector index (int index)
Set the search criteria to match the widget by its node index in the layout hierarchy. The index value must be 0 or greater. Using the index can be unreliable and should only be used as a last resort for matching. Instead, consider using the instance(int) method.

Parameters
index int: Value to match
Returns
UiSelector UiSelector with the specified search criteria
instance
UiSelector instance (int instance)
Set the search criteria to match the widget by its instance number. The instance value must be 0 or greater, where the first instance is 0. For example, to simulate a user click on the third image that is enabled in a UI screen, you could specify a search criteria where the instance is 2, the className(String) matches the image widget class, and enabled(boolean) is true. The code would look like this: new UiSelector().className("android.widget.ImageView") .enabled(true).instance(2);

Parameters
instance int: Value to match
Returns
UiSelector UiSelector with the specified search criteria
longClickable
UiSelector longClickable (boolean val)
Set the search criteria to match widgets that are long-clickable. Typically, using this search criteria alone is not useful. You should also include additional criteria, such as text, content-description, or the class name for a widget. If no other search criteria is specified, and there is more than one matching widget, the first widget in the tree is selected.

Parameters
val boolean: Value to match
Returns
UiSelector UiSelector with the specified search criteria
packageName
UiSelector packageName (String name)
Set the search criteria to match the package name of the application that contains the widget.

Parameters
name String: Value to match
Returns
UiSelector UiSelector with the specified search criteria
packageNameMatches
UiSelector packageNameMatches (String regex)
Set the search criteria to match the package name of the application that contains the widget.

Parameters
regex String: a regular expression
Returns
UiSelector UiSelector with the specified search criteria
resourceId
UiSelector resourceId (String id)
Set the search criteria to match the given resource ID.

Parameters
id String: Value to match
Returns
UiSelector UiSelector with the specified search criteria
resourceIdMatches
UiSelector resourceIdMatches (String regex)
Set the search criteria to match the resource ID of the widget, using a regular expression.

Parameters
regex String: a regular expression
Returns
UiSelector UiSelector with the specified search criteria
scrollable
UiSelector scrollable (boolean val)
Set the search criteria to match widgets that are scrollable. Typically, using this search criteria alone is not useful. You should also include additional criteria, such as text, content-description, or the class name for a widget. If no other search criteria is specified, and there is more than one matching widget, the first widget in the tree is selected.

Parameters
val boolean: Value to match
Returns
UiSelector UiSelector with the specified search criteria
selected
UiSelector selected (boolean val)
Set the search criteria to match widgets that are currently selected. Typically, using this search criteria alone is not useful. You should also include additional criteria, such as text, content-description, or the class name for a widget. If no other search criteria is specified, and there is more than one matching widget, the first widget in the tree is selected.

Parameters
val boolean: Value to match
Returns
UiSelector UiSelector with the specified search criteria
text
UiSelector text (String text)
Set the search criteria to match the visible text displayed in a widget (for example, the text label to launch an app). The text for the element must match exactly with the string in your input argument. Matching is case-sensitive.

Parameters
text String: Value to match
Returns
UiSelector UiSelector with the specified search criteria
textContains
UiSelector textContains (String text)
Set the search criteria to match the visible text in a widget where the visible text must contain the string in your input argument. The matching is case-sensitive.

Parameters
text String: Value to match
Returns
UiSelector UiSelector with the specified search criteria
textMatches
UiSelector textMatches (String regex)
Set the search criteria to match the visible text displayed in a layout element, using a regular expression. The text in the widget must match exactly with the string in your input argument.

Parameters
regex String: a regular expression
Returns
UiSelector UiSelector with the specified search criteria
textStartsWith
UiSelector textStartsWith (String text)
Set the search criteria to match visible text in a widget that is prefixed by the text parameter. The matching is case-insensitive.

Parameters
text String: Value to match
Returns
UiSelector UiSelector with the specified search criteria
toString
String toString ()
Returns
String
Protected methods
cloneSelector
UiSelector cloneSelector ()
Returns
UiSelector

Command line tool

New in version 0.10.
Scrapy is controlled through the scrapy command-line tool, referred to here as the "Scrapy tool" to distinguish it from its sub-commands, which we just call "commands" or "Scrapy commands".
The Scrapy tool provides several commands for multiple purposes, and each one accepts a different set of arguments and options.
(The scrapy deploy command has been removed in 1.0 in favor of the standalone scrapyd-deploy. See Deploying your project.)

Configuration settings

Scrapy will look for configuration parameters in ini-style scrapy.cfg files in standard locations:

  1. /etc/scrapy.cfg or c:\scrapy\scrapy.cfg (system-wide),
  2. ~/.config/scrapy.cfg ($XDG_CONFIG_HOME) and ~/.scrapy.cfg ($HOME) for global (user-wide) settings,
  3. scrapy.cfg inside a Scrapy project's root directory (see the next section).

Settings from these files are merged in the listed order of preference: user-defined values take priority over system-wide defaults, and project-wide settings override all the others when defined.
Scrapy also understands, and can be configured through, a number of environment variables; currently these are SCRAPY_SETTINGS_MODULE, SCRAPY_PROJECT and SCRAPY_PYTHON_SHELL.

Default structure of Scrapy projects

Before delving into the command-line tool and its sub-commands, let's first understand the directory structure of a Scrapy project.
Though it can be modified, all Scrapy projects have the same file structure by default, similar to this:

scrapy.cfg
myproject/
    __init__.py
    items.py
    middlewares.py
    pipelines.py
    settings.py
    spiders/
        __init__.py
        spider1.py
        spider2.py
        ...

The directory where the scrapy.cfg file resides is known as the project root directory. That file contains the name of the python module that defines the project settings. Here is an example:

[settings]
default = myproject.settings

Sharing the root directory between projects

A project root directory (the one that contains the scrapy.cfg) can be shared by multiple Scrapy projects, each with its own settings module.
In that case, you must define one or more aliases for those settings modules under [settings] in the scrapy.cfg file:

[settings]
default = myproject1.settings
project1 = myproject1.settings
project2 = myproject2.settings

By default, the scrapy command-line tool will use the default settings. Use the SCRAPY_PROJECT environment variable to specify a different project for scrapy to use:

$ scrapy settings --get BOT_NAME
Project 1 Bot
$ export SCRAPY_PROJECT=project2
$ scrapy settings --get BOT_NAME
Project 2 Bot

Using the scrapy tool

You can start by running the Scrapy tool with no arguments and it will print some usage help and the available commands:

Scrapy X.Y - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  crawl         Run a spider
  fetch         Fetch a URL using the Scrapy downloader
[...]

The first line will print the currently active project if you're inside a Scrapy project. In this example it was run from outside a project. If run from inside a project it would print something like this:

Scrapy X.Y - project: myproject

Usage:
  scrapy <command> [options] [args]

[...]

Creating projects

The first thing you typically do with the scrapy tool is create your Scrapy project:

scrapy startproject myproject [project_dir]

That will create a Scrapy project under the project_dir directory. If project_dir wasn't specified, project_dir will be the same as myproject.
Next, go inside the new project directory:

cd project_dir

And you're ready to manage and control your project from there using the scrapy command.

Controlling projects

You use the scrapy tool from inside your projects to control and manage them.
For example, to create a new spider:

scrapy genspider mydomain mydomain.com

Some Scrapy commands (like crawl) must be run from inside a Scrapy project. See the commands reference below for more information on which commands must be run from inside projects and which not.
Also keep in mind that some commands may behave slightly differently when run from inside a project. For example, if the URL being fetched is associated with a specific spider, the fetch command will use spider-overridden behaviours (such as the user_agent attribute overriding the User-Agent header). This is intentional, as the fetch command is meant to be used to check how spiders download pages.

Available tool commands

This section contains a list of the available built-in commands with a description and some usage examples. Remember, you can always get more info about each command by running:

scrapy <command> -h

And you can see all the available commands with:

scrapy -h

There are two kinds of commands: those that only work from inside a Scrapy project (project-specific commands) and those that also work without an active Scrapy project (global commands), though they may behave slightly differently when run from inside a project (because they would use the project's overridden settings).
Global commands:

  • startproject

  • genspider

  • settings

  • runspider

  • shell

  • fetch

  • view

  • version

Project-only commands:

  • crawl

  • check

  • list

  • edit

  • parse

  • bench

### startproject
* Syntax: `scrapy startproject <project_name> [project_dir]`
* Requires project: no
Creates a new Scrapy project named project_name under the project_dir directory. If project_dir wasn't specified, project_dir will be the same as project_name.
Usage example:

$ scrapy startproject myproject

### genspider
* Syntax: `scrapy genspider [-t template] <name> <domain>`
* Requires project: no
Creates a new spider in the current folder or in the current project's spiders folder, if called from inside a project. The <name> parameter is set as the spider's name, while <domain> is used to generate the allowed_domains and start_urls attributes of the spider.
Usage example:

$ scrapy genspider -l
Available templates:
  basic
  crawl
  csvfeed
  xmlfeed

$ scrapy genspider example example.com
Created spider 'example' using template 'basic'

$ scrapy genspider -t crawl scrapyorg scrapy.org
Created spider 'scrapyorg' using template 'crawl'

This is just a convenience shortcut command for creating spiders based on pre-defined templates, but certainly not the only way to create spiders. You could just create the spider source code files yourself, instead of using this command.
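For example, a hand-written spider file is just a Python module that defines a Spider subclass. A minimal sketch (the file location, spider name and selector below are purely illustrative, not what the genspider templates generate):

import scrapy


class MyDomainSpider(scrapy.Spider):
    # save as e.g. myproject/spiders/mydomain.py inside a project
    name = 'mydomain'
    start_urls = ['http://mydomain.com/']

    def parse(self, response):
        # extract whatever you need from the downloaded page
        yield {'title': response.css('title::text').get()}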

### crawl
* Syntax: `scrapy crawl <spider>`
* Requires project: yes
Start crawling using a spider.
Usage example:

$ scrapy crawl myspider
[ ... myspider starts crawling ... ]

### check
* Syntax: `scrapy check [-l] <spider>`
* Requires project: yes
Run contract checks.
Usage example:

$ scrapy check -l
first_spider
  * parse
  * parse_item
second_spider
  * parse
  * parse_item

$ scrapy check
[FAILED] first_spider:parse_item
'RetailPricex' field is missing

[FAILED] first_spider:parse
Returned 92 requests, expected 0..4

### list
* Syntax: `scrapy list`
* Requires project: yes
List all available spiders in the current project. The output is one spider per line.
Usage example:

$ scrapy list
spider1
spider2


### edit
* Syntax: `scrapy edit <spider>`
* Requires project: yes
Edit the given spider using the editor defined in the EDITOR environment variable or (if unset) the EDITOR setting.

In most cases this command is provided only as a convenience shortcut; the developer is of course free to choose any tool or IDE to write and debug spiders.

Usage example:

$ scrapy edit spider1

### fetch
* Syntax: `scrapy fetch <url>`
* Requires project: no
Downloads the given URL using the Scrapy downloader and writes the contents to standard output.
The interesting thing about this command is that it fetches the page the way the spider would download it. For example, if the spider has a USER_AGENT attribute which overrides the User Agent, it will use that one.
So this command can be used to "see" how your spider would fetch a certain page.
If used outside a project, no particular per-spider behaviour is applied and it just uses the default Scrapy downloader settings.
Supported options:
  • --spider=SPIDER: bypass spider autodetection and force use of a specific spider
  • --headers: print the response's HTTP headers instead of the response's body
  • --no-redirect: do not follow HTTP 3xx redirects (default is to follow them)
Usage example:

$ scrapy fetch --nolog http://www.example.com/some/page.html
[ ... html content here ... ]

$ scrapy fetch --nolog --headers http://www.example.com/
{'Accept-Ranges': ['bytes'],
'Age': ['1263 '],
'Connection': ['close '],
'Content-Length': ['596'],
'Content-Type': ['text/html; charset=UTF-8'],
'Date': ['Wed, 18 Aug 2010 23:59:46 GMT'],
'Etag': ['"573c1-254-48c9c87349680"'],
'Last-Modified': ['Fri, 30 Jul 2010 15:30:18 GMT'],
'Server': ['Apache/2.2.3 (CentOS)']}


### view
* Syntax: `scrapy view <url>`
* Requires project: no
Opens the given URL in a browser, as your Scrapy spider would "see" it. Sometimes spiders see pages differently from regular users, so this can be used to check what the spider "sees" and confirm it's what you expect.
Supported options:

--spider=SPIDER: bypass spider autodetection and force use of specific spider
--no-redirect: do not follow HTTP 3xx redirects (default is to follow them)
Usage example:

$ scrapy view http://www.example.com/some/page.html
[ ... browser starts ... ]
shell
Syntax: scrapy shell [url]
Requires project: no
Starts the Scrapy shell for the given URL (if given) or empty if no URL is given. Also supports UNIX-style local file paths, either relative with ./ or ../ prefixes or absolute file paths. See Scrapy shell for more info.

Supported options:

--spider=SPIDER: bypass spider autodetection and force use of specific spider
-c code: evaluate the code in the shell, print the result and exit
--no-redirect: do not follow HTTP 3xx redirects (default is to follow them); this only affects the URL you may pass as argument on the command line; once you are inside the shell, fetch(url) will still follow HTTP redirects by default.
Usage example:

$ scrapy shell http://www.example.com/some/page.html
[ ... scrapy shell starts ... ]

$ scrapy shell --nolog http://www.example.com/ -c '(response.status, response.url)'
(200, 'http://www.example.com/')

# shell follows HTTP redirects by default
$ scrapy shell --nolog http://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F -c '(response.status, response.url)'
(200, 'http://example.com/')

# you can disable this with --no-redirect
# (only for the URL passed as command line argument)
$ scrapy shell --no-redirect --nolog http://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F -c '(response.status, response.url)'
(302, 'http://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F')
parse
Syntax: scrapy parse <url> [options]
Requires project: yes
Fetches the given URL and parses it with the spider that handles it, using the method passed with the --callback option, or parse if not given.

Supported options:

--spider=SPIDER: bypass spider autodetection and force use of specific spider
--a NAME=VALUE: set spider argument (may be repeated)
--callback or -c: spider method to use as callback for parsing the response
--meta or -m: additional request meta that will be passed to the callback request. This must be a valid json string. Example: --meta='{"foo" : "bar"}'
--cbkwargs: additional keyword arguments that will be passed to the callback. This must be a valid json string. Example: --cbkwargs='{"foo" : "bar"}'
--pipelines: process items through pipelines
--rules or -r: use CrawlSpider rules to discover the callback (i.e. spider method) to use for parsing the response
--noitems: don’t show scraped items
--nolinks: don’t show extracted links
--nocolour: avoid using pygments to colorize the output
--depth or -d: depth level for which the requests should be followed recursively (default: 1)
--verbose or -v: display information for each depth level
Usage example:

$ scrapy parse http://www.example.com/ -c parse_item
[ ... scrapy log lines crawling example.com spider ... ]

>>> STATUS DEPTH LEVEL 1 <<<
# Scraped Items  ------------------------------------------------------------
[{'name': 'Example item',
 'category': 'Furniture',
 'length': '12 cm'}]

# Requests  -----------------------------------------------------------------
[]
settings
Syntax: scrapy settings [options]
Requires project: no
Get the value of a Scrapy setting.

If used inside a project it’ll show the project setting value, otherwise it’ll show the default Scrapy value for that setting.

Example usage:

$ scrapy settings --get BOT_NAME
scrapybot
$ scrapy settings --get DOWNLOAD_DELAY
0
runspider
Syntax: scrapy runspider <spider_file.py>
Requires project: no
Run a spider self-contained in a Python file, without having to create a project.

Example usage:

$ scrapy runspider myspider.py
[ ... spider starts crawling ... ]
version
Syntax: scrapy version [-v]
Requires project: no
Prints the Scrapy version. If used with -v it also prints Python, Twisted and Platform info, which is useful for bug reports.

bench
New in version 0.17.

Syntax: scrapy bench
Requires project: no
Run a quick benchmark test. See Benchmarking.

Custom project commands
You can also add your custom project commands by using the COMMANDS_MODULE setting. See the Scrapy commands in scrapy/commands for examples on how to implement your commands.

COMMANDS_MODULE
Default: '' (empty string)

A module to use for looking up custom Scrapy commands. This is used to add custom commands for your Scrapy project.

Example:

COMMANDS_MODULE = 'mybot.commands'
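As a rough sketch of what a module inside such a package could contain (hedged: the module path and the command's behaviour below are made up; the structure mirrors the built-in commands in scrapy/commands, where the command name comes from the module name):

# mybot/commands/botname.py -> usable as "scrapy botname"
from scrapy.commands import ScrapyCommand


class Command(ScrapyCommand):
    requires_project = True  # only available inside a project

    def syntax(self):
        return "[options]"

    def short_desc(self):
        return "Print the project's BOT_NAME setting"

    def run(self, args, opts):
        # self.settings holds the merged project settings
        print(self.settings.get("BOT_NAME"))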
Register commands via setup.py entry points
Note

This is an experimental feature, use with caution.

You can also add Scrapy commands from an external library by adding a scrapy.commands section in the entry points of the library setup.py file.

The following example adds my_command command:

from setuptools import setup, find_packages

setup(name='scrapy-mymodule',
  entry_points={
    'scrapy.commands': [
      'my_command=my_scrapy_module.commands:MyCommand',
    ],
  },
 )

Read the original article

Examples

The best way to learn is with examples, and Scrapy is no exception. That's why there is an example Scrapy project named quotesbot that you can use to play with and learn more about Scrapy. It contains two spiders for http://quotes.toscrape.com, one using CSS selectors and the other using XPath expressions.
The quotesbot project is available at https://github.com/scrapy/quotesbot. You can find more information about it in the project's README.
If you're familiar with git, you can check out the code. Otherwise you can download the project as a zip file by clicking here.

Read the original: Scrapy tutorial

Scrapy Tutorial

In this tutorial, we'll assume that Scrapy is already installed on your system.
If that's not the case, see the installation guide.

We are going to scrape quotes.toscrape.com, a website that lists quotes from famous authors.
This tutorial will walk you through these tasks:

  • Creating a new Scrapy project
  • Writing a spider to crawl a site and extract data
  • Exporting the scraped data using the command line
  • Changing the spider to recursively follow links
  • Using spider arguments

Scrapy is written in Python.
If you're new to the language you might want to start by getting an idea of what the language is like, to get the most out of Scrapy.

If you're already familiar with other languages and want to learn Python quickly, the Python Tutorial is a good resource.

If you're new to programming and want to start with Python, the following books may be useful to you:

You can also take a look at this list of Python resources for non-programmers, as well as the suggested resources in the learnpython-subreddit.

Creating a project

Before you start scraping, you will have to set up a new Scrapy project.
Enter a directory where you'd like to store your code and run:

scrapy startproject tutorial

This will create a tutorial directory with the following contents:

tutorial/
    scrapy.cfg            # deploy configuration file

    tutorial/             # project's Python module, you'll import your code from here
        __init__.py

        items.py          # project items definition file

        middlewares.py    # project middlewares file

        pipelines.py      # project pipelines file

        settings.py       # project settings file

        spiders/          # a directory where you'll later put your spiders
            __init__.py

Our first Spider

Spiders are classes that you define and that Scrapy uses to scrape information from a website (or a group of websites).
They must subclass Spider and define the initial requests to make, and optionally how to follow links in the pages and how to parse the downloaded page content to extract data.

This is the code for our first Spider.
Save it in a file named quotes_spider.py under the tutorial/spiders directory in your project:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

As you can see, our Spider subclasses scrapy.Spider and defines some attributes and methods:

  • name: identifies the Spider. It must be unique within a project; that is, you can't set the same name for different Spiders.

  • start_requests(): must return an iterable of Requests (you can return a list of requests or write a generator function) which the Spider will begin to crawl from. Subsequent requests will be generated successively from these initial requests.

  • parse(): a method that will be called to handle the response downloaded for each of the requests made.
    The response parameter is an instance of TextResponse that holds the page content and has further helpful methods to handle it.

The parse() method usually parses the response, extracting the scraped data as dicts, and also finds new URLs to follow and creates new requests (Request) from them.

How to run our spider

To put our spider to work, go to the project's top-level directory and run:

scrapy crawl quotes

This command runs the spider with the name quotes that we've just added, and it will send some requests to the quotes.toscrape.com domain. You will get an output similar to this:

... (omitted for brevity)
2016-12-16 21:24:05 [scrapy.core.engine] INFO: Spider opened
2016-12-16 21:24:05 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-12-16 21:24:05 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-12-16 21:24:05 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2016-12-16 21:24:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None)
2016-12-16 21:24:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/2/> (referer: None)
2016-12-16 21:24:05 [quotes] DEBUG: Saved file quotes-1.html
2016-12-16 21:24:05 [quotes] DEBUG: Saved file quotes-2.html
2016-12-16 21:24:05 [scrapy.core.engine] INFO: Closing spider (finished)
...

Now, check the files in the current directory. You should notice that two new files have been created: quotes-1.html and quotes-2.html, with the content for the respective URLs, as our parse method instructs.

Note

If you are wondering why we haven't parsed the HTML yet, hold on, we will cover that soon.

What just happened under the hood?

Scrapy schedules the scrapy.Request objects returned by the start_requests method of the Spider.
Upon receiving a response for each one, it instantiates Response objects and calls the callback method associated with the request (in this case, the parse method), passing the response as an argument.

A shortcut to the start_requests method

Instead of implementing a start_requests() method that generates scrapy.Request objects from URLs, you can just define a start_urls class attribute with a list of URLs. This list will then be used by the default implementation of start_requests() to create the initial requests for your spider:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)

The parse() method will be called to handle each of the requests for those URLs, even though we haven't explicitly told Scrapy to do so.
This happens because parse() is Scrapy's default callback method, which is called for requests without an explicitly assigned callback.

Extracting data

The best way to learn how to extract data with Scrapy is trying selectors using the Scrapy shell. Run:

scrapy shell 'http://quotes.toscrape.com/page/1/'

Note

Remember to always enclose urls in quotes when running the Scrapy shell from the command line, otherwise urls containing arguments (i.e. the & character) will not work.

On Windows, use double quotes instead:

scrapy shell "http://quotes.toscrape.com/page/1/"

You will see something like:

[ ... Scrapy log here ... ]
2016-09-19 12:09:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x7fa91d888c90>
[s]   item       {}
[s]   request    <GET http://quotes.toscrape.com/page/1/>
[s]   response   <200 http://quotes.toscrape.com/page/1/>
[s]   settings   <scrapy.settings.Settings object at 0x7fa91d888c10>
[s]   spider     <DefaultSpider 'default' at 0x7fa91c8af990>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser

Using the shell, you can try selecting elements using CSS with the response object:

>>> response.css('title')
[<Selector xpath='descendant-or-self::title' data='<title>Quotes to Scrape</title>'>]

The result of running response.css('title') is a list-like object called SelectorList, which represents a list of Selector objects that wrap around XML/HTML elements and allow you to run further queries to fine-grain the selection or extract the data.

To extract the text from the title above, you can do:

>>> response.css('title::text').getall()
['Quotes to Scrape']

There are two things to note here: one is that we've added ::text to the CSS query, to mean we want to select only the text elements directly inside the <title> element. If we don't specify ::text, we'd get the full title element, including its tags:

>>> response.css('title').getall()
['<title>Quotes to Scrape</title>']

The other thing is that the result of calling .getall() is a list: it is possible that a selector returns more than one result, so we extract them all.
When you know you just want the first result, as in this case, you can do:

>>> response.css('title::text').get()
'Quotes to Scrape'

Alternatively, you could have written:

>>> response.css('title::text')[0].get()
'Quotes to Scrape'

However, using .get() directly on a SelectorList instance avoids an IndexError and returns None when it doesn't find any element matching the selection.

There's a lesson here: for most scraping code, you want it to be resilient to errors due to things not being found on a page, so that even if some parts fail to be scraped, you can at least get some data.
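For instance (a quick shell illustration; the class name below is deliberately one that does not exist on the page, so the selector matches nothing):

>>> response.css('div.does-not-exist::text').get() is None
True
>>> response.css('div.does-not-exist::text').get(default='not-found')
'not-found'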
Besides the getall() and get() methods, you can also use the re() method to extract using regular expressions:

>>> response.css('title::text').re(r'Quotes.*')
['Quotes to Scrape']
>>> response.css('title::text').re(r'Q\w+')
['Quotes']
>>> response.css('title::text').re(r'(\w+) to (\w+)')
['Quotes', 'Scrape']

In order to find the proper CSS selectors to use, you might find it useful to open the response page from the shell in your web browser using view(response). You can use your browser's developer tools to inspect the HTML and come up with a selector (see Using your browser's Developer Tools for scraping).

Selector Gadget is also a nice tool to quickly find CSS selectors for visually selected elements, which works in many browsers.

XPath: a brief intro

Besides CSS, Scrapy selectors also support using XPath expressions:

>>> response.xpath('//title')
[<Selector xpath='//title' data='<title>Quotes to Scrape</title>'>]
>>> response.xpath('//title/text()').get()
'Quotes to Scrape'

XPath expressions are very powerful, and are the foundation of Scrapy Selectors. In fact, CSS selectors are converted to XPath under the hood. You can see that if you read the text representation of the selector objects in the shell closely.

While perhaps not as popular as CSS selectors, XPath expressions offer more power because, besides navigating the structure, they can also look at the content. Using XPath you're able to select things like: the link that contains the text "Next Page". This makes XPath very fitting to the task of scraping, and we encourage you to learn XPath even if you already know how to construct CSS selectors: it will make scraping much easier.

We won't cover much of XPath here, but you can read more about using XPath with Scrapy Selectors here. To learn more about XPath, we recommend this tutorial to learn XPath through examples, and this tutorial to learn "how to think in XPath".
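As a quick, hedged illustration of matching by text, run against page 1 of quotes.toscrape.com (whose "Next →" pager link is shown later in this tutorial):

>>> # the CSS way, by class
>>> response.css('li.next a::attr(href)').get()
'/page/2/'
>>> # the XPath way, by the text the link contains
>>> response.xpath('//a[contains(text(), "Next")]/@href').get()
'/page/2/'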

Extracting quotes and authors

Now that you know a bit about selection and extraction, let's complete our spider by writing the code to extract the quotes from the web page.
Each quote in http://quotes.toscrape.com is represented by HTML elements that look like this:

<div class="quote">
    <span class="text">“The world as we have created it is a process of our
    thinking. It cannot be changed without changing our thinking.”</span>
    <span>
        by <small class="author">Albert Einstein</small>
        <a href="/author/Albert-Einstein">(about)</a>
    </span>
    <div class="tags">
        Tags:
        <a class="tag" href="/tag/change/page/1/">change</a>
        <a class="tag" href="/tag/deep-thoughts/page/1/">deep-thoughts</a>
        <a class="tag" href="/tag/thinking/page/1/">thinking</a>
        <a class="tag" href="/tag/world/page/1/">world</a>
    </div>
</div>

Let's open up the scrapy shell and play a bit to find out how to extract the data we want:

$ scrapy shell 'http://quotes.toscrape.com'

We get a list of selectors for the quote HTML elements with:

>>> response.css("div.quote")
[<Selector xpath="descendant-or-self::div[@class and contains(concat(' ', normalize-space(@class), ' '), ' quote ')]" data='<div class="quote" itemscope itemtype...'>,
 <Selector xpath="descendant-or-self::div[@class and contains(concat(' ', normalize-space(@class), ' '), ' quote ')]" data='<div class="quote" itemscope itemtype...'>,
 ...]

Each of the selectors returned by the query above allows us to run further queries over their sub-elements. Let's assign the first selector to a variable, so that we can run our CSS selectors directly on a particular quote:

>>> quote = response.css("div.quote")[0]

Now, let's extract the text, the author and the tags from that quote using the quote object we just created:

>>> text = quote.css("span.text::text").get()
>>> text
'“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'
>>> author = quote.css("small.author::text").get()
>>> author
'Albert Einstein'

Given that the tags are a list of strings, we can use the .getall() method to get all of them:

>>> tags = quote.css("div.tags a.tag::text").getall()
>>> tags
['change', 'deep-thoughts', 'thinking', 'world']

Having figured out how to extract each bit, we can now iterate over all the quote elements and put them together into a Python dictionary:

>>> for quote in response.css("div.quote"):
...     text = quote.css("span.text::text").get()
...     author = quote.css("small.author::text").get()
...     tags = quote.css("div.tags a.tag::text").getall()
...     print(dict(text=text, author=author, tags=tags))
{'text': '“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”', 'author': 'Albert Einstein', 'tags': ['change', 'deep-thoughts', 'thinking', 'world']}
{'text': '“It is our choices, Harry, that show what we truly are, far more than our abilities.”', 'author': 'J.K. Rowling', 'tags': ['abilities', 'choices']}
...

Extracting data in our spider

Let's get back to our spider. Until now, it doesn't extract any data in particular, it just saves the whole HTML page to a local file. Let's integrate the extraction logic above into our spider.

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

If you run this spider, it will output the extracted data with the log:

2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['life', 'love'], 'author': 'André Gide', 'text': '“It is better to be hated for what you are than to be loved for what you are not.”'}
2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['edison', 'failure', 'inspirational', 'paraphrased'], 'author': 'Thomas A. Edison', 'text': "“I have not failed. I've just found 10,000 ways that won't work.”"}

Storing the scraped data

The simplest way to store the scraped data is by using Feed exports, with the following command:

scrapy crawl quotes -o quotes.json

That will generate a quotes.json file containing all scraped items, serialized in JSON.

For historic reasons, Scrapy appends to a given file instead of overwriting its contents. If you run this command twice without removing the file before the second time, you'll end up with a broken JSON file.
You can also use other formats, like JSON Lines:

scrapy crawl quotes -o quotes.jl

The JSON Lines format is useful because it's stream-like: you can easily append new records to it. It doesn't have the same problem as JSON when you run it twice. Also, as each record is a separate line, you can process big files without having to fit everything in memory; there are tools like JQ to help do that at the command line.
In small projects (like the one in this tutorial), that should be enough. However, if you want to perform more complex things with the scraped items, you can write an Item Pipeline. A placeholder file for Item Pipelines was set up for you when the project was created, in tutorial/pipelines.py. You don't need to implement any item pipeline if you just want to store the scraped items.
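As a rough, illustrative sketch of what such a pipeline can look like (hedged: this one just writes JSON Lines, much like the -o export above; it is not required by the tutorial and it has to be enabled through the ITEM_PIPELINES setting):

import json


class JsonWriterPipeline:
    def open_spider(self, spider):
        # called once when the spider starts
        self.file = open('quotes_pipeline.jl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        # called once when the spider finishes
        self.file.close()

    def process_item(self, item, spider):
        # called for every item the spider yields
        self.file.write(json.dumps(item, ensure_ascii=False) + '\n')
        return item

Enabling it would look like ITEM_PIPELINES = {'tutorial.pipelines.JsonWriterPipeline': 300} in settings.py.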

Following links

Let's say, instead of just scraping the stuff from the first two pages of http://quotes.toscrape.com, you want quotes from all the pages of the website.

Now that you know how to extract data from pages, let's see how to follow links from them.

The first thing to do is extract the link to the page we want to follow. Examining our page, we can see there is a link to the next page with the following markup:

<ul class="pager">
    <li class="next">
        <a href="/page/2/">Next <span aria-hidden="true">&rarr;</span></a>
    </li>
</ul>

We can try extracting it in the shell:

>>> response.css('li.next a').get()
'<a href="/page/2/">Next <span aria-hidden="true">→</span></a>'

This gets the anchor element, but we want the attribute href. For that, Scrapy supports a CSS extension that lets you select the attribute contents, like this:

>>> response.css('li.next a::attr(href)').get()
'/page/2/'

There is also an attrib property available (see Selecting element attributes for more):

>>> response.css('li.next a').attrib['href']
'/page/2/'

Let's see now our spider, modified to recursively follow the link to the next page, extracting data from it:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

Now, after extracting the data, the parse() method looks for the link to the next page, builds a full absolute URL using the urljoin() method (since the links can be relative) and yields a new request to the next page, registering itself as the callback to handle the data extraction for the next page and to keep the crawling going through all the pages.
What you see here is Scrapy's mechanism of following links: when you yield a Request in a callback method, Scrapy will schedule that request to be sent and register a callback method to be executed when that request finishes.
Using this, you can build complex crawlers that follow links according to rules you define, and extract different kinds of data depending on the page being visited.
In our example it creates a sort of loop, following all the links to the next page until it doesn't find one, which is handy for crawling blogs, forums and other sites with pagination.

A shortcut for creating Requests

As a shortcut for creating Request objects you can use response.follow:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)

scrapy.Request不同,response.follow直接支持相对URL----无需调用urljoin。注意response.follow仅返回一个Request实例;您仍然需要产生此请求。
您还可以将选择器而不是字符串传递给response.follow。该选择器应提取必要的属性:

for href in response.css('ul.pager a::attr(href)'):
    yield response.follow(href, callback=self.parse)

For <a> elements there is a shortcut: response.follow uses their href attribute automatically, so the code can be shortened further:

for a in response.css('ul.pager a'):
    yield response.follow(a, callback=self.parse)

To create multiple requests from an iterable, you can use response.follow_all instead:

anchors = response.css('ul.pager a')
yield from response.follow_all(anchors, callback=self.parse)

Or, shortening it further:

yield from response.follow_all(css='ul.pager a', callback=self.parse)

More examples and patterns

Here is another spider that illustrates callbacks and following links, this time for scraping author information:

import scrapy


class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        author_page_links = response.css('.author + a')
        yield from response.follow_all(author_page_links, self.parse_author)

        pagination_links = response.css('li.next a')
        yield from response.follow_all(pagination_links, self.parse)

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).get(default='').strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }

This spider will start from the main page; it will follow all the links to the author pages, calling the parse_author callback for each of them, and also the pagination links with the parse callback, as we saw before.

Here we're passing callbacks to response.follow_all as positional arguments to make the code shorter; it also works for Request.
The parse_author callback defines a helper function to extract and clean the data from a CSS query and yields the Python dict with the author data.
Another interesting thing this spider demonstrates is that, even if there are many quotes from the same author, we don't need to worry about visiting the same author page multiple times. By default, Scrapy filters out duplicated requests to URLs already visited, avoiding the problem of hitting servers too much because of a programming mistake. This can be configured through the DUPEFILTER_CLASS setting.
Hopefully by now you have a good understanding of how to use the mechanism of following links and callbacks with Scrapy.
As yet another example of a spider that leverages the mechanism of following links, check out the CrawlSpider class, a generic spider that implements a small rules engine that you can use to write your crawlers on top of.
Also, a common pattern is to build an item with data from more than one page, using a trick to pass additional data to the callbacks (see the sketch right after this paragraph).
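A minimal, hedged sketch of that pattern using cb_kwargs (supported by Request and response.follow since Scrapy 1.7; the spider name and the extra field are made up for illustration):

import scrapy


class QuoteAuthorSpider(scrapy.Spider):
    name = 'quote-author'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        for quote in response.css('div.quote'):
            partial = {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
            }
            author_href = quote.css('a::attr(href)').get()  # the "(about)" link
            # hand the partially built item over to the next callback;
            # dont_filter because several quotes share the same author page
            yield response.follow(author_href, self.parse_author,
                                  cb_kwargs={'item': partial},
                                  dont_filter=True)

    def parse_author(self, response, item):
        # finish the item with data from the author page and yield it
        item['birthdate'] = response.css('.author-born-date::text').get()
        yield item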

Using spider arguments

You can provide command line arguments to your spiders by using the -a option when running them:

scrapy crawl quotes -o quotes-humor.json -a tag=humor

These arguments are passed to the Spider's __init__ method and become spider attributes by default.

In this example, the value provided for the tag argument will be available via self.tag. You can use this to make your spider fetch only quotes with a specific tag, building the URL based on the argument:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        url = 'http://quotes.toscrape.com/'
        tag = getattr(self, 'tag', None)
        if tag is not None:
            url = url + 'tag/' + tag
        yield scrapy.Request(url, self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
            }

        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

If you pass the tag=humor argument to this spider, you'll notice that it will only visit URLs from the humor tag, such as http://quotes.toscrape.com/tag/humor.
You can learn more about handling spider arguments here.

Next steps

This tutorial covered only the basics of Scrapy, but there are a lot of other features not mentioned here. Check the "What else?" section in the Scrapy at a glance chapter for a quick overview of the most important ones.
You can continue from the Basic concepts section to know more about the command-line tool, spiders, selectors and other things this tutorial hasn't covered, like modeling the scraped data. If you prefer to play with an example project, check the Examples section.

The most accurate statement of agile thinking is, of course, the Agile Manifesto itself. In my own words:
Agile means breaking big things into small things, and finishing each small thing completely.
Breaking big things into small things sounds a lot like the work breakdown (WBD) of traditional project management, and like the way many Westerners describe getting things done. There is an important difference, though: agile breaks a big task into small tasks, while WBD breaks a big task into small steps.
Finishing each small thing completely: in agile, every small task is a complete piece of work that needs the same full process as the big one, from design through execution to verification. That is the biggest difference from traditional WBD. WBD only splits one task into many steps; it does not split it into many tasks. Completing a small task means something is actually done, while completing one small WBD step does not mean anything is finished.
So every small task is something that can be presented, and the organic connection between the small tasks is what accomplishes the big one.

Back in 2018 someone asked this question in the WeChat developer community, and the answer at the time was that it couldn't be done. But the asker pointed out: why can another vendor, Xiaowen Zhineng, do it?
After testing it myself, I found that AP-mode provisioning is indeed possible through the WeChat APIs, because the Mini Program Wi-Fi interface has two capabilities: getting the surrounding SSIDs, and connecting to an SSID.
SDK docs

How to get the SSIDs

Request the geolocation permission in app.json:

  "permission": {
    "scope.userLocation": {
      "desc": "你的位置信息将用于小程序连接WIFI"
    }
  }

In the page JS the flow is onGetWifiList -> startWifi -> getWifiList. The odd thing about the Mini Program API is that the get function does not return the information directly; it only reports success or failure and then fires an event, and you have to use a separate function that listens for that event to actually receive the data. It feels needlessly roundabout.

  onReady: function () { // on launch, first listen for the event that delivers the Wi-Fi list
    wx.onGetWifiList(function (res){
      console.log(res)
    })
  },
  startwifi:function(){ // initialize the Wi-Fi module; this does not turn on the device's Wi-Fi, it only enables the Mini Program's Wi-Fi capability
    wx.startWifi({
      success(res) {
        console.log(res.errMsg)
      }
    })
  },
  getwifilist: function () { // request the Wi-Fi list; on success an event is fired and caught by the listener above
    console.log('getwifilist')
    wx.getWifiList({
      complete(res) {
        console.log(res)
      }
    })
  },

The shape of wifiList:

{wifiList: []}

Android example (copied from the console output):

wifiList: Array(24)
0: {SSID: "KK5G", BSSID: "xx:xx:xx:xx:xx:xx", secure: true, signalStrength: 77}
1: {SSID: "DIRECT-5BDESKTOP-0M7QR80msVL", BSSID: "xx:xx:xx:xx:xx:xx", secure: true, signalStrength: 99}
2: {SSID: "ChinaNet-UFsN", BSSID: "xx:xx:xx:xx:xx:xx", secure: true, signalStrength: 44}
3: {SSID: "KK", BSSID: "xx:xx:xx:xx:xx:xx", secure: true, signalStrength: 46}

iOS example. Surprisingly the element structure differs from Android, and signalStrength is defined differently too: one is an integer, the other a fraction between 0 and 1. Fortunately the SSID and BSSID fields are the same on both.
iOS has another problem: wx.onGetWifiList never enters its callback. Fortunately there is an official answer: getWifiList brings up the WeChat permission page; you have to go back up one level to the main Settings page, then tap Wi-Fi, and only after the list refreshes will the onGetWifiList callback be delivered.

wifiList: Array(18)
0: {SSID: "ROADSUN2", autoJoined: false, signalStrength: 0.26170599460601807, justJoined: false, BSSID: "xx:xx:xx:xx:xx:xx", …}
1: {SSID: "TP-LINK_090C", autoJoined: false, signalStrength: 0.3535159230232239, justJoined: false, BSSID: "xx:xx:xx:xx:xx:xx", …}
2: {SSID: "office1_2.4GHz", autoJoined: false, signalStrength: 0.39624282717704773, justJoined: false, BSSID: "xx:xx:xx:xx:xx:xx", …}
3: {SSID: "408a", autoJoined: false, signalStrength: 0.5117818117141724, justJoined: false, BSSID: "xx:xx:xx:xx:xx:xx", …}
4: {SSID: "ChinaNet-ePMi", autoJoined: false, signalStrength: 0.5352672934532166, justJoined: false, BSSID: "xx:xx:xx:xx:xx:xx", …}

// one element expanded:
{
BSSID: "xx:xx:xx:xx:xx:xx"
SSID: "ROADSUN2"
autoJoined: false
justJoined: false
secure: true
signalStrength: 0.26170599460601807
}

Note that the returned data doesn't even say whether a network is 2.4G or 5G...

A possible provisioning flow

The setup process goes like this:

  1. The smart device enters Station mode, scans the surrounding Wi-Fi SSIDs and stores them.
  2. The smart device switches to AP mode and waits for the Mini Program to connect.
  3. The Mini Program gets the surrounding Wi-Fi SSIDs; iOS and Android have to be handled separately here because the experience differs. (This step can also be skipped, using the device's scan as the source of truth.)
  4. The Mini Program connects to the smart device's AP.
  5. The Mini Program fetches the device's SSID list through the device's API.
  6. The Mini Program's SSIDs and the device's SSIDs are intersected and presented to the user for selection (filtering for 2.4G networks).
  7. The user picks a network and enters the password.
  8. The Mini Program first tries to connect to that network itself; this disconnects it from the smart device, and if the password is wrong the user is asked to re-enter it.
  9. Once the Mini Program has connected successfully, it reconnects to the device's AP and formally tells the smart device to join the Wi-Fi network.
  10. When that is done, the Mini Program reconnects to the Wi-Fi network itself. Finished.

Compatibility issues and platform differences

  • Version requirement: base library 1.6.0, which 99.99% of clients now support.
  • The getWifiList interface: on iOS it jumps to the system Wi-Fi page, on Android it does not. On iOS 11.0 and iOS 11.1 the method is broken due to a system issue; this was fixed in iOS 11.2.
  • iOS 11.0 was released on 2017-09-17 and the first beta of 11.2 on 2017-10-31; the phones released at the same time (2017-09-13) were the eleventh generation: iPhone 8, iPhone 8 Plus and iPhone X.
  • connectWifi is only supported on Android and on iOS 11 or later. The current iOS version is already 13.3.

Abroad

GitHub goes without saying, except that private repositories cost money.
GitLab is also excellent, especially as the first choice for self-hosting.

In China

Gitee (码云) is run by the OSCHINA team. Although it's a smallish Shenzhen company, it does support private repositories, even if privacy protection in China is always questionable. Private repositories are limited to 5 collaborators in total.
coding.net, also in Shenzhen, seems to be in the process of being acquired by Tencent; Tencent's own hosting platform, the Tencent Cloud developer platform, now redirects to coding when you register. coding registers you as a team, free for teams of up to 5 people. It offers more than git repositories: there's a whole agile-management suite, rather like tapd plus GitHub, and even some continuous integration and continuous testing on top.
Alibaba Cloud's code hosting platform has a strange login flow: after signing in with an Alibaba Cloud account it asks you to create yet another account. It caps you at 50 repositories, private repositories allowed. That kind of cap doesn't feel very welcoming.

git repository
Documentation
Windows builds: the current recommendation is that 4.0 works best.
Language data: put chi_sim.traineddata and chi_sim_vert.traineddata from the repository root into the tessdata directory of the installation, and put HanS.traineddata and HanS_vert.traineddata from the script folder into tessdata\script. I still don't quite understand the relationship between the script files and the ones outside.
I tried it and the results are decent. English works noticeably better than Chinese; Chinese probably still suffers from too few contributions.
Usage is simple too.

tesseract imagename outputbase [-l lang] [-psm pagesegmode] [configfile...]
tesseract myscan.png out
tesseract myscan.png out -l deu
tesseract myscan.png out -l eng+deu
tesseract myscan.png out -l chi_sim hocr
tesseract myscan.png out pdf

Among these, hocr produces an XML file that contains the coordinates of the recognized text, which should be quite suitable for automation.
I also tried the same content both ways: black text on a white background is recognized far better than white text on a black background.
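As a rough sketch of that automation idea (hedged: it assumes the tesseract binary and the chi_sim data are installed and on PATH, the image name is made up, and the hOCR is parsed with a quick regex rather than a real XML parser):

import re
import subprocess

# ask tesseract for hOCR output; this writes out.hocr into the current directory
subprocess.run(['tesseract', 'myscan.png', 'out', '-l', 'chi_sim', 'hocr'], check=True)

# hOCR stores word boxes like: <span class='ocrx_word' ... title='bbox x0 y0 x1 y1; x_wconf NN'>text</span>
word = re.compile(r"<span class='ocrx_word'[^>]*title='bbox (\d+) (\d+) (\d+) (\d+)[^']*'[^>]*>([^<]*)</span>")

with open('out.hocr', encoding='utf-8') as f:
    for x0, y0, x1, y1, text in word.findall(f.read()):
        if text.strip():
            print(text.strip(), (int(x0), int(y0), int(x1), int(y1)))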

x = {
  a: 1,
  b: function () { return this.a }, // OK: this refers to x
  b2() { return this.a },           // OK: the shorthand form of the method above (renamed here to avoid a duplicate key)
  c: () => { return this.a }        // Wrong: an arrow function has no this of its own; it takes this from the enclosing (module/global) scope, not from x
}

I had read so many unofficial write-ups that I only got dizzy, until today I decided to read MDN instead.
Using Promises
The Promise constructor

const p = function(param){
  return new Promise((callItWhenResolved, callItWhenRejected)=>{
    if(doSomethingOk){callItWhenResolved(resolvedResultAsParam)}
    else{callItWhenRejected(rejectedResultAsParam)}
  })
}
p(Real_param).then(function callItWhenResolved(resolvedResultAsParam){})

So a function built on Promise is, concretely, a function that returns a Promise object. Once the function's main work is done, inside the Promise constructor it calls the resolved() function or the rejected() function depending on the outcome. These two callbacks are the ones you define in p().then().

async/await is just syntactic sugar over Promise.

async function af(){
  function callItWhenResolved(resolvedResultAsParam){}
  callItWhenResolved(await p(Real_param))
}

This can also be condensed to:

async function af(){
  (function callItWhenResolved(resolvedResultAsParam){})(await p(Real_param))
}

If callItWhenResolved is already defined, you can of course skip defining it, for example with console.log:

async function af(){
  console.log(await p(Real_param)) 
}

So: await aPromiseObject gives you the value the promise resolves with, resolvedResultAsParam. That value is passed into callItWhenResolved as its argument, and callItWhenResolved runs.
The really confusing part is this: the argument of the callback callItWhenResolved looks like the value that await is waiting for from aPromiseObject, but when the Promise object was constructed there was never a "return resolvedResultAsParam"; the value is delivered by calling callItWhenResolved with it as the actual argument, callItWhenResolved(resolvedResultAsParam). Producing an "output" by passing it in as an input argument is what makes this so dizzying.
That is why an async function simply uses "return value" at the end of the function body to replace this callback-with-an-argument style, which makes the logic much clearer. What it returns is effectively equivalent to return new Promise((resolved)=>{resolved(value)}).
And what await does is unwrap the Promise, turning a promise object into its resolved value: "await asyncFunction()" corresponds to calling .then() on the Promise returned by asyncFunction, and "someFunction(await asyncFunction())" corresponds to calling .then(someFunction) on that Promise.
There is an important difference between the two:

someFunction(await asyncFunction())                // await can only be used inside an async function
PromiseReturnedByAsyncFunction.then(someFunction)  // can also be used in ordinary functions, so it applies more broadly