Capturing browser requests with mitmproxy in Python


I have recently been writing a passive vulnerability scanner. A passive scanner captures the requests the browser sends while you browse and hands them to the scanner for processing. I originally planned to write the proxy myself, but since it also has to intercept HTTPS traffic, I ended up using mitmproxy.

Installation:

pip install mitmproxy
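
mitmproxy writes its CA certificate to ~/.mitmproxy the first time it runs, and the browser has to trust that certificate before HTTPS traffic can be decrypted. A quick way to check the certificate is in place (a minimal sketch; the path is mitmproxy's default cadir, the same one sproxy.py configures below):

import os.path

# mitmproxy generates this file on first run (default cadir)
ca_cert = os.path.expanduser("~/.mitmproxy/mitmproxy-ca-cert.pem")
if os.path.exists(ca_cert):
    print("CA certificate found - import it into the browser to capture HTTPS")
else:
    print("Run mitmproxy once to generate the CA certificate")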

The following example program shows how it is used. Here is the directory structure:

sproxy
|-- utils
|   |-- __init__.py
|   `-- parser.py
`-- sproxy.py

sproxy.py:

#coding=utf-8

from pprint import pprint
from mitmproxy import flow, proxy, controller, options
from mitmproxy.proxy.server import ProxyServer
from utils.parser import ResponseParser

# static resource file extensions
static_ext = ['js', 'css', 'ico', 'jpg', 'png', 'gif', 'jpeg', 'bmp']
# media resource MIME types
media_types = ['image', 'video', 'audio']

# URL keywords to skip
url_filter = ['baidu', '360', 'qq.com']

static_files = [
    'text/css',
    'image/jpeg',
    'image/gif',
    'image/png',
]

class WYProxy(flow.FlowMaster):

    def __init__(self, opts, server, state):
        super(WYProxy, self).__init__(opts, server, state)

    def run(self):
        try:
            pprint("proxy started successfully...")
            flow.FlowMaster.run(self)
        except KeyboardInterrupt:
            pprint("Ctrl C - stopping proxy")
            self.shutdown()

    def get_extension(self, flow):
        # return the (truncated) file extension of the request path, if any
        if not flow.request.path_components:
            return ''
        end_path = flow.request.path_components[-1]
        split_ext = end_path.split('.')
        if not split_ext or len(split_ext) == 1:
            return ''
        return split_ext[-1][:32]

    def capture_pass(self, flow):
        """Return True if this flow should NOT be captured."""
        # skip whitelisted URLs
        url = flow.request.url
        for keyword in url_filter:
            if keyword in url:
                return True

        # skip static resources by file extension
        extension = self.get_extension(flow)
        if extension in static_ext:
            return True

        # skip media and static content types
        content_type = flow.response.headers.get('Content-Type', '').split(';')[0]
        if not content_type:
            return False

        if content_type in static_files:
            return True

        http_mime_type = content_type.split('/')[0]
        return http_mime_type in media_types

    @controller.handler
    def request(self, f):
        pass

    @controller.handler
    def response(self, f):
        try:
            if not self.capture_pass(f):
                parser = ResponseParser(f)
                result = parser.parser_data()
                if f.request.method == "GET":
                    print(result['url'])
                elif f.request.method == "POST":
                    print(result['request_content'])  # POST parameters
        except Exception as e:
            raise e

    @controller.handler
    def error(self, f):
        # print("error", f)
        pass

    @controller.handler
    def log(self, l):
        # print("log", l.msg)
        pass

def start_server(proxy_port, proxy_mode):
    port = int(proxy_port) if proxy_port else 8090
    mode = proxy_mode if proxy_mode else 'regular'

    if proxy_mode == 'http':
        mode = 'regular'

    opts = options.Options(
        listen_port=port,
        mode=mode,
        cadir="~/.mitmproxy/",
    )
    config = proxy.ProxyConfig(opts)
    state = flow.State()
    server = ProxyServer(config)
    m = WYProxy(opts, server, state)
    m.run()


if __name__ == '__main__':
    start_server("8090", "http")
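
Run the proxy with python sproxy.py, point the browser's HTTP and HTTPS proxy settings at 127.0.0.1:8090, and install the CA certificate (visiting http://mitm.it through the proxy serves it). Note that this script targets the legacy flow.FlowMaster API, which was removed in later mitmproxy releases; on a current mitmproxy (4.x and newer) the same capture logic can be written as an addon script and run under mitmdump. A rough sketch of that equivalent, with the filtering abbreviated (the file name is made up; only the url_filter check is carried over):

# capture_addon.py - modern addon-style equivalent of WYProxy's response hook
# run with: mitmdump -s capture_addon.py -p 8090
from mitmproxy import http

url_filter = ['baidu', '360', 'qq.com']

def response(flow: http.HTTPFlow) -> None:
    # skip whitelisted hosts, mirroring capture_pass()
    if any(keyword in flow.request.url for keyword in url_filter):
        return
    if flow.request.method == "GET":
        print(flow.request.url)
    elif flow.request.method == "POST":
        print(flow.request.content)  # POST body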

parser.py:

import re

class ResponseParser(object):
    """Extract the fields of interest from a captured flow."""

    def __init__(self, f):
        super(ResponseParser, self).__init__()
        self.flow = f

    def parser_data(self):
        result = dict()
        result['url'] = self.flow.request.url
        result['path'] = '/{}'.format('/'.join(self.flow.request.path_components))
        result['host'] = self.flow.request.host
        result['port'] = self.flow.request.port
        result['scheme'] = self.flow.request.scheme
        result['method'] = self.flow.request.method
        result['status_code'] = self.flow.response.status_code
        result['content_length'] = int(self.flow.response.headers.get('Content-Length', 0))
        result['request_header'] = self.parser_header(self.flow.request.headers)
        result['request_content'] = self.flow.request.content
        return result

    @staticmethod
    def parser_multipart(content):
        # flatten multipart form fields into a query-string form
        if isinstance(content, str):
            res = re.findall(r'name=\"(\w+)\"\r\n\r\n(\w+)', content)
            if res:
                return "&".join([k + '=' + v for k, v in res])
        return ""

    @staticmethod
    def parser_header(header):
        headers = {}
        for key, value in header.items():
            headers[key] = value
        return headers

    @staticmethod
    def decode_response_text(content):
        # try common encodings in turn; return the raw content if none fit
        for charset in ['UTF-8', 'GB2312', 'GBK', 'iso-8859-1', 'big5']:
            try:
                return content.decode(charset)
            except UnicodeDecodeError:
                continue
        return content
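
The static helpers can be exercised on their own, which makes the fallback behaviour easy to see (an illustration with made-up inputs):

# GBK bytes fail the UTF-8 attempt and decode on the GB2312 fallback
print(ResponseParser.decode_response_text(u'中文'.encode('gbk')))  # 中文

# multipart fields are flattened into a query-string form
body = 'name="user"\r\n\r\nadmin\r\nname="pwd"\r\n\r\n123456'
print(ResponseParser.parser_multipart(body))  # user=admin&pwd=123456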

Reference:

https://github.com/ring04h/wyproxy

