vulnhuntr: A Vulnerability Scanning and Analysis Tool Based on Large Language Models and Static Code Analysis

2024-11-22

About vulnhuntr

vulnhuntr is a security vulnerability scanning and analysis tool built on large language models (LLMs) and static code analysis; it can fairly be called the world's first vulnerability scanner with autonomous AI capabilities.

Vulnhuntr leverages the power of LLMs to automatically construct and analyze entire code call chains, starting from remote user input and ending at server output, in order to detect complex, multi-step, high-impact vulnerabilities that lie far beyond the reach of traditional static code analysis tools.
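To make the idea concrete, below is a minimal, hypothetical Flask handler (not taken from any real project vulnhuntr has analyzed) showing the kind of multi-function source-to-sink chain the tool is designed to trace: remote user input enters in one function, crosses a helper-function boundary, and reaches a file-read sink, producing an LFI.

from flask import Flask, request, send_file

app = Flask(__name__)

def resolve_report_path(name):
    # The user-controlled value crosses a function boundary here; chains
    # like this are what single-function static scanners tend to lose.
    return "/var/reports/" + name

@app.route("/report")
def report():
    name = request.args.get("name", "")       # source: remote user input
    # Sink: the concatenated path is opened without normalization, so a
    # value like "../../etc/passwd" reads an arbitrary file (LFI).
    return send_file(resolve_report_path(name))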

Features

The current version of vulnhuntr supports detecting and identifying the following vulnerability classes:

1. Local File Inclusion (LFI)

2. Arbitrary File Overwrite (AFO)

3. Remote Code Execution (RCE)

4. Cross-Site Scripting (XSS)

5. SQL Injection (SQLI)

6. Server-Side Request Forgery (SSRF)

7. Insecure Direct Object Reference (IDOR)

Execution Logic

At a high level, vulnhuntr starts from files that handle remote user input, asks the LLM for an initial vulnerability assessment, and then iteratively requests the definitions of functions and classes along the call chain until it can trace a complete path from user input to server output. The final report contains a reasoning scratchpad, an analysis, a proof of concept, and a confidence score, as shown in the sample output below.

Requirements

Python v3.10

Installation

Because the tool is developed in Python 3.10, you first need to install and configure a Python 3.10 environment on your machine. We recommend using pipx or Docker to install and run Vulnhuntr with minimal friction.

Installing with Docker

docker build -t vulnhuntr https://github.com/protectai/vulnhuntr.git#main

Installing with pipx

pipx install git+https://github.com/protectai/vulnhuntr.git --python python3.10

Installing from source

git clone https://github.com/protectai/vulnhuntr

cd vulnhuntr && poetry install
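After the source install, the CLI can be run inside Poetry's virtual environment. Assuming the project defines a vulnhuntr console script (which the pipx install above implies), the following should print the help text shown in the next section:

poetry run vulnhuntr -h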

Usage

usage: vulnhuntr [-h] -r ROOT [-a ANALYZE] [-l {claude,gpt,ollama}] [-v]

 

Analyze a GitHub project for vulnerabilities. Export your ANTHROPIC_API_KEY/OPENAI_API_KEY before running.

 

options:
  -h, --help            show this help message and exit
  -r ROOT, --root ROOT  Path to the root directory of the project
  -a ANALYZE, --analyze ANALYZE
                        Specific path or file within the project to analyze
  -l {claude,gpt,ollama}, --llm {claude,gpt,ollama}
                        LLM client to use (default: claude)
  -v, --verbosity       Increase output verbosity (-v for INFO, -vv for DEBUG)

Usage Examples

Analyze an entire repository using Claude:

export ANTHROPIC_API_KEY="sk-1234"

vulnhuntr -r /path/to/target/repo/

Analyze a single file using GPT-4o (a subdirectory can also be specified instead of a file):

export OPENAI_API_KEY="sk-1234"

vulnhuntr -r /path/to/target/repo/ -a server.py -l gpt

Run with Claude and a custom endpoint to analyze /local/path/to/target/repo/repo-subfolder/target-file.py:

docker run --rm -e ANTHROPIC_API_KEY=sk-1234 -e ANTHROPIC_BASE_URL=https://localhost:1234/api -v /local/path/to/target/repo:/repo vulnhuntr:latest -r /repo -a repo-subfolder/target-file.py

Sample Output

scratchpad:
  1. Analyzing the add_llm function in llm_app.py. 2. Identified user input used as keys to access dictionaries: EmbeddingModel, ChatModel, RerankModel, CvModel, and TTSModel. 3. These dictionaries contain class references, which are instantiated with user-supplied parameters. 4. The factory variable is directly used from user input (req['llm_factory']) without proper validation. 5. The llm_name is also derived from user input in most cases. 6. The instantiation of these classes occurs in try-except blocks, but exceptions are only used for error messages, not preventing execution. 7. The api_key and other sensitive information are passed directly to these instantiated classes. 8. There's no comprehensive input validation or sanitization for the factory or llm_name variables. 9. The vulnerability is in a route decorated with @login_required, indicating it's accessible to authenticated users. 10. Reviewing the EmbeddingModel, ChatModel, RerankModel, CvModel, and TTSModel dictionaries to understand their contents and potential for exploitation.

----------------------------------------

 

analysis:
  The add_llm function in llm_app.py contains a critical Remote Code Execution (RCE) vulnerability. The function uses user-supplied input (req['llm_factory'] and req['llm_name']) to dynamically instantiate classes from the EmbeddingModel, ChatModel, RerankModel, CvModel, and TTSModel dictionaries. This pattern of using user input as a key to access and instantiate classes is inherently dangerous, as it allows an attacker to potentially execute arbitrary code. The vulnerability is exacerbated by the lack of comprehensive input validation or sanitization on these user-supplied values. While there are some checks for specific factory types, they are not exhaustive and can be bypassed. An attacker could potentially provide a malicious value for 'llm_factory' that, when used as an index to these model dictionaries, results in the execution of arbitrary code. The vulnerability is particularly severe because it occurs in a route decorated with @login_required, suggesting it's accessible to authenticated users, which might give a false sense of security.

----------------------------------------

 

poc:
  POST /add_llm HTTP/1.1
  Host: target.com
  Content-Type: application/json
  Authorization: Bearer <valid_token>

  {
      "llm_factory": "__import__('os').system",
      "llm_name": "id",
      "model_type": "EMBEDDING",
      "api_key": "dummy_key"
  }

  This payload attempts to exploit the vulnerability by setting 'llm_factory' to a string that, when evaluated, imports the os module and calls system. The 'llm_name' is set to 'id', which would be executed as a system command if the exploit is successful.

----------------------------------------

 

confidence_score:
  8

----------------------------------------

 

vulnerability_types:
  - RCE

----------------------------------------
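To make the reported pattern easier to picture, here is a minimal, hypothetical sketch of the code shape the analysis describes. The names (add_llm, EmbeddingModel, req['llm_factory']) mirror the report, but the code is illustrative only, not the actual vulnerable source; the real route also sits behind @login_required.

from flask import Flask, request, jsonify

app = Flask(__name__)

class OpenAIEmbed:
    # Stand-in for a real embedding-model wrapper class.
    def __init__(self, key, model_name):
        self.key, self.model_name = key, model_name

# Dictionary of class references, as described in the analysis.
EmbeddingModel = {"OpenAI": OpenAIEmbed}

@app.route("/add_llm", methods=["POST"])
def add_llm():
    req = request.json
    factory = req["llm_factory"]   # user input used directly as a dict key
    # The dangerous pattern: user input selects the class, and user input
    # supplies its constructor arguments, with no allow-list validation.
    # Whether this is exploitable depends on what the dictionaries contain
    # and how lookup failures are handled, which is what vulnhuntr probes.
    mdl = EmbeddingModel[factory](key=req["api_key"],
                                  model_name=req["llm_name"])
    return jsonify({"status": "ok"})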

License

This project is developed and released under the AGPL-3.0 open source license.

Project Link

vulnhuntr: https://github.com/protectai/vulnhuntr

References

https://protectai.com/threat-research/vulnhuntr-first-0-day-vulnerabilities

https://huntr.com/

