Showing results for tags 'scripting'.

Found 5 results

  1. """ Port-/FTP-Scanner """ import socket import threading import sys import time import argparse def CheckIfOpen(ip,port): target = (ip,int(port)) try: socket.create_connection(target,1.5) open('open','a').write(ip+":"+str(port)+"\n") print("Port: "+str(port)+" open on IP: "+ip+"!\n") except: print("Port: "+str(port)+" closed on IP: "+ip+"!\n") def CheckIfPub(target): try: server = (target,21) user = "USER anonymous\r\n" pwd = "PASS anonymous\r\n" sock = socket.socket() sock.connect(server) sock.recv(4096) sock.sendall(user.encode()) if "331" in sock.recv(4096).decode('utf-8'): sock.sendall(pwd.encode()) answer = sock.recv(4096).decode('utf-8') if "230" in answer: open('found_ftp','a').write(target+":21 anonymous:anonymous\n") sock.recv(4096) sock.close() elif "530" in answer: sock.close() else: sock.close() except: print("Login failed!\n") pass parser = argparse.ArgumentParser() parser.add_argument("scan", type=str,choices=["ftp","port"],help="decide whether to scan for pubs or ports.") parser.add_argument("ranges",type=str,help="specify the file containing the ranges to scan, format: 123.123.123.123 123.123.123.123") parser.add_argument("-t","--threads",type=int,help="specify amount of threads, else the default will be 100.") parser.add_argument("-p","--ports",type=str,help="specify the port or ports to scan if you decided to scan for open ports. 
format: port1,port2,port3,...") args = parser.parse_args() if args.threads: threads = args.threads else: threads = 100 ranges = open(args.ranges).read().splitlines() for ipranges in ranges: chain = ipranges.split(" ") start = chain[0].split(".") end = chain[1].split(".") if int(end[3]) != 255: end[3] = int(end[3])+1 else: if int(end[2]) != 255: end[2] = int(end[2])+1 end[3] = 0 else: if int(end[1]) != 255: end[1] = int(end[1])+1 end[2] = 0 end[3] = 0 else: if int(end[0]) != 255: end[0] = int(end[0])+1 end[1] = 0 end[2] = 0 end[3] = 0 end = str(end[0])+"."+str(end[1])+"."+str(end[2])+"."+str(end[3]) current = str(start[0])+"."+str(start[1])+"."+str(start[2])+"."+str(start[3]) if args.scan == "port": try: ports = args.ports.split(",") except: ports = args.ports elif args.scan == "pub": ports = 21 while(current != end): for port in ports: if threading.active_count() <= threads: if args.scan == "port": T = threading.Thread(target=CheckIfOpen,args=(current,int(port),)) elif args.scan == "ftp": T = threading.Thread(target=CheckIfPub,args=(current,)) T.start() else: time.sleep(0.2) if args.scan == "port": T = threading.Thread(target=CheckIfOpen,args=(current,int(port),)) elif args.scan == "ftp": T = threading.Thread(target=CheckIfPub,args=(current,)) T.start() progress = current.split(".") if int(progress[3]) != 255: progress[3] = int(progress[3])+1 else: if int(progress[2]) != 255: progress[2] = int(progress[2])+1 progress[3] = 0 else: if int(end[1]) != 255: progress[1] = int(progress[1])+1 progress[2] = 0 progress[3] = 0 else: if int(progress[0]) != 255: progress[0] = int(progress[0])+1 progress[1] = 0 progress[2] = 0 progress[3] = 0 current = str(progress[0])+"."+str(progress[1])+"."+str(progress[2])+"."+str(progress[3]) open('current_ip','w').write(current) T.join() print("Scan finished!\n") exit()
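The manual octet-carrying logic above can also be done with the stdlib `ipaddress` module, which converts dotted-quad addresses to and from integers. A minimal sketch (addresses are illustrative):

```python
import ipaddress

def iter_range(start, end):
    """Yield every IPv4 address from start to end inclusive."""
    first = int(ipaddress.IPv4Address(start))
    last = int(ipaddress.IPv4Address(end))
    for n in range(first, last + 1):
        yield str(ipaddress.IPv4Address(n))

# crossing an octet boundary is handled automatically
print(list(iter_range("10.0.0.254", "10.0.1.1")))
# → ['10.0.0.254', '10.0.0.255', '10.0.1.0', '10.0.1.1']
```

Besides being shorter, this also validates each address and removes the need for an exclusive end marker.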
  2. Hey Guys! I am Venom! 🐍 Today I am going to share a simple Python script for finding open redirects in a given list of sites!

     import requests  # requests library for making GET requests to each website
     import os        # os library for checking whether the input file exists

     # payloads for checking open redirection
     openRedirection = ["/http://example.com", "/%5cexample.com", "/%2f%2fexample.com",
                        "/example.com/%2f%2e%2e", "/http:/example.com",
                        "/?url=http://example.com&next=http://example.com&redirect=http://example.com&redir=http://example.com&rurl=http://example.com",
                        "/?url=//example.com&next=//example.com&redirect=//example.com&redir=//example.com&rurl=//example.com",
                        "/?url=/\/example.com&next=/\/example.com&redirect=/\/example.com",
                        "/redirect?url=http://example.com&next=http://example.com&redirect=http://example.com&redir=http://example.com&rurl=http://example.com",
                        "/redirect?url=//example.com&next=//example.com&redirect=//example.com&redir=//example.com&rurl=//example.com",
                        "/redirect?url=/\/example.com&next=/\/example.com&redirect=/\/example.com&redir=/\/example.com&rurl=/\/example.com",
                        "/.example.com", "///\;@example.com", "///example.com/",
                        "///example.com", "///example.com/%2f..", "/////example.com/",
                        "/////example.com"]

     file = input("[+] Enter the file path: ")   # taking the file path as user input
     if os.path.exists(file):                    # checking whether the file exists
         print("[+] File Found: True!")
         op = open(file, 'r')                    # opening the file in read mode
         read = op.readlines()                   # reading the file into a list of sites
         print("[+] Total Sites: " + str(len(read)))
         for sites in read:
             site = sites.replace("\n", "")      # stripping the newline character
             for payload in openRedirection:
                 url = site + payload            # final URL = target site + payload
                 try:
                     # follow redirects so response.history records any hops
                     response = requests.get(url, allow_redirects=True)
                     if response.history == []:  # no redirect happened
                         print("[+] " + url + " [Not Vulnerable]")
                     else:                       # at least one redirect happened
                         out = open("output.txt", 'a')  # output file in append mode
                         out.write(url + "\n")          # log the vulnerable URL
                         print("[+] " + url + " [Vulnerable]")
                         out.close()
                 except requests.exceptions.RequestException:
                     pass                        # ignore connection errors and move on
     else:
         print("[+] File not found! Please try again with the correct file path!")

     Thanks for reading the post!
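One caveat about the check above: a non-empty `response.history` only proves that *some* redirect happened, not that it landed on the injected host, so ordinary redirects (e.g. to a login page) get flagged as vulnerable. A hedged sketch of a stricter follow-up check, using only the stdlib (host names are illustrative):

```python
from urllib.parse import urlparse

def lands_offsite(final_url, injected_host="example.com"):
    # vulnerable only if the final hop actually resolves to the injected host
    host = urlparse(final_url).netloc
    return host == injected_host or host.endswith("." + injected_host)

print(lands_offsite("http://example.com/landing"))   # → True
print(lands_offsite("https://target.tld/login"))     # → False
```

With the script above you would call it as `lands_offsite(response.url)` after the request, since `response.url` is the URL of the final hop.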
  3. How important is Python for bug hunting, pentesting and scripting? Watch this video to find out. 🤙💯🆒 Credit - @Pentest With Rohit
  4. Hey Guys! I am Venom! Today I am going to share some basic methods and functions of the requests library! If you want to know more about it, please comment and let me know! 🙂

     import requests  # importing the library

     url = "https://forum.shellcrew.org"
     headers = {"Host": "forum.shellcrew.org", "Origin": "shellcrew.org",
                "Referer": "venomgrills.com", "Cookie": "somerandomcookies",
                "Accept": "application/json", "Content-Type": "application/json"}
     data = {"username": "admin", "password": "admin"}
     auth = ("venom", "fuck0ff")  # basic auth takes a (user, password) tuple, not a dict

     response = requests.get(url)                   # making a plain GET request
     response = requests.get(url, auth=auth)        # GET with basic auth
     response = requests.get(url, headers=headers)  # GET with custom headers
     response = requests.get(url, headers=headers, allow_redirects=True)  # GET following redirects
     response = requests.post(url, headers=headers, data=data)  # POST with data and headers
     response = requests.put(url, headers=headers, data=data)   # PUT with data and headers
     response = requests.get(url, timeout=0.5)      # GET with a timeout (in seconds)

     statusCode = response.status_code  # status code of the response
     header = response.headers          # headers of the response
     cookies = response.cookies         # cookies of the response
     history = response.history         # redirect history of the response
     encoding = response.encoding       # encoding of the response
     content = response.content         # raw body of the response (HTML tags, tabs, etc.)

     # Errors and exceptions in the requests library
     try:
         response = requests.get(url)
     except requests.exceptions.ConnectionError:
         print("Connection error")  # the website is down or unreachable
     try:
         response = requests.get(url)
     except requests.exceptions.SSLError:
         print("The website has an invalid or expired SSL certificate!")
     try:
         response = requests.get(url)
     except requests.exceptions.InvalidHeader:
         print("Invalid header given (*_*)")

     If you want more detailed info about the requests library, please comment down for the second part!! :)
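One detail worth knowing about the exceptions listed above: they all derive from requests.exceptions.RequestException, so a single handler can catch every network-level failure at once instead of one try block per error type. A minimal sketch (the URL is illustrative):

```python
import requests

def safe_get(url):
    """Return the response, or None on any network-level failure."""
    try:
        return requests.get(url, timeout=5)
    except requests.exceptions.RequestException as e:
        # ConnectionError, SSLError, Timeout, InvalidHeader all land here
        print("Request failed: " + type(e).__name__)
        return None

# .invalid is reserved (RFC 2606), so this always fails to resolve
safe_get("http://nonexistent.invalid/")
```

Catch the narrower subclasses first only when you need to react to them differently.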
  5. Hii guys, I'm Venom! Today we will see a short Python 3 script for a web fuzzer. I hope you will learn something new out of it. So let's roll.

     # importing libraries
     import requests  # for making requests to the website
     import os        # for checking the wordlist path

     url = input('Enter the url: ')                 # taking the URL as user input
     wordlist = input('Enter the wordlist path: ')  # taking the wordlist path as user input

     # checking whether the wordlist exists
     if os.path.exists(wordlist) is False:
         print("Wordlist not found! Please try again!")
     else:
         print("Wordlist Found: True!")
         op = open(wordlist, 'r')  # opening the wordlist file
         read = op.readlines()     # reading the lines inside the file
         for path in read:         # grabbing each line of the wordlist
             # strip the trailing newline so it does not end up inside the URL
             finalurl = url + "/" + path.strip()  # final URL = target URL + path
             response = requests.get(finalurl)    # GET request to check the status code
             if response.status_code == 200 or response.status_code == 403:
                 print(finalurl + " [" + str(response.status_code) + "]")

     fuzzer_2-Veed.mp4
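The URL-building step of a fuzzer like this can be factored into a small pure function that strips whitespace, skips blank lines, and avoids double slashes, which makes it easy to test without any network traffic. A sketch (the target URL is hypothetical):

```python
def build_urls(base, words):
    # strip whitespace/newlines from each word and skip blanks
    # (assumes an http(s) base URL; not part of the original script)
    base = base.rstrip("/")
    return [base + "/" + w.strip() for w in words if w.strip()]

urls = build_urls("https://target.tld/", ["admin\n", "login", "\n", ".git\n"])
print(urls)
# → ['https://target.tld/admin', 'https://target.tld/login', 'https://target.tld/.git']
```

The fuzzing loop then becomes a plain `for finalurl in build_urls(url, read):` over the cleaned list.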