Showing results for tags 'script'.

Found 5 results

  1. #!/usr/bin/python
     # pyRangeScanner - scan an IP range (or a file of ranges) for open ports, with threads.
     import time, socket, struct, threading, sys, os

     class file():
         """ Line-by-line reader for the range file """
         def __init__(self, file):
             self.file = open(file, "r")

         def next_line(self):
             try:
                 next_line = next(self.file).rstrip()  # Python 3: next(fh) instead of fh.next()
             except StopIteration:
                 next_line = False
             self.act_line = next_line
             return self.act_line

         def actual_line(self):
             return self.act_line

     class class_tools():
         def __init__(self):
             pass

         def init_iprange(self, start, end):
             """ Init iprange, must be called before iprange_nextip! """
             self.var_iprange_start = start
             self.var_iprange_end = end
             self.var_iprange_now = None

         def iprange_nextip(self):
             """ Calc the next ip for the range defined with init_iprange.
                 Returns False once the range is finished. """
             if self.var_iprange_now == None:
                 self.var_iprange_now = self.var_iprange_start
             elif self.var_iprange_now != self.var_iprange_end:
                 self.var_iprange_now = self.ip2nextip(self.var_iprange_now)
             else:
                 return False
             return self.var_iprange_now

         def ip2nextip(self, ip):
             """ Calc the next ip """
             long_ip = self.ip2long(ip)
             long_ip += 1
             next_ip = self.long2ip(long_ip)
             return next_ip

         def ip2long(self, ip):
             """ Convert an ip to a long """
             packed = socket.inet_aton(ip)
             return struct.unpack("!L", packed)[0]

         def long2ip(self, n):
             """ Convert a long to an ip """
             unpacked = struct.pack('!L', n)
             return socket.inet_ntoa(unpacked)

         def ipportopen(self, host, port, timeout=10):
             """ Check if a port is open; returns [True] or [False, error message] """
             s = socket.socket()
             s.settimeout(timeout)
             try:
                 s.connect((host, port))
             except socket.error as e:
                 return [False, str(e)]
             s.close()  # close the socket so threads don't leak file descriptors
             return [True]

         def logging(self, file, value):
             """ Append value to file """
             log_file = open(file, "a")
             log_file.write(value + "\r\n")
             log_file.close()

         def range_line_struct(self, line):
             """ Structure/parse a range line ("start_ip end_ip") """
             tmp_line = line.split(" ")
             # To do:
             # - add regex to match if valid ip
             if len(tmp_line) != 2:
                 return False
             else:
                 return [tmp_line[0], tmp_line[1]]

     def createDaemon():
         """ Classic double-fork daemonization (not called by the CLI below). """
         try:
             # Fork a child so the parent can exit; the child is guaranteed not to be
             # a process group leader, which the next os.setsid() call requires.
             pid = os.fork()
         except OSError as e:
             raise Exception("%s [%d]" % (e.strerror, e.errno))
         if (pid == 0):  # The first child.
             # Become session leader of a new session and process group leader of a
             # new process group, with no controlling terminal.
             os.setsid()
             # Ignoring SIGHUP before the second fork is often suggested, but the
             # second child is never STOPPED, so it will not be sent SIGHUP when the
             # first child exits; ignoring it is harmless but unnecessary here.
             # import signal
             # signal.signal(signal.SIGHUP, signal.SIG_IGN)
             try:
                 # Fork a second child and exit the first immediately to prevent
                 # zombies and to guarantee the daemon is no longer a session leader,
                 # so it can never acquire a controlling terminal (System V).
                 pid = os.fork()  # Fork a second child.
             except OSError as e:
                 raise Exception("%s [%d]" % (e.strerror, e.errno))
             if (pid == 0):  # The second child.
                 # os.chdir(WORKDIR)  EDIT by Whyned: Not important for me ;)
                 # os.umask(UMASK)    EDIT by Whyned: Not important for me ;)
                 pass
             else:
                 # _exit() skips atexit handlers and avoids flushing stdio twice, so
                 # child branches of a fork should use it instead of exit().
                 os._exit(0)  # Exit parent (the first child) of the second child.
         else:
             os._exit(0)  # Exit parent of the first child.

     def func_help():
         print(("""
     :: pyRangeScanner v%s ::

     With this tool you can scan a range for (multiple) open port(s).
     It can handle a single range or a file with multiple ranges, and it supports threads.

     :: HELP ::
     .py -r range_start range_end ports threads [timeout]
     .py -rf range_file ports threads [timeout]
     ports = 80, or for multiple ports 80,8080,81...
     Default Timeout = %s

     :: EXAMPLE ::
     .py -r 127.0.0.0 127.0.1.0 80,8080,22 20 10
     .py -rf xyz.txt 80,8080,22 20 10

     :: EXAMPLE RANGE FILE ::
     127.0.0.0 127.0.1.0
     125.1.1.0 125.2.0.0
     ...

     :: GREETS ::
     Greets fly out to: Team DDR, Team WTC, BWC, Inferno-Load, B2R, Datenreiter,
     Burnz, Gil, LeChuck, Bebop, Fr0sty, Gnu, Airy, FaKe, Generation, Shizuko,
     leety and all i forget!
     """ % (__info__['version'], __info__['def_timeout'])))

     def func_portcheck(ip, port, timeout):
         """ Handle the return from tools.ipportopen and log to file """
         log_result = "result.txt"
         log_failure = "log.txt"
         tmp_ip = tools.ipportopen(ip, port, timeout)
         sys.stdout.write("[*] Checking: %s %s\n" % (ip, port))
         if tmp_ip[0] != False:
             sys.stdout.write("[+] %s Port %s Open!\n" % (ip, port))
             tools.logging(log_result, "%s:%s" % (ip, port))
         else:
             sys.stdout.write("[-] %s Port %s %s\n" % (ip, port, tmp_ip[1]))
             tools.logging(log_failure, "%s:%s %s" % (ip, port, tmp_ip[1]))

     def func_portcheckv1(ip, port, timeout):
         """ Handle the return from tools.ipportopen and log to file.
             port must be a list! """
         log_result = "result.txt"
         log_failure = "log.txt"
         timeout = int(timeout)
         for tmp_port in port:
             tmp_port = int(tmp_port)
             tmp_ip = tools.ipportopen(ip, tmp_port, timeout)
             sys.stdout.write("[*] Checking: %s %s\n" % (ip, tmp_port))
             if tmp_ip[0] != False:
                 sys.stdout.write("[+] %s Port %s Open!\n" % (ip, tmp_port))
                 tools.logging(log_result, "%s:%s" % (ip, tmp_port))
             else:
                 sys.stdout.write("[-] %s Port %s %s\n" % (ip, tmp_port, tmp_ip[1]))
                 tools.logging(log_failure, "%s:%s %s" % (ip, tmp_port, tmp_ip[1]))
                 if tmp_ip[1] == "timed out" or tmp_ip[1] == "[Errno 101] Network is unreachable":
                     # Delete this if you want to check all ports
                     sys.stdout.write("[-] Skipping other Ports from %s\n" % (ip))
                     break

     def main1(range_start, range_end, port, timeout):
         """ Check a range for an open port (single threaded) """
         tools.init_iprange(range_start, range_end)
         while True:
             next_ip = tools.iprange_nextip()
             if next_ip != False:
                 print(next_ip)
                 print((tools.ipportopen(next_ip, port, timeout=timeout)))
             else:
                 break

     def main2(range_start, range_end, port, timeout, threads):
         """ Check a range for an open port (multi threaded) """
         tools.init_iprange(range_start, range_end)
         while True:
             if threading.active_count() < threads:
                 next_ip = tools.iprange_nextip()
                 if next_ip != False:
                     thread = threading.Thread(target=func_portcheck, args=(next_ip, port, timeout,))
                     thread.start()
                 else:
                     break
         while threading.active_count() != 1:
             # Wait until all threads are finished.
             time.sleep(0.1)

     def main2v1(range_start, range_end, port, timeout, threads):
         """ Check a range for open port(s) (multi threaded).
             port must be a list! """
         threads = int(threads)
         tools.init_iprange(range_start, range_end)
         while True:
             if threading.active_count() <= threads:
                 print((threading.active_count(), threads))
                 next_ip = tools.iprange_nextip()
                 if next_ip != False:
                     thread = threading.Thread(target=func_portcheckv1, args=(next_ip, port, timeout,))
                     thread.start()
                 else:
                     break
         while threading.active_count() > 2:
             # Wait until all threads are finished.
             time.sleep(0.1)

     def main3(range_file, port, timeout, threads):
         """ Check ranges from a range file for an open port """
         range_file = file(range_file)
         while True:
             # Read range_file line by line
             line = range_file.next_line()
             if line == False:
                 break
             line_split = tools.range_line_struct(line)
             main2(line_split[0], line_split[1], port, timeout, threads)

     def main3v1(range_file, port, timeout, threads):
         """ Check ranges from a range file for multiple open ports.
             port must be a list! """
         range_file = file(range_file)
         while True:
             # Read range_file line by line
             line = range_file.next_line()
             if line == False:
                 break
             line_split = tools.range_line_struct(line)
             main2v1(line_split[0], line_split[1], port, timeout, threads)

     if __name__ == "__main__":
         global tools, __info__
         __info__ = {}
         __info__['version'] = "0.1"
         __info__['def_timeout'] = 10
         tools = class_tools()
         # main1("173.194.35.151", "173.194.35.160", 80, 2)
         # main2("173.194.35.151", "173.194.35.160", 81, 2, 10)
         # main3("/tmp/test.txt", 80, 2, 10)
         # main2v1("192.168.178.0", "192.168.179.0", [81, 80], 2, 10)
         # main3v1("/tmp/test.txt", [80, 8080, 21], 2, 20)
         print((len(sys.argv), sys.argv))
         if len(sys.argv) >= 5:
             if sys.argv[1] == "-rf":
                 if len(sys.argv) == 6:
                     # Use a range file and an explicit timeout:
                     # .py -rf rangefile port,port,port threads timeout
                     range_file = sys.argv[2]
                     port = sys.argv[3].split(",")
                     threads = int(sys.argv[4]) + 1
                     timeout = sys.argv[5]
                     main3v1(range_file, port, timeout, threads)
                 elif len(sys.argv) == 5:
                     # Use a range file with the default timeout:
                     # .py -rf rangefile port,port,port threads (timeout = default = 10)
                     range_file = sys.argv[2]
                     port = sys.argv[3].split(",")
                     threads = int(sys.argv[4]) + 1
                     timeout = __info__['def_timeout']
                     main3v1(range_file, port, timeout, threads)
                 else:
                     func_help()
             elif sys.argv[1] == "-r":
                 if len(sys.argv) == 7:
                     # Use a single range and an explicit timeout:
                     # .py -r range_start range_end port,port,port threads timeout
                     range_start = sys.argv[2]
                     range_end = sys.argv[3]
                     port = sys.argv[4].split(",")
                     threads = int(sys.argv[5]) + 1
                     timeout = sys.argv[6]
                     main2v1(range_start, range_end, port, timeout, threads)
                 elif len(sys.argv) == 6:
                     # Use a single range with the default timeout:
                     # .py -r range_start range_end port,port,port threads
                     range_start = sys.argv[2]
                     range_end = sys.argv[3]
                     port = sys.argv[4].split(",")
                     threads = int(sys.argv[5]) + 1
                     timeout = __info__['def_timeout']
                     main2v1(range_start, range_end, port, timeout, threads)
                 else:
                     func_help()
             else:
                 func_help()
         else:
             func_help()
  2. Hey Guys! I am Venom! I hope you are all doing fine! Today I am sharing a web crawler
     script written in Python, so let's begin!

     import requests                   # requests library for getting the source code from the site
     import re                         # re module for pulling the links out of the a tags
     from urllib.parse import urljoin  # urljoin to resolve relative urls against the page url
     from bs4 import BeautifulSoup     # bs4 to parse the code using the html parser

     target_links = []                      # to store the urls we have already crawled
     target = input("[+] Enter the url: ")  # taking the target url as user input

     def extract(tar):  # extract all the links from the page at tar
         try:           # try/except in case we get a status or http error
             response = requests.get(tar)  # bug fix: fetch the page we were given, not the start url
             soup = BeautifulSoup(response.content, 'html.parser')  # parse the content
             return re.findall('(?:href=")(.*?)"', str(soup))  # find urls using the regex pattern
         except:
             return []  # on any error return an empty list so the caller's loop still works

     def crawl(path):           # crawl one page and recurse into every new link
         links = extract(path)  # extract() gets all the links from the given page
         for link in links:     # format the urls and crawl them one by one
             url = urljoin(path, link)  # if the url is relative, join it with the current page
             if "#" in url:
                 url = url.split("#")[0]  # drop the fragment so the same page isn't crawled twice
             if link in url and url not in target_links:  # follow only urls we haven't seen yet
                 target_links.append(url)  # remember the url so we don't visit it again
                 print("[+] " + url)       # print the url we have found
                 crawl(url)                # crawl the new url recursively

     crawl(target)  # run the crawl function with the target url as the argument
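
     One design note, as an alternative sketch rather than part of the original script:
     because crawl() calls itself for every new link, a large site can exhaust Python's
     recursion limit. The same idea works iteratively with an explicit queue (same
     requests/BeautifulSoup stack assumed; the start_url prefix check is my assumption to
     keep the crawl on the target site):

     from collections import deque
     from urllib.parse import urljoin
     import requests
     from bs4 import BeautifulSoup

     def crawl_iterative(start_url):
         seen = set()
         queue = deque([start_url])
         while queue:
             page = queue.popleft()
             try:
                 soup = BeautifulSoup(requests.get(page, timeout=10).content, "html.parser")
             except requests.RequestException:
                 continue  # skip pages that fail to load
             for a in soup.find_all("a", href=True):  # bs4 can pull hrefs directly, no regex needed
                 url = urljoin(page, a["href"]).split("#")[0]  # resolve and drop the fragment
                 if url.startswith(start_url) and url not in seen:  # stay on the target site
                     seen.add(url)
                     print("[+] " + url)
                     queue.append(url)
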
  3. The project credit goes to @AS Hacker; I am posting on his behalf. In this Python code,
     with the help of turtle graphics, we are going to draw the national flag of India. 🙏
     It's a short, crisp project, and I hope you learn something new out of it. So, let's
     roll. Here's the source code with an explanation, so enjoy reading.

     import turtle

     # screen for output
     screen = turtle.Screen()

     # defining a turtle instance
     t = turtle.Turtle()
     t.speed(10)

     # start from the top-left corner of the flag
     t.penup()
     t.goto(-400, 250)
     t.pendown()

     # orange rectangle (top band)
     t.color("orange")
     t.begin_fill()
     t.forward(800)
     t.right(90)
     t.forward(167)
     t.right(90)
     t.forward(800)
     t.end_fill()

     # white band: just move down, the canvas background is already white
     t.left(90)
     t.forward(167)

     # green rectangle (bottom band)
     t.color("green")
     t.begin_fill()
     t.forward(167)
     t.left(90)
     t.forward(800)
     t.left(90)
     t.forward(167)
     t.end_fill()

     # big navy circle (the chakra ring)
     t.penup()
     t.goto(70, 0)
     t.pendown()
     t.color("navy")
     t.begin_fill()
     t.circle(70)
     t.end_fill()

     # big white circle inside it
     t.penup()
     t.goto(60, 0)
     t.pendown()
     t.color("white")
     t.begin_fill()
     t.circle(60)
     t.end_fill()

     # 24 mini navy circles around the rim
     t.penup()
     t.goto(-57, -8)
     t.pendown()
     t.color("navy")
     for i in range(24):
         t.begin_fill()
         t.circle(3)
         t.end_fill()
         t.penup()
         t.forward(15)
         t.right(15)
         t.pendown()

     # small navy circle in the center
     t.penup()
     t.goto(20, 0)
     t.pendown()
     t.begin_fill()
     t.circle(20)
     t.end_fill()

     # 24 spokes
     t.penup()
     t.goto(0, 0)
     t.pendown()
     t.pensize(2)
     for i in range(24):
         t.forward(60)
         t.backward(60)
         t.left(15)

     # hold the output window open
     turtle.done()

     Attachment: output-flag_FHroUYa7.mp4
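
     A small refactor idea (my addition, not from the original post): the colored bands are
     filled rectangles drawn the same way, so a helper keeps the drawing code shorter. A
     minimal sketch, assuming the same turtle instance t positioned at a band's top-left
     corner and heading right:

     def band(t, color, width=800, height=167):
         """ Draw one filled flag band, then move down to the next band's start. """
         t.color(color)
         t.begin_fill()
         for _ in range(2):  # a rectangle is two width/height pairs
             t.forward(width)
             t.right(90)
             t.forward(height)
             t.right(90)
         t.end_fill()
         t.penup()
         t.right(90)
         t.forward(height)  # step down one band height
         t.left(90)
         t.pendown()

     # usage: band(t, "orange"); band(t, "white"); band(t, "green")
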
  4. Hello friends, I am Venom (Gaurav), founder of Venomgrills and a mod at Shell_Crew.
     Let's look at a cool Python script developed by me; I hope you guys enjoy it and learn
     from it, so let's go.

     # importing libraries
     import requests                # requests library for making a get request for the webpage
     from bs4 import BeautifulSoup  # to parse the html and read the content from attributes and tags
     import re                      # regex to check for emails in the parsed data

     emailList = []  # a list to collect the emails we get from each app page
     x = 0           # index into emailList; used to print/save one copy of each email

     # regular expression for the email format
     emailRegex = r"""(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9]))\.){3}(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9])|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])"""

     query = input("Enter your query: ")  # taking any word as input, like food apps or gaming apps
     url = "https://play.google.com/store/search?q=" + query + "&c=apps"  # search url for our query

     # the scrape part, where we grab the app links from the search results page
     response = requests.get(url).content           # get request for the source code of the website
     soup = BeautifulSoup(response, 'html.parser')  # parse the content of the response

     for links in soup.findAll("a", class_="JC71ub"):  # the a tags with class "JC71ub" hold the app links
         link = links.get("href")                      # grab the link from the href attribute
         finalLink = "https://play.google.com" + link  # final link = Play Store origin + app path
         response = requests.get(finalLink).content    # get request to fetch the app page content
         soup = BeautifulSoup(response, 'html.parser') # parse the app page html again
         for emails in re.finditer(emailRegex, str(soup)):  # find the email pattern in the page source
             email = emails.group()   # the matched email as a string
             emailList.append(email)  # add it to the emailList defined above
         print(emailList[x])                # print the newest email found on this page
         output = open("emails.txt", 'a')   # open emails.txt in append mode
         output.write(emailList[x] + "\n")  # write the email to the text file, one per line
         output.close()                     # close the file to save the output
         x += 3  # each email appears 3 times in the page source, so we skip ahead by three
                 # to avoid printing the same email multiple times

     Attachment: playstore_python.mp4
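
     The x += 3 indexing assumes every email shows up exactly three times per page, which is
     fragile. A more robust alternative (my sketch, not from the original post) is to collect
     each page's matches into a set, which de-duplicates no matter how often an email
     repeats:

     import re

     def unique_emails(page_source, pattern):
         """ Return the de-duplicated emails found in one page's html source. """
         return sorted({m.group() for m in re.finditer(pattern, page_source)})

     # usage inside the loop above, replacing the x-based indexing:
     # for email in unique_emails(str(soup), emailRegex):
     #     print(email)
     #     with open("emails.txt", "a") as output:
     #         output.write(email + "\n")
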
  5. Hii guys, I am Venom! Today we will look at a short Python 3 script for a web fuzzer.
     I hope you will learn something new out of it. So let's roll.

     # importing libraries
     import requests  # for making requests to the website
     import os        # for checking the wordlist path

     url = input('Enter the url: ')                 # taking the url as user input
     wordlist = input('Enter the wordlist path: ')  # taking the wordlist path as user input

     # checking if the wordlist exists or not
     if os.path.exists(wordlist) is False:
         # if the wordlist does not exist, say so and stop
         print("Wordlist not found! Please try again!")
     else:
         # if the wordlist does exist, run the fuzzer
         print("Wordlist Found: True!")
         op = open(wordlist, 'r')  # opening the wordlist file
         read = op.readlines()     # reading the lines inside the file
         for path in read:         # loop over the wordlist line by line
             path = path.strip()   # bug fix: drop the trailing newline that readlines() keeps
             finalurl = url + "/" + path        # the candidate url: target url + "/" + path
             response = requests.get(finalurl)  # get request to fetch the status code
             if response.status_code == 200 or response.status_code == 403:
                 # 200 (found) and 403 (present but forbidden) are both worth reporting
                 print(finalurl + " [" + str(response.status_code) + "]")
             else:
                 pass  # ignore every other status code

     Attachment: fuzzer_2-Veed.mp4
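
     Because every probe opens a fresh connection, the fuzzer gets slow on big wordlists. A
     possible speed-up (my sketch, using the same requests library, not part of the original
     post): reuse a requests.Session for connection pooling and fan the paths out over a
     thread pool. The worker count of 10 is an arbitrary placeholder:

     import requests
     from concurrent.futures import ThreadPoolExecutor

     def check(session, url, path):
         """ Probe one candidate path and report interesting status codes. """
         final = url + "/" + path.strip()
         try:
             code = session.get(final, timeout=5).status_code
         except requests.RequestException:
             return  # unreachable paths are simply skipped
         if code in (200, 403):
             print(final + " [" + str(code) + "]")

     def fuzz(url, wordlist_path, workers=10):
         session = requests.Session()  # one pooled connection set shared by all probes
         with open(wordlist_path) as fh, ThreadPoolExecutor(max_workers=workers) as pool:
             for path in fh:
                 pool.submit(check, session, url, path)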