Thursday, March 24, 2022

Tutute.be Hack Showcase - Web penetration and data breach

We are the hackers who recently infiltrated the https://www.tutute.be/ web application. In this article, we'll explain how we did it and what we found.





This was a simple hack, really. We took advantage of a known vulnerability in the application's code to gain access. Once we were in, we were able to view sensitive information such as the names, addresses, and contact details of the creche's clients.


All in all, this simple hack allowed us to access sensitive information. We hope that by sharing it, we can help improve the security of the https://www.tutute.be/ web application and other similar applications.


If you operate a creche or nursery, it's important to be aware of the potential for leaks like this. It's possible that unethical individuals could use the information for less savory purposes.





Ultimately, this hack highlights the importance of security in web applications. If you're not taking steps to protect your data, it's only a matter of time before someone else gets to it.


Here's what we found wrong with the application:

1. It's full of security holes. We were able to easily exploit several vulnerabilities to gain access to the system.


2. The code is a mess. It's poorly written and organized, making it difficult to understand and work with.


3. The database is a disaster. It's full of duplicate and outdated data, making it hard to use and maintain.


4. The user interface is confusing and user-unfriendly. It's hard to figure out how to use the application, and the navigation is unintuitive.


5. The overall design is poor. The application is badly designed and implemented, making it inefficient and difficult to use.


In short, "https://www.tutute.be/" is a terrible web application: full of security holes, poorly written, difficult to use, and inefficient. Avoid it if you can.


We also took a closer look at the web application "tutute.be". What we found was an irresponsible and unethical management team that has put their users at risk.


The first thing we noticed was that the application was not properly secured. There were numerous vulnerabilities that we were able to exploit in order to gain access to the system. Once we were in, we were able to see how poorly the application was designed and how it was being used to gather sensitive information from users.


The information that we were able to gather showed that the management team was using the application to track user activity and collect sensitive data. This data was then being used to target ads and sell products to users. In other words, the management team was using the application to make money off of the users.


We believe that this is a highly unethical practice and it puts the users of the application at risk. We urge the management team to take responsibility for their actions and to make the necessary changes to ensure that the user data is protected.


Hey there, fellow hackers!


We're back with another juicy story – this time, about a data leak at a web application called "https://www.tutute.be/".


Apparently, the folks over at tutute.be weren't too careful with their data, and as a result, we were able to get our hands on a ton of sensitive information.


This includes names, email addresses, dates of birth, phone numbers, and even home addresses.


Needless to say, this is a goldmine for anyone looking to exploit people's personal information.


Until next time,


Happy hacking!




We are selling all sensitive data:


DOWNLOAD DATA

Thursday, January 13, 2022

Burp Suite for Pentester: Web Scanner & Crawler

You might be using a number of different tools to test a web application, mainly to detect hidden web pages and directories or to get a rough idea of where the low-hanging fruit or major vulnerabilities are.

So today, in this article, we'll discuss how you can identify hidden web pages and determine the existing vulnerabilities in a web application, all with one of the best intercepting tools: “Burp Suite”.

Table of Contents

  • The Burp’s Crawler
    • What is Crawler?
    • Crawl with default configurations
    • Customizing the Crawler
  • Vulnerability Scanning over BurpSuite
    • Auditing with default configurations
    • Defining Audit configurations
  • Crawling & Scanning with an advanced scenario
  • Deleting the defined Tasks

The Burp’s Crawler

What is Crawler?

The terms “web-crawler” and “web-spider” are among the most common and are used time and again while testing a web application. So, what is this crawler?

As its name suggests, a crawler surveys a specific region slowly and deeply, and then presents its output in a defined format.

So, is Burp’s crawler the same thing?

According to PortSwigger: “The crawl phase involves navigating around the application, following links, submitting forms, and logging in, to catalog the content of the application and the navigational paths within it.”

In simpler words, we can say that the Burp crawler programmatically moves through the entire web application, follows redirecting URLs, logs in through login portals, and then adds them all in a tree-like structure to the Site Map view in the Target tab.

This crawler functions similarly to the “Dirb” or “DirBuster” tools – web content scanners that brute-force the web server in order to dump the visited, non-visited, and hidden URLs of the web application. To make the idea concrete, a minimal sketch follows below.
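
To illustrate what a crawler does under the hood, here is a minimal sketch in Python using only the standard library. This is nowhere near Burp's actual implementation – the crawl() helper, its page limit, and the demo target are our own illustrative choices.

# Minimal illustrative crawler: fetch a page, collect same-host links,
# and build a simple "site map" of everything discovered.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, limit=50):
    host = urlparse(start_url).netloc
    queue, seen = [start_url], set()
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # dead link or non-HTML content; skip it
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == host:  # stay in scope
                queue.append(absolute)
    return sorted(seen)

if __name__ == "__main__":
    for page in crawl("http://testphp.vulnweb.com/"):
        print(page)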

In earlier versions of Burp Suite, say “1.7”, this crawler was termed the “Spider”. So why did this happen – what new features does the Burp crawler carry that made the Spider vanish?

Let’s dig it out!

Crawl with default configurations

If you’re familiar with the Spider feature, you might be aware that the Spider held its own tab within Burp Suite’s panel. With the enhancements, the Burp crawler now comes pre-defined within the dashboard section, which helps us monitor and control Burp’s automated activities in a single place.

So, to get started with the crawler, let’s fire up Burp Suite and head over to the Dashboard section.

As soon as we land on the dashboard panel, we can see a number of subsections. Let’s explore them in detail:

  1. Tasks – The “Tasks” section carries a summary of all running crawls and scans, whether user-defined or automated. Here we can pause and resume individual tasks, or all tasks together, and we can even view the detailed version of a specific crawl or audit.
  2. Event log – The Event log feature records all the events that Burp Suite generates: for example, if the proxy starts up, an event is logged for it, and if a specific section is not working properly, an event is logged for that too.
  3. Issue Activity – This section lists the common vulnerabilities that Burp Suite has found within the application; we can further segregate them by applying the defined filters according to their severity and destructiveness.
  4. Advisory – This is one of the most important sections of the Burp dashboard, as it demonstrates the selected vulnerability in expanded form: showing the payload with a Request & Response, mentioning the cause of its existence, defining the mitigation steps, and providing references and CVSS scores for our review.

Thereby, to dig into the web application, we need to hit the “New Scan” button placed at the top of the Tasks section.

As soon as we do so, we’ll be redirected to a new pop-up window titled “New Scan”.

There we’ll be welcomed with two options –

  • Crawl & Audit
  • Crawl

However, for this section we’ll opt for “Crawl” only; the other option we’ll discuss later in this article.

As we’re going with the default configurations, we’ll simply type the testing URL, i.e. “http://testphp.vulnweb.com/”, and hit the “OK” button.

As we do so, the window disappears, and over on the dashboard we get our new task listed as “Crawl of testphp.vulnweb.com”; in the event log, we can see the event “Crawl started”.

Within a few minutes, the crawling task finishes and we get a notification there. But where’s the result?

As mentioned earlier, the crawler dumps its results in a tree-like format in the Site Map view of the Target tab, so let’s move there.

Great!! We got what we were after. The right panel lists almost every URL of the web page, along with the HTTP methods and a parameter column that indicates which URLs require a params value.

A number of major vulnerabilities exist because of unsanitized input fields, so with this dumped data we can simply segregate the URLs that contain input values, which can then be tested further. To do this, simply double-click the “Params” field.

However, if we want to check the pages of a specific directory, we can simply navigate to the left-side panel and select our desired option there.

Customizing the Crawler

What if some specific web pages are out of scope? Or the website needs specific credentials to surf the restricted web pages?

In such cases, we need to configure our crawler so that it works the way we want it to. To do this, let’s get back to the dashboard and select the “New Scan” option again – but this time we won’t hit “OK” right after setting the URL.

Configuring Out-of-Scope URLs

Below the protocol setting there is an option for Detailed Scope Configuration, where we’ll simply navigate to “Excluded URL prefixes” and enter the out-of-scope URL, i.e. http://testphp.vulnweb.com/signup.php.

For further customization, we’ll move to the Scan Configuration option, and there we’ll hit the “New” button to set up a new crawler configuration.

As soon as we do so, another window opens with the configuration options. Let’s keep the configuration name as the default; however, you can change it if you wish.

Further, the crawl optimization option ranges from “Fastest” to “Deepest”; we can change it according to our requirements.

The crawl limit is an important factor, as it determines the time required and the depth to which an application is crawled. We’ll set the maximum crawl time to 50 minutes and the maximum unique locations discovered to 5000.

Some applications carry user registration or login portals; checking both of these options guides the Burp crawler to self-register with random values if it encounters a signup portal, and even to use wrong credentials at login portals, in order to observe the website’s behaviour.

With all these configurations in place, as soon as we hit the “Save” button our crawler gets listed in the New Scan dashboard.

What if the crawler encounters restricted pages, or an admin portal? For such situations, let’s feed it some default credentials so that the crawler can use them!

Navigate to the “Application login” section and click on “New”.

Over in the pop-up box, enter the desired credentials & hit the “OK” button.

Along with all these things, we’re having one more option within the “New Scan dashboard”, i.e. “Resource Pool”.

A resource pool is basically a section defined for concurrent requests; in simpler terms, it controls how many requests the crawler sends to the application in one go, and what the time gap between two requests should be. (A minimal throttling sketch in Python follows after the next paragraph.)

Therefore, if you’re testing a fragile application that could go down under an excessive number of requests, you can configure it accordingly; but as we’re testing the demo application, we’ll leave the defaults.
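
Here is the resource pool idea in miniature – a hedged sketch, not Burp's implementation. The pool size, delay, and target list are illustrative assumptions.

# Cap concurrency and pace requests, like a scanner's resource pool.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = ["http://testphp.vulnweb.com/"] * 10  # demo targets
MAX_CONCURRENT = 2    # "maximum concurrent requests"
DELAY_SECONDS = 0.5   # "delay between requests"

def fetch(url):
    time.sleep(DELAY_SECONDS)  # pace each worker to spare fragile apps
    return url, urlopen(url, timeout=5).status

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
    for url, status in pool.map(fetch, URLS):
        print(status, url)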

Now, as we hit the “OK” button, our crawler starts, and it can be monitored from the dashboard.

Now, let’s wait for it to end! As we navigate to the Target tab, we get our output listed, and we can notice that the signup page is not mentioned – which shows that our configuration worked properly.

 

Vulnerability Scanning Over Burpsuite

Besides being an intercepting tool, Burp Suite acts as a vulnerability scanner too; it calls these scans “Audits”. There are a number of vulnerability scanners on the web, and Burp Suite is one of them, as it is designed to be used by security testers and to fit closely with the existing techniques and methodologies for performing manual and semi-automated penetration tests of web applications.

So let’s dig into the “testphp.vulnweb” vulnerable application and check out what major vulnerabilities it carries.

Auditing with the default configuration

As we’ve already crawled the application, it will be simpler to audit it; however, to launch a scan all we need is a URL, whether we get it by intercepting a request or through the Target tab.

From the screenshot, you can see that we’ve sent the base URL by right-clicking it and opting for “Scan”.

As soon as we do so, we’re redirected back to the New Scan dashboard. But wait!! This time we have one more option, i.e. “Audit selected items”; as soon as we select it, we get all the URLs in the Items to Scan box (this happens because we opted for the base request).

As we’re dealing with the default audit, we’ll simply hit the “OK” button there.

And now I guess you know where we need to go. Yes !! The Dashboard tab.

This time, not only have the Tasks section and the Event log changed, but we can see variations in the Issue activity and Advisory sections too.

From the above image, we can see that within a few minutes our scanner sent about 17,000 requests to the web application and dumped a number of vulnerabilities, graded by severity level.

What if we want to see the detailed version?

To do so, simply click on the View details link placed at the bottom of the defined task; we’ll be redirected to a new window with all the refined details within it.

Cool !! Let’s check the Audited Items.

As we hit the Audit items tab, we land on the detailed version of the audited sections, where we get the statuses, active & passive phases, requests per URL, and more.

Further, we can check the detailed issues that have been found in the web application.

We can even filter them according to their defined severity levels.

That’s not all: over in the Target tab, the Issues and Advisory are mentioned too, and if we look at the defined tree in the left panel we can see some colourful dots, mainly red and grey, indicating that those URLs have high-severity and informational vulnerabilities respectively.

In the image below, within the Advisory for SQL Injection, there is a specific panel for Request & Response; let’s check it and determine how the scanner confirms that an SQL injection exists.

Navigating to the 3rd request, we see a time-based SQL query injected into the “artist=” field.

And as we replayed this request in the browser, we got a delay of about 20 seconds, which confirms that the vulnerabilities dumped by the scanner are triggerable. (A minimal sketch of this time-delay check follows below.)
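
The same confirmation can be scripted: time a normal request, time one carrying a sleep payload, and compare. This is a hedged sketch against the demo application only – the exact payload Burp injected may differ, and the SLEEP() call assumes a MySQL backend.

# Confirm a time-based SQL injection by measuring the response delay.
import time
from urllib.parse import quote
from urllib.request import urlopen

BASE = "http://testphp.vulnweb.com/artists.php?artist="

def timed_get(value):
    start = time.monotonic()
    urlopen(BASE + quote(value), timeout=30).read()
    return time.monotonic() - start

baseline = timed_get("1")
delayed = timed_get("1 AND SLEEP(10)")  # illustrative time-based payload
print(f"baseline: {baseline:.1f}s, with payload: {delayed:.1f}s")
if delayed - baseline > 8:
    print("Response delayed -> the injection looks triggerable")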

You might be wondering: okay, I’ve got the vulnerability, but I’m not familiar with it – what more could I get out of it, or how could I chain it to make a crucial hit?

To solve this, there is an Issue definition section, where we can simply read up on the defined or captured vulnerability.

Defining Audit Configurations

Similar to the crawling option, we can configure this audit too, by getting back to the “New Scan” dashboard with a right-click on the defined URL and hitting Scan.

In the above image, if we scroll down, we get the same option to set the out-of-scope URL as in the Crawl section.

Now, moving further with the scan configurations, hit the “New” button as we did earlier.

We’ll keep the configuration name as the default and set the audit accuracy to normal; you can define these according to your needs.

Now comes the most important section: defining the issues reported, by selecting the “Scan Type”. In order to complete the scan faster, I’m simply taking the Light active option, but you can opt for any of the following –

  • Passive – These issues are detected simply by inspecting the application’s requests and responses.
  • Light active – These issues are detected by making a small number of benign additional requests.
  • Medium active – These are issues that can be detected by making requests that the application might reasonably view as malicious.
  • Intrusive active – These issues are detected by making requests that carry a higher risk of damaging the application or its data. For example, SQL injection.
  • JavaScript analysis – These are issues that can be detected by analyzing the JavaScript that the application executes on the client-side.

You might be aware of the concept of insertion points – the locations within requests where payloads are injected, and thus the places where a vulnerability gets hit. The Burp scanner audits these insertion points too, and they can also be configured in this phase.

Now that we’re done with the configuration, when we hit the “Save” button our customized audit gets listed in the New Scan dashboard.

However, the Application login option is disabled in this section, as there is no specific need to log in to an application just for vulnerability testing.

Now we know what’s next: hitting the OK button and moving to the dashboard. As soon as we reach there, we get the result according to our configuration, with about 2,700 requests.

But this time, there is only one major issue.

Now, if we move back to the Target tab, select any request in the left panel and right-click on it, we get two options rather than one: the last customization we configured gets added to this menu, and if we send any request to it, auditing starts accordingly.

We’ll opt for the Open scan launcher again to check the other features too. As we head back, we’re welcomed by our previous customized audit, but at the bottom there is a “Select from library” option; click there and check what it offers.

So, wasn’t it a bit confusing to configure the audit by manipulating every option it has?

To get rid of this, Burp Suite offers one more great feature: built-in audit checks, where we simply need to select one and continue.

And as we select one, our option gets listed back in the New Scan dashboard.

Hit “OK” and check the result on the dashboard! Further, if we now navigate to the Target tab and right-click on any request, we get three options rather than two.

Crawling & Scanning with an Advanced Scenario

Up till now, we’ve used the scanner and the crawler individually, but what if we want to do both together? To solve this problem too, the Burp Suite creators give us an end-to-end scan option, where Burp Suite will –

  1. First crawl the application and discover the contents and functionalities within it.
  2. Then start auditing it for vulnerabilities.

To do all this, all it needs is a URL.

Let’s check how we can do it.

Back on the dashboard, select “New Scan”, and this time opt for “Crawl & Audit”, then enter the URL.

Great!! Now let’s check the Scan Configuration options: as we move there and click on the “New” button, rather than redirecting us to the customization menu it asks where we want to go – crawl optimization or audit configuration.

However, all the internal options are the same.
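
To see the crawl-then-audit idea end to end in miniature, here is a rough Python sketch that chains the crawl() helper from the earlier sketch with a naive audit pass. The single-quote probe and the error signatures are illustrative assumptions – nothing like Burp's real checks.

# End-to-end in miniature: crawl first, then audit what was discovered.
from urllib.request import urlopen

SQL_ERRORS = (b"SQL syntax", b"mysql_fetch", b"ORA-")  # example signatures

def audit(urls):
    findings = []
    for url in urls:
        if "?" not in url:  # only probe parameterised URLs
            continue
        try:
            body = urlopen(url + "'", timeout=5).read()  # naive probe
        except Exception:
            continue
        if any(sig in body for sig in SQL_ERRORS):
            findings.append(url)
    return findings

# Using the crawl() function defined in the earlier sketch:
# for hit in audit(crawl("http://testphp.vulnweb.com/")):
#     print("possible SQL injection:", hit)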

Deleting the Defined Tasks

Knowing how to start and configure things is not enough – we should also know how to end them. So let’s click on the dustbin icon shown with each task in order to delete our completed or incomplete tasks.

And as we do so, we get a confirmation pop-up.

Tuesday, November 30, 2021

Wireshark for Pentester: Password Sniffing

Many people wonder whether Wireshark can capture passwords. The answer is undoubtedly yes! Wireshark can capture not only passwords, but any type of data passing through a network – usernames, email addresses, personal information, pictures, videos, anything. Wireshark can sniff passwords as long as we can capture the network traffic. But the question is: what kind of passwords are they? Or, more precisely, which network protocols’ passwords can we obtain? That is the subject of this article.

Table of Contents

  • Plain text network protocols
  • Trace Files
  • Capture HTTP Password
  • Monitoring HTTPS Packets over SSL or TLS
  • Capture Telnet Password
  • Capture FTP Password
  • Capture SMTP Password
  • Analyzing SNMP Community String
  • Capture MSSQL Password
  • Capture PostgreSQL Password
  • Creating Firewall Rules with Wireshark
  • Conclusion

Plain text network protocols

So, how is it possible for Wireshark to capture passwords? This is due to the fact that some network protocols do not use encryption. These protocols are referred to as clear text (or plain text) protocols. Because clear text protocols do not encrypt communication, all data, including passwords, is visible to the naked eye. Anyone who is in a position to see the communication (for example, a man in the middle) can eventually see everything.

In the sections that follow, we’ll take a closer look at these protocols and see examples of captured passwords using Wireshark.

Disclaimer: To protect client data, all screenshots have been censored and/or modified.

Trace Files

To get hands-on with these labs, you can download all the trace files from here.

  1. Capture HTTP Password
  2. Monitoring HTTPS Packets over SSL or TLS
  3. Capture Telnet Password
  4. Capture FTP Password
  5. Capture SMTP Password
  6. Analyzing SNMP Community String
  7. Capture MSSQL Password
  8. Capture PostgreSQL Password

Source of some of the trace files: Wireshark.org

Capture HTTP Password

The Hypertext Transfer Protocol (HTTP) certainly needs no introduction. It usually runs on port 80/TCP, and as it is a plain text protocol, it gives the communicating parties little to no privacy. Anyone able to observe the communication can capture everything, including passwords, sent over that channel.

While all major browser vendors have made considerable efforts to discourage the use of HTTP as far as possible, during penetration testing HTTP can still be encountered on internal networks.

Here is an example of login credentials captured in a POST request in an HTTP communication (a scapy sketch for pulling such credentials out of a trace file follows below):
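
If you'd rather script this than click through Wireshark, a few lines of scapy do the job. This is a hedged sketch: the trace file name and the form field names (“uname”, “pass”) are illustrative assumptions – adjust them to the capture at hand.

# Scan a trace file for plain text HTTP POST bodies carrying credentials.
from scapy.all import rdpcap, TCP, Raw

for pkt in rdpcap("http.pcapng"):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        data = bytes(pkt[Raw].load)
        # HTTP is clear text: request lines and form bodies read as-is
        if data.startswith(b"POST") or b"uname=" in data or b"pass=" in data:
            print(data.decode("ascii", "replace"))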

Monitoring HTTPS packets over SSL or TLS

Dissect HTTPS Packet Captures

Open the provided HTTPS/TLS.pcapng file, where you can see:

  • The 3-way handshake taking place
  • A Hello from the SSL client and the ACK from the server
  • The Server Hello and then an ACK
  • The exchange of key and cipher information
  • The start of application data exchange

Then, if we click on any application data, that data is unreadable to us. However, with Wireshark we can decrypt that data… all we need is the server’s private key. Don’t worry, we have already provided the key along with the PCAP file.

To decrypt the encrypted application data over TLS or SSL, navigate to:

Edit > Preferences > Protocols > TLS

And add these values

IP address: 127.0.0.1

Port: 443

Key File:

Hurray!!! As you can see, we have successfully decrypted the data over TLS.

Capture Telnet Password

The Telnet protocol, which uses port tcp/23, requires no introduction. It is mainly used for administrative convenience and is known for its insecurity. Since encryption is not available, neither privacy nor protection from unauthorized access is available. Telnet is still used today, however…

Telnet is a protocol used for administration on a wide range of devices. For some devices, Telnet is the only option (e.g. there is no SSH nor HTTPS web interface available), which makes it extremely difficult for organizations to completely eliminate it. Telnet is commonly seen on:

  • Video Conferencing Systems
  • Mainframes
  • Network equipment
  • Storage and Tape systems
  • Imaging devices
  • Legacy IP based Phones

Since Telnet is a plain text protocol, an adversary can listen in on the communication and capture all of it, including passwords. The following screenshot shows an example of a Telnet communication with the captured password (a minimal live-sniffing sketch follows below):

With that, you can see how an attacker could completely take over the mainframe system.
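
For a scripted equivalent, scapy can print Telnet payloads live off the wire. A hedged, lab-only sketch: it assumes root privileges and a position on the traffic path.

# Live-sniff Telnet (TCP/23) and print payload bytes as they pass by.
from scapy.all import sniff, Raw

def show(pkt):
    if pkt.haslayer(Raw):
        # Telnet carries keystrokes unencrypted, often one character per packet
        print(bytes(pkt[Raw].load).decode("ascii", "replace"), end="")

sniff(filter="tcp port 23", prn=show, store=False)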

Capture FTP Password

The File Transfer Protocol (FTP) usually uses ports TCP/20 and TCP/21. Although this protocol is very old, some organizations still use it in their networks. FTP is a plain text protocol, so a well-positioned attacker can capture FTP login credentials with Wireshark very easily. This screenshot shows an FTP password captured with Wireshark as an example:

As you can see, by sitting in the network we can easily capture FTP credentials (a short extraction sketch follows below).
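
Because the FTP control channel sends its USER and PASS commands in clear text, extracting them from a trace file is little more than a string match. The trace file name below is an illustrative assumption.

# Pull USER/PASS commands off the FTP control channel (TCP/21).
from scapy.all import rdpcap, TCP, Raw

for pkt in rdpcap("ftp.pcap"):
    if pkt.haslayer(TCP) and pkt[TCP].dport == 21 and pkt.haslayer(Raw):
        line = bytes(pkt[Raw].load)
        if line.startswith((b"USER", b"PASS")):
            print(line.decode("ascii", "replace").strip())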

Capture SMTP password

SMTP (Simple Mail Transfer Protocol) has also accompanied us for many decades. It uses TCP/25, and although a secure alternative exists (SMTPS on TCP/465), port TCP/25 is still open on almost every mail server today for backward compatibility.

Many TCP/25 servers require the ‘STARTTLS’ command to begin SSL/TLS encryption before any authentication attempts are made. However, within certain organizations, mail servers still support plain text authentication over the unencrypted channel – mostly because of legacy systems on internal networks.

If someone is using plain text authentication during an SMTP transaction, the credentials can be sniffed by a well-positioned attacker, who only has to decode the username and password from Base64: SMTP uses Base64 encoding for the username and password during the transaction.

Captured SMTP credentials can be seen in the following screenshot, together with the subsequent Base64 decoding using a decoder utility.

There are many ways to decode Base64 strings. Here, I’m using an online tool designed specifically for decoding, such as base64decode.org or base64decode.net. But beware – we may not want to disclose private credentials to other parties on the Internet. During penetration tests and offensive engagements, sensitivity and privacy are especially crucial. (A local alternative in Python is sketched below.)

Now, just copy the value of the user and password strings and decode them with a Base64 decoder, as shown in the image below. First, I’m decoding the user string:

User: – Z3VycGFydGFwQHBhdHJpb3RzLmlu

As you can see in the above screenshot, we are able to see the username in clear text. Similarly, we can decode the password:

Password: – cHVuamFiQDEyMw==
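
To keep captured credentials off third-party websites, the same decoding takes a few lines of standard-library Python:

# Decode the captured AUTH LOGIN strings locally instead of pasting
# them into an online decoder.
import base64

user = base64.b64decode("Z3VycGFydGFwQHBhdHJpb3RzLmlu").decode()
password = base64.b64decode("cHVuamFiQDEyMw==").decode()
print("User:", user)
print("Password:", password)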

Hurray!!! Now we have enough credentials to take over a system.

Analyzing SNMP Community String

The Simple Network Management Protocol (SNMP) typically runs on port UDP/161. Its main objective is to manage and monitor network devices and their functions. SNMP has three versions, and the first two (v1 and v2c) are plain text. SNMP uses something equivalent to authentication, called a community string – so capturing an SNMP community string is almost the same as capturing credentials.

While SNMPv3 has been with us for nearly two decades, adoption takes time. Most organizations still use v1 or v2c in their internal networks, typically for backward compatibility with legacy systems.

Here is an example of an SNMP community string captured using Wireshark:

An attacker could now use the community string to collect detailed system information. This could enable the attacker to learn about the system in sensitive detail and to mount further attacks. Note that the community string sometimes also allows modifying the remote system configuration (read/write access). (A short extraction sketch follows below.)
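
Scripted extraction is straightforward here too; this sketch assumes scapy's SNMP dissector and an illustrative trace file name:

# Pull v1/v2c community strings out of a trace file.
from scapy.all import rdpcap
from scapy.layers.snmp import SNMP

for pkt in rdpcap("snmp.pcap"):
    if pkt.haslayer(SNMP):
        print(pkt[SNMP].community.val)  # community string travels in clear text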

Capture MSSQL Password

The Microsoft SQL Server usually runs on port TCP/1433; this is yet another service whose password we can capture with Wireshark. If the server is not configured with the ForceEncryption option, it is possible to record plain text authentication directly or via a downgrade attack. MSSQL credentials can be easily captured by a man in the middle.

Here’s an example of MSSQL credentials captured with Wireshark:

Now we have a privileged account on the MSSQL server. This has a critical impact, allowing the attacker to take complete control over the database server, and it could even lead to remote command execution (RCE).

Capture PostgreSQL Password

PostgreSQL is yet another widely used SQL database server. It runs on TCP port 5432 and accepts a variety of authentication methods. It is usually set to disallow clear-text authentication, but it can also be set to allow it. In such cases, a well-positioned attacker could intercept network traffic and obtain the username and password.

It should be noted that PostgreSQL authentication occurs across multiple packets. The username and database name come first:

We can also see the PostgreSQL password in the following network packet:

 

Creating Firewall Rules with Wireshark

Although Wireshark cannot block network traffic, it can assist us in developing rules for our firewall. Wireshark will create firewall rules based on the traffic we’re looking at. To block a packet, all we have to do is select it and navigate through the menu (Tools > Firewall ACL Rules):

Selected rules can now be copied and pasted directly into our firewall. The following firewalls’ syntax is supported by Wireshark:

  • Windows Firewall (netsh)
  • IP Filter (ipfw)
  • NetFilter (iptables)
  • Packet Filter (pf)

Conclusion

Wireshark can capture authentication for a wide range of network protocols. As long as we are able to eavesdrop on the network traffic and the communication is not encrypted, there is a possibility. And passwords aren’t the only thing a well-placed attacker can capture – virtually any type of data passing through the network can be taken.
