Welcome to Understanding Link Analysis. The purpose of my site is to discuss the methods behind leveraging visual analytics to discover answers and patterns buried within data sets.

Visual analytics provides a proactive response to threats and risks by holistically examining information. Unlike traditional data mining, visualizing information allows patterns of activity that run contrary to the norm to surface within very few occurrences.

We can dive into thousands of insurance fraud claims to discover clusters of interrelated parties involved in a staged accident ring.

We can examine months of burglary reports to find a pattern leading back to a suspect.

With the new generation of visualization software our team is developing, we can dive into massive data sets and visually find new trends, patterns and threats that would take hours or days using conventional data mining.

The eye processes information much more rapidly when it is presented as images; this has been true since children first learn to read. As our instincts develop over time, so does our ability to process complex concepts through visual identification. This is the power of visual analysis that I focus on in my site.

All information and data used in articles on this site is randomly generated with no relation to actual individuals or companies.

Analyzing Medical Fraud

One of the most prevalent and costliest frauds is medical provider and billing fraud. It is also a type of insurance fraud that incorporates numerous other criminal activities. Through analyzing and investigating medical fraud, you will find:

  • Staged Accident Rings
  • Money Laundering
  • Legal Provider Fraud
  • Drug Offenses
  • Tax Evasion
  • Wire and Mail Fraud
Medical fraud is probably one of the easiest investigations to migrate into RICO statutes, as the activity incorporates so many other criminal activities.

Medical fraud analysis and investigation is very complicated because it is so paper and record intensive. I can't imagine putting together a medical fraud investigation without visual analysis, as there are so many records to incorporate and rationalize to detect the activity.

The best place to start is through strategic or proactive analysis of billing data. There is a wealth of information contained on HCFA medical billing forms that can be utilized to detect patterns and relationships between doctors, clinics, patients and the procedures they are billing.

Medical Billing Analysis

The first place I am going to start is by patterning what medical providers are billing me to find indicators of fraud such as overbilling, billing for services not rendered, and improper or illegal procedures.

This task can be accomplished in two different ways: one by utilizing desktop applications such as Microsoft Access or Excel, and the other by leveraging visual analysis. Both methods assume that you have already entered your medical billing into a database format which can be incorporated into your visual analysis software or brought into desktop applications.

Let's start by utilizing Excel to analyze billing patterns. For this scenario I am going to download six months of medical billing from a problematic geographical area; in this example we are going to use Naples, Florida. We can narrow this data set down a little more by eliminating or focusing on specific specialties that are more prone to fraudulent activity. For the purpose of this exercise I am going to focus on chiropractors billing from Naples, Florida to bring my data set to a manageable size.

Below is an example data set downloaded from my medical billing database.

By utilizing pivot tables, I am able to get an overview of what this group of providers in the area is billing for and can detect any irregularities.

Right away I spot some billing irregularities: there is a chiropractor who is billing 99205 on several claims. For those who are not familiar with medical billing, for a person to legitimately be billed this particular code, you should be close to death lying in a hospital, not walking into a chiropractic office.

I can also look in the pivot tables for the same person visiting multiple clinics across different claims. This person is either really unlucky, or this can be an indicator of staged accident activity. Another option is to look at patterns of billing between chiropractic offices, radiology clinics and medical equipment supply companies, which can be an indicator of patient brokering, particularly if the same person owns all three locations.

Through simple use of pivot tables on one set of data, I can look at the results in several different ways without having to run any additional queries to locate targets of opportunity for further investigation.
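For those who prefer to script the same pivot-style summary rather than build it in Excel, here is a minimal sketch in Python. The provider names, CPT codes and patient IDs are invented for illustration and do not reflect any real billing export format.

```python
from collections import Counter

# Hypothetical billing rows: (provider, CPT code, patient).
claims = [
    ("Provider A", "98940", "P001"),
    ("Provider A", "99205", "P002"),
    ("Provider A", "99205", "P003"),
    ("Provider B", "98940", "P004"),
]

# Pivot-style summary: claim count per (provider, CPT code) pair.
pivot = Counter((provider, cpt) for provider, cpt, _ in claims)

# Flag any provider billing 99205 at all -- unusual for a chiropractic office.
flagged = sorted({p for (p, cpt), n in pivot.items() if cpt == "99205"})
print(flagged)
```

The same counts that drive the Excel pivot table drop out of a two-line `Counter`, and the flagged list becomes the target list for further investigation.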

Medical Provider and Patient Analysis

Now that I have examined all of my medical billing for Naples Florida, I have narrowed down a few providers which may be involved in medical fraud or other nefarious insurance fraud activities.

At this point I want to produce a visualization of the clinic and the patients to look for signs of fraud such as medical providers who are associated with numerous clinics and patients who are treating at them, particularly if those patients are involved in multiple medical claims.

I am going to import my medical data into my analytical software to detect these relationships. Like the other import specifications we have put together, there are unique identifiers in medical billing which we can leverage for our visualization.

For clinics in my data, I am going to utilize the tax ID as the unique identifier. Next, I want to link the medical providers to the clinics. The unique identifier for the doctor is going to be the license number, which is captured on the HCFA form; this will prevent two doctors with the same name from being created as one entity.

Next I am going to link the patients to the clinic, the doctor, or both, depending on the type of claim. For auto-related medical claims, I have found it is best to link the patients to the clinics; for Medicare or Medicaid fraud, linking the patients to the doctors is most effective. That is because in auto-related medical fraud, the accidents themselves are normally also fraudulent, staged by runners who work for a group of clinics.
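The linking rule above can be sketched as a small edge-list builder. The claim rows and field names here are made up for illustration; the identifiers follow the convention already described, with clinics keyed by tax ID and doctors by license number.

```python
# Hypothetical claim rows; claim_type drives which entity the patient links to.
claims = [
    {"claim_type": "auto",     "patient": "P001",
     "clinic_tax_id": "59-111", "doctor_license": "CH1001"},
    {"claim_type": "medicare", "patient": "P002",
     "clinic_tax_id": "59-222", "doctor_license": "CH1002"},
]

edges = []
for row in claims:
    if row["claim_type"] == "auto":
        # Auto-related claims: link the patient to the clinic.
        edges.append((row["patient"], "clinic:" + row["clinic_tax_id"]))
    else:
        # Medicare/Medicaid claims: link the patient to the doctor.
        edges.append((row["patient"], "doctor:" + row["doctor_license"]))

print(edges)
```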

In both scenarios I am looking for the migration of doctors across numerous clinics, with the patients following them to their associated clinics, which is a good indicator of fraud, such as in the visualization below.

What we are seeing here is multiple doctors associated with multiple medical clinics. Along with this pattern, we also see the patients following the group of doctors to each clinic they are associated with.

I can then leverage visual analysis to examine the medical billing to see if the same CPT codes are being used for each patient who is treating at the group of clinics.

From here I am going to incorporate the legal providers who are representing the treating patients. Medical fraud normally requires several people filling roles in the scam:

  • The runner who works for the clinics and brings in the patients through staged accidents or solicitation
  • The lawyer who is associated with the scheme and who "represents" the claimants in order to maximize the benefits from the insurance company.
  • The doctor who "treats" the patients
  • The clinic that houses the scheme and bills the insurance company.
To understand the relationships through all these moving parts, we can visualize the events from the medical provider, lawyer and clinics in a timeline to find patterns.

Ultimately, the goal of the investigation is to find the top of the pyramid, normally the person who is getting all of the money. The claimants are rarely in charge of the scheme and are normally paid very little considering the profits. In the case of medical fraud it can be the doctor, the lawyer, or the person who finances the clinics, and following the money is the only way to determine who is ultimately responsible.

This is where we shift from our medical fraud analysis to our financial analysis included on the site. In this article we have developed a query to make a data set of medical data for analysis, utilized pivot tables to develop targets for investigation and used visual analysis to discover the scheme.

Aside from using visual analysis to help you investigate medical fraud, it will also help your counsel, prosecutor and jury more easily understand the fraud by seeing it in a visual representation. While medical fraud is one of the most complicated fraud schemes to investigate, it is also one of the hardest to explain to a criminal or civil jury, one of the reasons why prosecutors are reluctant to present it. The way visualization helps you understand and organize your investigation will ultimately serve to help the people you are explaining it to.

Visualizing Network Information For Analysis And Investigation

Those who are engaged in internal investigations will no doubt end up having to incorporate network analysis into their investigations. With so many companies, and criminals for that matter, going paperless, email logs, network and firewall logs, and server logs have increasingly been integrated into criminal and corporate investigations.

If you are lucky enough to have a strong network security department or a vendor who provides network trace or packet sniffing services, you are at least halfway ahead of the game. The next task is how to organize the data that has been provided to you in hundreds of thousands of lines of network traffic logs.

There are two scenarios that I would like to cover, and while they are just the tip of the iceberg for network analysis, they are the most common types to be leveraged by investigators who are not specialists in network analysis.

The first scenario I would like to examine is an analysis of client network traffic. As a law enforcement or corporate investigator, you receive information that an individual is utilizing a network for the purpose of downloading child pornography. You request your network lead or IT specialist to put a packet sniffer on the individual's network connection to capture the information being sent and received from the suspect's computer.

For the purpose of this first scenario I am going to utilize a freeware program called Wireshark, a network analysis package that is similar to those used in a corporate environment. For demonstration, I ran a network trace on my own laptop for approximately 10 minutes, which produced 22,000 lines of IP and packet information (none of the IPs actually go to a child pornography site; this will be simulated). During the 10 minutes, I performed multiple tasks online, including visiting one site repeatedly, which is my simulated child pornography site.

Network Trace Analysis:

The first step I am going to need to take in order to visualize the packet information I have captured from Wireshark is to export the log file into a format that can be imported by my visualization software.

From the screenshot above you can see that Wireshark has logged all packet information and has the ability to export that data into a comma-delimited format, which is perfect for importing into my visualization software. Exported into Excel as a comma-delimited file, the information is organized below.

My investigative objective is to determine what web sites this client is going to and the velocity at which they are going to them. This type of investigation resembles telephone toll analysis in that there is an origination and a destination entity as the primary links in the visualization. Like telephone toll analysis, I am going to visualize velocity between the originating and destination entities by link line width, making the links with the highest velocity thicker. Assuming someone is crazy or sick enough to hit child pornography links at work, the chances are they do it quite a bit; those are the sites I want to initially focus on.
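The velocity counts that drive the link-line widths can be computed directly from the exported rows. A minimal sketch, assuming a Wireshark CSV export with "Source" and "Destination" columns; the sample rows stand in for a real capture.

```python
from collections import Counter

# Stand-in rows for a Wireshark comma-delimited export.
rows = [
    {"Source": "192.168.1.5", "Destination": "203.0.113.9"},
    {"Source": "192.168.1.5", "Destination": "203.0.113.9"},
    {"Source": "192.168.1.5", "Destination": "198.51.100.4"},
]

# Velocity = occurrence count per (source, destination) pair.
velocity = Counter((r["Source"], r["Destination"]) for r in rows)

# Highest-velocity destinations first -- the links to examine initially.
for (src, dst), n in velocity.most_common():
    print(f"{src} -> {dst}: {n} packets")
```

The pair counts map straight onto link-line width in the chart: the thicker the link, the higher the velocity.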

I begin setting up my import specification by establishing the identity of the originating IP as the originating computer in my visualization, assigning it the "source" column as the identity. The originating IP is linked to the destination IP. To contrast the difference, I am color coding the destination IP, or computer icon, a different color; in this case it's red. I am assigning the destination IP the "destination" column as the identity.

There is some additional information captured by Wireshark that I want to include in my origination and destination entities because it is important to my analysis; this includes the "info" column, which contains the packet information sent between the two IPs.

For the link line, I am going to assign the total number of occurrences in the data as the link line's label. This is important when creating link lines which are linear based on occurrence. There is also a date and time column captured in the data which I am going to map to the link line. This will be important in my second visualization, where I am creating a timeline.

Now that the import specification has been created, let's pull the trigger on it and see what our network visualization looks like.

As you can see, I have managed to condense the 22,000 lines of network log into a visualization that quickly identifies the highest velocity destination IPs for my investigation. This was a fairly small sample and only from one client host; the same analysis works for multiple clients with hundreds of thousands of lines.

I can then take the same data and alter my import specification to illustrate the network traffic in a timeline, indicating the date and time the client computer connected to specific web sites.

For my timeline, the import specification is going to differ a bit. I am going to assign the originating IP and destination IP as the theme line entities. For the link line I have two options. I can create a new link line every time the originating IP connected to the destination IP and bind it to the timeline, or I can make the link line linear based on occurrence, just like the link chart; however, only the first occurrence will be bound to the timeline, as only one link line will be created for each unique connection between the two IP addresses.
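The two link-line options reduce to a simple difference in how the connection events are keyed. A sketch with invented timestamps and IPs:

```python
# Connection events: (source IP, destination IP, timestamp).
events = [
    ("192.168.1.5", "203.0.113.9", "2009-03-01 10:02"),
    ("192.168.1.5", "203.0.113.9", "2009-03-01 10:15"),
    ("192.168.1.5", "198.51.100.4", "2009-03-01 10:20"),
]

# Option 1: every occurrence becomes its own time-bound link (full history).
per_occurrence = events

# Option 2: one linear link per unique pair, bound to its first occurrence.
first_seen = {}
for src, dst, ts in events:
    first_seen.setdefault((src, dst), ts)

print(len(per_occurrence), "links vs", len(first_seen), "links")
```

Option 1 preserves every visit on the timeline at the cost of more links; option 2 keeps the chart compact but only anchors each connection at its first occurrence.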

For those who are engaged full time in network analysis and investigations, there are much more complex scenarios you will be tackling such as server log connections, server file access, firewall logs and the like. All of these scenarios can be visualized by employing the same import methodology just at a larger scale such as in botnet, hacking or virus investigations. Hopefully the examples provided give some insight and ideas into creating much more complex network analysis visualizations.

Email / Exchange Server Communication Analysis:

The second scenario is common in a wide variety of investigations: the visualization of Outlook, Lotus, or Exchange server communication.

In this scenario, just like before, I have completed my network analysis of an individual suspected of accessing child porn sites on a client computer. The next step in my investigation is to determine who this person communicates with the most.

I am going to leverage visual analysis to import Outlook or Exchange server logs and analyze this person's flow of communication by email. This type of analysis can be considered association analysis; you could also theoretically use it across multiple email accounts for social network analysis within an organization.

The first step is to export the Outlook logs into a comma-delimited format for import into my visual analysis software. The output from Outlook or an Exchange server is going to look similar to the illustration below.

From this data, there are several types of visualizations I can make. For determining strength of associations I would import the data into a link chart using linear widths for link lines to determine who this person communicates with. For time sensitive investigations, I would import this information into a time line.

In email data there are three entities to create and link, each with a different importance in an investigation. The first entity is established by the "from" field in the Outlook log. Since this field is going to change based on whether email is coming in or going out, it's important to create a link chart with directed links to establish the flow of communication.

For example, Andrew Marane could be in the "from" field if I were sending a message out, or I could appear in the "to" field if I were receiving the message. In my visualization there will be one "Andrew Marane" entity created, so by using directed links I can establish who "Andrew Marane" sends messages to and receives messages from.

I am going to begin to set up my import specification from the Outlook data. I am going to utilize a theme line layout, but there is a difference in the way I am going to import the data compared to other imports. I have four potential entities but I can only utilize two in a theme line. All of the entities relate to who sent the message, or the "from" entity.

In order to show the string of communication in my visualization I am going to have an import spec to import and visualize:

From linked to To
From linked to CC
From linked to BCC

Each import specification resembles the others; the only change is to the identity of the second entity for each of the different message roles. When I am ready to import my data for visualization, I am going to run the three import specifications in a row to bring all the data into one chart.
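The three passes above can be sketched as one loop over the message roles, producing a single directed edge list tagged by role so the link color can vary. The log rows are invented examples, not a real Outlook export.

```python
# Stand-in rows for an exported email log; multiple recipients would be
# separated by semicolons in a real export.
log = [
    {"From": "Andrew Marane", "To": "Bob Smith",
     "CC": "Carol Day", "BCC": "Brandy White"},
    {"From": "Bob Smith", "To": "Andrew Marane", "CC": "", "BCC": ""},
]

edges = []
for row in log:
    for role in ("To", "CC", "BCC"):
        # One pass per role: From -> To, From -> CC, From -> BCC.
        for name in filter(None, row[role].split(";")):
            edges.append((row["From"], name.strip(), role))

print(edges)
```

Because the sender stays the first element of every edge, the directed links establish who sends to whom, and the role tag carries the BCC pattern into the chart.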

Let's run our import specification and see what it looks like:

From this visualization, we learn several different things. First, I changed the link label color to correspond to the role; in this case red is a BCC link. My suspect is BCC'ing Brandy White on almost every communication. Also, because link width is based on velocity, the link lines are thickest between the subjects who communicate the most.

This was a fairly small demonstration file for email. The majority of email logs I have looked at contain thousands of line entries. Within each line entry you can have anywhere from one to ten different people the message is sent to, copied to, or blind copied on. By visualizing this information we can quickly establish a pattern of networking and communication between groups of individuals under investigation.

Creating Timelines From RSS News Feeds

Those who analyze worldwide events for threat analysis are constantly plugged in to multiple news and information sources. When a threat occurs somewhere in the world, capturing and analyzing the information being received from your sources can be a daunting task.

One way to organize this information is to set up an automated process for capturing and time lining the relevant data for real time analysis. In this article I am going to discuss one of the methods for capturing information from multiple news sources and setting up an import specification to bring the data into a visualization utilizing link analysis software and Microsoft desktop components.

Capturing Data:

I am going to start with two different news feeds, one from ABS-CBN in Southeast Asia and another from BBC in the United Kingdom. Ultimately you can set up RSS feed captures on as many news sources as you like.

The first step is to capture the xml schema from the news source RSS feeds. Really Simple Syndication is a family of Web feed formats used to publish frequently updated works—such as blog entries, news headlines, audio, and video—in a standardized format. An RSS document (which is called a "feed", "web feed", or "channel") includes full or summarized text, plus metadata such as publishing dates and authorship. Almost all news sources incorporate RSS feeds in their site allowing users to subscribe to the feed.

Let's start by capturing the schema from ABS-CBN and bringing the data into Microsoft Excel. First I am going to locate the RSS page on ABS-CBN's web site. As you see below, the RSS location is found on the RSS logo:

I am going to click on the logo to bring up the RSS page for ABS-CBN. Once I have landed on the ABS-CBN RSS page located at http://www.abs-cbnnews.com/rss.xml, I view the page source by clicking View on my browser's menu and selecting "View Page Source". This brings up the XML schema for the RSS feed page, which I am going to copy and paste into Microsoft Excel to bring the page's data into a spreadsheet, allowing me to set up an import specification for my visualization software.

The XML schema contains the data source for the news feed along with column and delimiter information that Excel can use to organize the information in a spreadsheet. Once I have this page open with the schema viewed, I am going to select Save in the XML window and save the schema to my local computer.

My next step is to open Microsoft Excel and bring the XML schema into my spreadsheet. I accomplish this by going to Excel, selecting the Data tab, then selecting "Other Data Sources" from the source menu. Then I am going to select "From XML Data Import" from the Other Data Sources sub-menu.

Once the XML data source is selected, Microsoft Excel is going to prompt for an XML file location as the data source. I am going to select the ABS CBN.xml file I saved from my browser. Once I have selected the XML schema file, Excel is going to import the data into a spreadsheet. Keep in mind that every news source has a different RSS schema and displays different information, but for the most part all show the basic information that we are going to utilize for our timeline.
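For those who would rather script the capture than paste schemas into Excel, the same extraction can be done with Python's standard library. The inline XML below stands in for the rss.xml file saved from the news site; the standard RSS item fields (title, link, pubDate) are what most feeds provide.

```python
import xml.etree.ElementTree as ET

# Stand-in for a saved rss.xml file.
rss = """<rss version="2.0"><channel><title>Example News</title>
<item><title>Story one</title><link>http://example.com/1</link>
<pubDate>Mon, 02 Mar 2009 08:00:00 GMT</pubDate></item>
<item><title>Story two</title><link>http://example.com/2</link>
<pubDate>Mon, 02 Mar 2009 09:30:00 GMT</pubDate></item>
</channel></rss>"""

root = ET.fromstring(rss)
# One spreadsheet-style row per <item> in the feed.
rows = [
    {"title": item.findtext("title"),
     "link": item.findtext("link"),
     "pubDate": item.findtext("pubDate")}
    for item in root.iter("item")
]
print(rows)
```

The resulting rows carry the same summary, link, and posting date columns the Excel import produces, ready for an import specification.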

In this particular case, ABS-CBN provides several fields we can utilize: a one-sentence summary of the news article, a publication date and time the story was posted, and a source, which is going to be our theme line identity for the timeline. The only thing lacking from ABS-CBN's RSS feed is a narrative column, which BBC provides; this column is used in the description field of the visual import to provide some additional information aside from the summary.

I put the ABS-CBN XML into tab 1 of my spreadsheet, and now I am going to navigate to BBC and follow the same steps from above, placing the XML into tab 2 of the same workbook. From the example below you will notice that BBC uses a different column setup but is similar to the ABS-CBN data; however, it adds the description information.

We have now captured two different news sources that we are going to import into visualization software to create a timeline based on the information being provided. Keep in mind you can create as many XML feeds in Excel, for as many different sources as you like, and refresh the entire workbook at one time.

Importing RSS Data Into Visualization Software:

Now that I have set up my RSS feeds in Microsoft Excel, I am going to save my workbook on my local computer and begin setting up an import specification to bring the data into my visualization software, creating a time-bound theme line for the news stories.

In real life, I would be capturing data on a major event, such as a bombing, for specific incident or threat analysis. Regretfully, today everyone is behaving themselves, so we are going to be importing garden-variety news stories into a theme line, but this should give you a good example of the process for leveraging and organizing news information for threat analysis.

I am going to open up i2 and begin building my import specification. This specification is going to differ from the others we have discussed in previous articles as the layout and format is going to revolve around time binding events for a theme line.

When I select my RSS feed Excel workbook as the data source for my import, you can see that ABS-CBN is located on the first tab and BBC is located on the second. We are going to use both tabs; however, since every news site has a different XML schema, and thus different columns, the import spec is going to be different for each news source. The good news is, once an import spec is established for a specific news feed, you can reuse that import spec over and over again for new postings and imports.

Even though the data captured and the columns may differ, the columns used in the import are going to be fairly standard. The first change from the other imports is that I am selecting the theme line layout for events as my import layout.

Next I am going to examine my columns and data to determine where to place them in my theme line. For both the ABS-CBN data and the BBC data, the posting date and time is going to be assigned to the date and time in my import specification. Also in both, the source is going to be assigned to the theme line entity identity. This is going to create a new theme line for each news source and link the appropriate stories to the appropriate source.

Next I need to set up the identity of my event frame. Just like every entity, the identity of the event frame needs to be a unique identifier for each story so that each change in the story will create a new event frame. I am going to assign the one-line description in the news feed as the identity. As news articles will replicate when imported into Excel, this will ensure that only one event frame is created for each unique news story.
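The deduplication behavior of the event frame identity can be sketched in a few lines: each unique one-line description yields exactly one event frame, keeping the first posting seen. The story rows are illustrative.

```python
# Stand-in feed rows, with a replicated story as happens on refresh.
stories = [
    {"summary": "Storm hits coast", "pubDate": "2009-03-02 08:00", "source": "ABS-CBN"},
    {"summary": "Storm hits coast", "pubDate": "2009-03-02 09:00", "source": "ABS-CBN"},
    {"summary": "Markets rally",    "pubDate": "2009-03-02 10:00", "source": "BBC"},
]

event_frames = {}
for story in stories:
    # The one-line summary acts as the event frame's unique identity;
    # repeated stories do not create a second frame.
    event_frames.setdefault(story["summary"], story)

print(list(event_frames))
```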

For the BBC news feed, the RSS XML includes a story description which contains several lines of information about the story. ABS-CBN also incorporates a description, but embeds pictures and video in its feed, which cannot be imported by Excel. So for ABS-CBN I am going to use the hyperlink to the article in the description, which is going to allow me to click on my chart and go to the story to see the full news description.

Another standard field included in most news organizations' RSS feeds is a category field, which is a very valuable field to bring into my visualization, allowing me to search within my chart for specific categories. This is very important if multiple events are occurring in different parts of the world that you are analyzing. Each site interprets the category differently; in the case of BBC, the category refers to the geographic location of the story, such as Asia, Africa or Europe. I am going to bring in the category field as an attribute of my event frame.

This is going to create a searchable attribute field on my visualization that I can use to isolate news stories to a certain area or event.

It is now time to run my import specification and see what we have. On a side note, I have been creating import specifications for 15 years and anytime I am creating a new specification from scratch I still rarely get it on the first try. I also constantly revisit old specifications and find new ways to improve on them. Even though you can save and reuse specs in your software, review them from time to time and see if there is any way to improve on them.

I am going to execute both my BBC and ABS-CBN import specification and bring them into the same chart.

As you can see, each news feed differs slightly in its import specification, but both clearly organize news stories into a timeline for analysis. Each theme line is based on the news source, and the category attribute can be used to isolate specific stories from each theme line. This makes the process of news collection for threat analysis much easier and more organized.

At any point I can refresh each theme line in my chart with new news stories in a two-step process.

First I am going to update my Microsoft Excel spreadsheet containing the data by opening the workbook, selecting the data tab and selecting "refresh all" from the data sub menu. This is going to ping the data source in the XML schema and bring in any new data to each of the tabs from the corresponding news sources.

Next I am going to open my theme line chart, open my saved import specifications and run my BBC and ABS-CBN import spec again bringing in the new data to my theme line.

This is where keeping all of your RSS news feeds in the same workbook comes in handy. In Excel you can refresh every RSS feed at one time, saving time when you need to bring in new data. For those who are good at writing macros in Excel, you can set up an automatic refresh within Excel to pull in new data as well.

The best way to learn and refine your ability to visualize news stories is to experiment with different news sources and theme line layouts. There really is no one right way of accomplishing this task, but in the end just remember to follow some basic rules for event analysis:

  • Ensure each theme line is assigned to identify the news source, otherwise you are not going to be able to refer back to the source or know where a specific story originated from.
  • Ensure you create a unique identity for each event frame based on the one line description from the feed. Otherwise you may either miss a story or replicate existing stories in your import.
  • Ensure that you use the post date and time as your date and time fields in your import to properly time line events. RSS feeds have several date fields in them such as publication date or posting date but you want to ensure you capture the stories event date. Review the data and find the appropriate field which indicates the date and time the story occurred, every RSS feed from news organizations contains one.
As always if you have any questions regarding this article or any other please feel free to write me at linkanalysis@gmail.com.

Visualizing Financial Information

Performing anti-money laundering or financial investigations can be daunting depending on the number of accounts and transactions you have to review. In general, those involved in money laundering, embezzlement or financial fraud bury important transactions under thousands of ambiguous ones to mask their activity.

Visual analysis can assist you in finding key transactions for your investigation by identifying transaction flow and account relationships. Visually, you can follow the flow of a commodity much more easily than with a manual method.

The relationships between accounts and the entities associated with those accounts are also important in identifying key players in financial investigations. These associations create "clusters" of interrelated entities in your data, and just as in many other criminal activities that are visually analyzed, these clusters are indicators of non-standard activity.

If I were to perform a visual analysis on my own bank information (which I am not certain could be construed as normal, but for the purpose of this article we will assume it is), you would see a transaction flow of payments going out to my auto loan, my home loan and utilities. You would see money flowing in from my salary, interest from investments and the like. In a visualization, perhaps it would look something like this.

Now let's pretend I am a drug dealer and have to launder the proceeds of my crime in order to avoid detection. First, I am going to have more than the one checking account, one investment account and one savings account I have now. I am going to need multiple accounts in order to wash the money I am making. Probably one of these accounts is going to be outside the U.S., and the accounts are not going to be under the same names and will use a variety of identifying information.

Aside from the number of accounts, when I visualize this information, my transactions are going to behave very differently. Instead of outbound transactions like my house payment and inbound transactions like my paycheck, there are going to be transactions flowing bilaterally through all of the accounts. Instead of seeing one directional link from my checking account to my savings account for my vacation fund, we are going to see multiple transactions flowing between each of the accounts I own.
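That bilateral pattern can be detected programmatically before anything is charted. A minimal sketch: flag account pairs with money flowing in both directions, with invented transactions.

```python
from collections import defaultdict

# (source account, destination account, amount) -- illustrative values.
txns = [
    ("ACCT-A", "ACCT-B", 9000),
    ("ACCT-B", "ACCT-A", 8500),
    ("ACCT-A", "ACCT-C", 1200),   # one-way, like a normal bill payment
]

# Record which directions exist for each unordered account pair.
directions = defaultdict(set)
for src, dst, amount in txns:
    pair = tuple(sorted((src, dst)))
    directions[pair].add((src, dst))

# Pairs with both directions present are the bilateral-flow candidates.
bilateral = [pair for pair, dirs in directions.items() if len(dirs) == 2]
print(bilateral)
```

One-way relationships like a bill payment fall out of the list; the pairs that remain are where the visualization should focus first.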

The key to visualizing financial information to discover criminal activity is to follow the money. As this is the objective for visual analysis, our data and import specification is going to be a bit different from other types of criminal analysis.

One of the main differences in importing financial data is that links are going to have their own identities, usually the transaction number (or DUNS number when dealing with electronic transactions). I am going to create a new directional link line for each transaction between two accounts in order to analyze the flow of commodities between the accounts.
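The idea of links carrying their own identities can be sketched in a few lines of Python. The point is that keying each link by its transaction number keeps parallel transactions between the same two accounts distinct, instead of collapsing them into a single line (all identifiers below are made up for illustration):

```python
# A minimal sketch of "links with their own identities": each transaction
# number becomes one directed link record.
links = {}

def add_link(txn_id, orig_acct, dest_acct, amount):
    # txn_id is the link's identity (transaction or clearinghouse reference number)
    links[txn_id] = {"from": orig_acct, "to": dest_acct, "amount": amount}

add_link("TXN-001", "111-222", "333-444", 9500.00)
add_link("TXN-002", "111-222", "333-444", 9400.00)  # same account pair, separate link

# Both transactions between the same accounts survive as individual links.
print(len(links))  # 2
```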

Starting with financial data in an Excel spreadsheet, I am going to begin setting up my import specification.

Data Review:

Visual analysis needs all the information it can get its hands on in order to better represent the activity. In financial analysis this is key to producing unique identifiers for accounts, account holders and transactions. When obtaining or downloading data for import, it is extremely important to capture all elements available in the transaction flow.

The following is an example of a download of transactional data for several accounts. Note that this data is randomized, but the fields are a fairly accurate representation of what is captured. While I am analyzing multiple accounts, I have merged the information from each account into one spreadsheet for import. The accounts conduct electronic transactions, so the data I have captured represents the international clearinghouse data fields required for electronic transactions.

The fields include:

  • The account and routing numbers of both the originating and destination accounts, for my account entities
  • Transaction or DUNS numbers for the electronic transactions, for my link identifiers
  • The bank name, derived from the routing number
  • The account holders of the originating and destination accounts, received from the financial institution, including unique identifiers such as SSN, for my account holder entities
  • The transaction amounts, dates and times
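To make the shape of the merged spreadsheet concrete, here is a small Python sketch that reads one row of such data. The column names and values are hypothetical; a real export's headers will vary by institution, but the fields mirror those listed above:

```python
import csv
import io

# Hypothetical header and one randomized row mirroring the captured fields.
sample = io.StringIO(
    "orig_routing,orig_account,dest_routing,dest_account,txn_id,"
    "bank_name,orig_holder,orig_ssn,dest_holder,dest_ssn,amount,date,time\n"
    "021000021,1001,026009593,2002,TXN-001,"
    "Example Bank,John Doe,123-45-6789,Jane Roe,987-65-4321,9500.00,2009-03-01,14:05\n"
)

rows = list(csv.DictReader(sample))
# Each row carries everything needed for the entities and the link identity.
print(rows[0]["txn_id"], rows[0]["amount"])  # TXN-001 9500.00
```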

The import specification for this data is going to be a bit different from other visualizations I have covered, because instead of simply confirming relationships between entities, I need to visualize the actual activity between each entity.

Visualization Import:

I will start with the easiest entities to set up: the originating and destination account holders. We have captured the account holder name and social security number in our financial data, which will create the unique identifiers for those entities.

Next we move on to the originating and destination accounts, which are linked to the account holders. For these entities we have captured the routing and account numbers, which will create the unique identifier for each account entity.

Finally, we create the identifiers for the links. As opposed to some of the other analyses we have done, such as insurance fraud, we are going to create multiple links between accounts based on each unique transaction number. In essence, we are giving the link line an identity to ensure that each transaction is visually represented by its own link and carries the corresponding information we want to display.
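The three identifier schemes just described (holders keyed by name plus SSN, accounts by routing plus account number, links by transaction number) can be sketched as simple key-building functions. The formats below are my own illustration, not the import software's syntax:

```python
def holder_id(name, ssn):
    # Account holder entity: name plus SSN stays unique even when
    # different holders share a name.
    return f"HOLDER|{name}|{ssn}"

def account_id(routing, account):
    # Account entity: routing + account number is unique across institutions.
    return f"ACCT|{routing}|{account}"

def link_id(txn_id):
    # Link identity: one link line per transaction number.
    return f"LINK|{txn_id}"

print(account_id("021000021", "1001"))  # ACCT|021000021|1001
```

Building keys this way means two accounts with the same account number at different banks, or two John Does with different SSNs, never merge into one entity on the chart.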

Under multiplicity of connections I have selected "multiple" and assigned the transaction number field to the link label. I have also included the transaction date in the link label and assigned the transaction amount to the description on the link line so that I can follow the transactions by amount.

Now it's time to see what my data looks like in the visualization. I execute the import into the visualization software and look at the initial representation.

The visualization has established clusters of interrelated accounts and transactions, placing the largest on the left and working its way down to the smaller clusters on the right. The more interrelated the accounts, the larger the cluster and the more unusual the activity it represents. Normal financial accounts do not have multiple cross-directional relationships to each other, so I am going to focus on the largest cluster.
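The largest-first clustering the software performs is essentially a connected-components grouping. As a rough Python sketch of what happens under the hood (the edge data here is invented, and the real software's layout algorithm is certainly more sophisticated):

```python
from collections import defaultdict

def clusters(edges):
    """Group accounts into connected clusters, sorted largest-first,
    mirroring the left-to-right layout described above."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, out = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:  # depth-first walk collects one component
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        out.append(comp)
    return sorted(out, key=len, reverse=True)

# Hypothetical links: a three-account ring plus an unrelated pair.
edges = [("A", "B"), ("B", "C"), ("C", "A"), ("X", "Y")]
print([len(c) for c in clusters(edges)])  # [3, 2]
```

The first (largest) component returned is the one worth examining first, for exactly the reason given above: heavily interrelated accounts are the anomaly.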

Here we can see how multiple accounts are sending large amounts of money among themselves, eventually making their way back to host accounts; a scenario which is typical of money laundering activity, or at least Filipino politicians.

From my visualization I am now able to easily track the flow of commodity between accounts based on transaction amount and date. I can see which accounts are acting as pivot accounts and which ones are actually holding money, which simplifies my investigation. One of the things we forget in analysis is that we can prioritize which accounts and holders to look at based on a review of their activity.
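The pivot-versus-holding distinction can also be quantified: a pivot account moves a lot of money but ends up near zero, while a holding account accumulates a large positive balance. Here is a hedged Python sketch of that summary over invented transactions:

```python
from collections import defaultdict

def flow_summary(txns):
    """Per-account net change and total throughput.

    Pivot accounts: high throughput, net near zero (money passes through).
    Holding accounts: large positive net (money stays)."""
    net = defaultdict(float)
    throughput = defaultdict(float)
    for orig, dest, amount in txns:
        net[orig] -= amount
        net[dest] += amount
        throughput[orig] += amount
        throughput[dest] += amount
    return dict(net), dict(throughput)

# Hypothetical transactions: PIVOT forwards everything it receives to HOLD.
txns = [
    ("A", "PIVOT", 10000.0),
    ("PIVOT", "HOLD", 10000.0),
    ("B", "HOLD", 5000.0),
]
net, thru = flow_summary(txns)
print(net["PIVOT"], net["HOLD"])  # 0.0 15000.0
```

Sorting accounts by throughput, then inspecting those whose net is near zero, is one simple way to prioritize which entities on the chart to investigate first.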

This particular import focused on a money laundering investigation, but the visualization principles are similar for the majority of financial fraud investigations, such as embezzlement, internal fraud, tax evasion and the like.

Detailing all of this information in an article is challenging, so if anyone wishes to see the entire demonstration chart for financial analysis, I would be glad to forward it; just drop an email to linkanalysis@gmail.com.

(All Data Used In This Example Is Randomly Generated With No Relation To Actual Individuals Or Companies)