
Sunday, October 27, 2013

Selenium Tips: CSS Selectors in Selenium Demystified


Following my previous TOTW about improving your locators, this blog post will show you some advanced CSS rules and pseudo-classes that will help you move your XPATH locators to CSS, a native approach on all browsers.
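To make the goal concrete, here is the kind of translation this post is about (an illustrative pair of mine, using the form example below):

XPath: //form//input[@name='username']
CSS:   css=form input[name='username']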

Next sibling

Our first example is useful for navigating lists of elements, such as forms or ul items. The next-sibling selector tells Selenium to find the next adjacent element on the page that’s inside the same parent. Let’s show an example using a form to select the field after username.
 
<form>
<input class="username"></input>
<input class="alias"></input>
</form>
Let’s write a CSS selector that will choose the input field after "username". This will select the "alias" input, or a different element if the form is reordered.

css=form input.username + input
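As a side note, the css= prefix is Selenium IDE/RC syntax; Selenium WebDriver takes the bare selector string. A minimal C# sketch, assuming a running IWebDriver named driver:

using OpenQA.Selenium;

// Find the input that immediately follows the username field
// (the "alias" input in the form above).
IWebElement alias = driver.FindElement(By.CssSelector("form input.username + input"));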

Attribute values

If you don’t care about the ordering of child elements, you can use an attribute selector in Selenium to choose elements based on any attribute value. A good example would be choosing the ‘username’ element of the form without adding a class.
 
<form>
<input name="username"></input>
<input name="password"></input>
<input name="continue" type="button"></input>
<input name="cancel" type="button"></input>
</form>
We can easily select the username element without adding a class or an id to the element.
 
css=form input[name='username']
We can even chain filters to be more specific with our selections.
 
css=input[name='continue'][type='button']
Here Selenium will act on the input field with name="continue" and type="button".

Choosing a specific match

CSS selectors in Selenium allow us to navigate lists with more finesse than the above methods. If we have a ul and we want to select its fourth li element without regard to any other elements, we should use nth-child or nth-of-type.
<ul id="recordlist">
<p>Heading</p>
 
    <li>Cat</li>
    <li>Dog</li>
    <li>Car</li>
    <li>Goat</li>
 
</ul>
If we want to select the fourth li element (Goat) in this list, we can use the nth-of-type, which will find the fourth li in the list.
  
css=ul#recordlist li:nth-of-type(4)
On the other hand, if we want to get the fourth element only if it is a li element, we can use a filtered nth-child which will select (Car) in this case.
 
css=ul#recordlist li:nth-child(4)
Note, if you don’t specify a child type for nth-child it will allow you to select the fourth child without regard to type. This may be useful in testing CSS layout in Selenium.
 
css=ul#recordlist *:nth-child(4)

Sub-string matches

CSS in Selenium has an interesting feature of allowing partial string matches using ^=, $=, or *=. I’ll define them, then show an example of each:
^=  Match a prefix
$=  Match a suffix
*=  Match a substring
 
css=a[id^='id_prefix_']
A link with an "id" that starts with the text "id_prefix_"
 
css=a[id$='_id_suffix']
A link with an "id" that ends with the text "_id_suffix"
 
css=a[id*='id_pattern']
A link with an "id" that contains the text "id_pattern"

Matching by inner text

And last, one of the more useful pseudo-classes, :contains() will match elements with the desired text block:
  
css=a:contains('Log Out')
This will find the log out button on your page no matter where it’s located. This is by far my favorite CSS selector and I find it greatly simplifies a lot of my test code.
Tune in next week for more Selenium Tips from Sauce Labs.
Sauce Labs created Sauce OnDemand, a Selenium-based testing service that allows you to test across multiple browsers in the cloud. With Selenium IDE and Selenium RC compatibilities, you can get complete cross-platform browser testing today.

Taken for my personal notepad from http://sauceio.com/index.php/2010/01/selenium-totw-css-selectors-in-selenium-demystified/; all credit goes there

Selenium CSS locators tutorial with example

As you know, locators are core elements in Selenium, and the CSS locator is an alternative to the XPath, ID, Name, and other element locators. CSS stands for "Cascading Style Sheets", and it defines how HTML elements are displayed on a webpage. There are a few advantages and also a few disadvantages to using CSS element locators in place of XPath element locators in Selenium.
CSS Locators Main Advantage
The main advantage of using CSS locators is that they are much faster and simpler than XPath locators in IE, and they are also more readable. CSS locators are also a little faster than XPath locators in other browsers.

Now let me come to the main point - how to write CSS locator syntax manually for Selenium. I have derived a couple of CSS locator syntaxes with examples below, written for three elements of the Wikipedia home page: the search text box, the language drop-down, and the "Go" button.
CSS locator Examples
1. Selenium CSS locator using Tag and any Attribute
css=input[type=search]
// This syntax will find an "input" tag node which contains the "type=search" attribute.
css=input[id=searchInput]
// This syntax will find an "input" tag node which contains the "id=searchInput" attribute.
css=form input[id=searchInput]
// This syntax will find a form containing an "input" tag node which contains the "id=searchInput" attribute.
(All three CSS path examples given above will locate Search text box.)
2. Selenium CSS locator using Tag and ID attribute
css=input#searchInput
// Here, the '#' sign is used for the "id" attribute only. It will find an "input" tag node which contains the "id=searchInput" attribute. This syntax will locate the Search text box.
3. Selenium CSS locator using Tag and class attribute
css=input.formBtn
// Here, '.' is used for the "class" attribute only. It will find an "input" tag node which contains the "class=formBtn" attribute. This syntax will locate the Search button (Go).
4.  Selenium CSS locator using tag, class, and any attribute
css=input.formBtn[name=go]
// It will find an "input" tag node which contains the "class=formBtn" class and the "name=go" attribute. This syntax will locate the Search button (Go).
5. Tag and multiple Attribute CSS locator
css=input[type=search][name=search]
// It will find an "input" tag node which contains the "type=search" attribute and the "name=search" attribute. This syntax will locate the Search text box.
6. CSS Locator using Sub-string matches(Start, end and containing text) in selenium
css=input[id^='search']
// It will find an input node whose 'id' attribute starts with the text 'search'. (Here, ^ denotes the starting text.)
css=input[id$='chInput']
// It will find an input node whose 'id' attribute ends with the text 'chInput'. (Here, $ denotes the ending text.)
css=input[id*='archIn']
// It will find an input node whose 'id' attribute contains the text 'archIn'. (Here, * denotes the contained text.)
(All three CSS path examples given above will locate Search text box.)
7. CSS Element locator syntax using child Selectors
css=div.search-container>form>fieldset>input[id=searchInput]
// First it will find the div tag with "class=search-container" and then it will follow the remaining path to locate the child node. This syntax will locate the Search text box.
8. CSS Element locator syntax using adjacent selectors
css=input + input
// It will locate an "input" node which is immediately preceded by another "input" node on the page (the search text box).
css=input + select
or
css=input + input + select
// It will locate a "select" node which is immediately preceded by an "input" node on the page (the language drop-down).

9. CSS Element locator using contains keyword
css=strong:contains("English")
// It looks for the element containing the text "English" as a value on the page.
Snippet taken from: http://software-testing-tutorials-automation.blogspot.com/2013/06/selenium-css-locators-tutorial-with.html

Thursday, October 24, 2013

Local Area Network: How to fix slow LAN file transfer speed in Windows 7 (also works on Windows 8)

Recently I had to solve a problem of very slow file transfers between two computers on a LAN network using an Ethernet cable. Both machines had Windows 7 x64 installed and the transfer speed was ridiculously slow at 10-15 KB/s. Under Task Manager's Networking tab, Network Utilization was showing only around 0.25% for the Local Area Connection.
I looked around the web for solutions and found quite a few suggestions on how to tackle this problem. Those that I tried, and the one that finally solved my problem, are discussed here.



Turning off “Remote Differential Compression”

One of the first suggestions that I came across was to turn off this Windows Feature in Windows 7.
This suggestion is common on the web but it turns out to be just a myth.
From TechNet:
This is 100% false. Neither Windows Update or file copy operations use RDC at all. 
So I ignored this suggestion and continued looking.


Disabling “TCP Auto-Tuning”

This is another common suggestion that I came across; it uses the NETSH command-line utility for displaying and modifying the network configuration. To make the necessary changes, we need to run that utility as an Administrator.
  1. Open Command Prompt as Administrator:
    • Click on Start Menu
    • Type Command in search box
    • Command Prompt will show up in results. Right-click on it to open Context Menu
    • Select Run as administrator
    • If User Account Control Window shows up asking if you want to allow the following program to make changes, select Yes
  2. Type: netsh interface tcp set global autotuning=disabled
  3. Restart the computer
  4. To verify that auto-tuning is disabled, type in Command Prompt:
    netsh interface tcp show global
This suggestion still didn’t solve my problem, so I looked further; but before doing that, I wanted to set auto-tuning back to the default value by typing this in the Command Prompt (running as an Administrator):
netsh interface tcp set global autotuning=normal

Disabling “Large Send Offload (LSO)”

Large Send Offload is a technique for improving network performance while at the same time reducing CPU overhead. Apparently it does not work very well, so it was suggested to disable it.
LSO is an option located in a Device Manager under your network adapter, so this solution requires Administrator Privileges.
Follow these steps:
  1. Open Start Menu, right-click on Computer and select Properties
  2. Under Control Panel Home located on the left side of the window click on Device Manager
  3. You will get a list of all devices on your machine. Expand Network Adapters.
  4. Find your Network Card and double-click on it.
  5. Select Advanced tab. You will get a list filled with different options.
  6. Select Large Send Offload V2 (IPv4) and set the value to Disabled
  7. Do the same for Large Send Offload V2 (IPv6) if it is available
  8. Click OK
After clicking OK, I tried to send a file over the LAN network. The transfer speed started very slow, but it was gradually picking up speed. I decided to restart the computer and try to send that file again and this time it worked like a charm.
Now that sending files worked as it should, I also checked the speed for receiving files. It turned out that it was still slow, but all I had to do to fix that was to disable Large Send Offload V2 on the other computer. Once done, the problem was solved for receiving files as well.
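If you prefer not to click through Device Manager, the NetAdapter PowerShell module on Windows 8 / Server 2012 can flip the same setting - a sketch of mine, not from the original post, assuming an adapter named "Ethernet" (list yours with Get-NetAdapter):

# Disable Large Send Offload for both IPv4 and IPv6 on the adapter
Disable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6
# Verify the current LSO state
Get-NetAdapterLso -Name "Ethernet"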

Conclusion

In this post we examined different ways to solve slow speeds on a LAN network. One of them is just a common myth, but the other two require administrator privileges. I hope you found this article useful. Consider sharing it on social networks. Comments are also welcome.
If you solved your slow LAN speed problem in a different way, let me know how and I might add that solution to the list.

Taken from:
http://www.howtosolutions.net/2013/06/fixing-slow-sending-or-receiving-of-files-through-lan-network-using-windows/


Sunday, October 13, 2013

Parsing JSON using Json.NET in Visual Studio 2008 (C#, .NET 3.5, LINQ)



Based on this:
http://json.codeplex.com/
and this:
http://james.newtonking.com/json/help/index.html?topic=html/ParseJsonObject.htm
and this Stack Overflow question:
http://stackoverflow.com/questions/9107216/print-select-value-from-json-object


Here is the code...
// Requires Json.NET (Newtonsoft.Json); add: using Newtonsoft.Json.Linq;
// 'txtfile' is the path to a text file containing the JSON.
using (StreamReader sr = new StreamReader(txtfile))
{
    String line = sr.ReadToEnd();
    //MessageBox.Show(line);
    JObject stuff = JObject.Parse(line);
    if (stuff != null)
    {
        //JToken response = stuff["results"]; // parsing response
        JArray venues = (JArray)stuff["results"];  // top-level "results" array
        JValue names = (JValue)venues[0]["title"]; // "title" of the first item
        MessageBox.Show(names.ToString());
    }
}
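For reference, the code above expects JSON shaped roughly like this (the field names come from the code; the values are made up):

{ "results": [ { "title": "First venue" }, { "title": "Second venue" } ] }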
                

Wednesday, March 20, 2013

A Better Razor Foreach Loop

Yesterday, during my ASP.NET MVC 3 talk at Mix 11, I wrote a useful helper method demonstrating an advanced feature of Razor, Razor Templated Delegates.
There are many situations where I want to quickly iterate through a bunch of items in a view, and I prefer using the foreach statement. But sometimes, I need to also know the current index. So I wrote an extension method to IEnumerable<T> that accepts Razor syntax as an argument and calls that template for each item in the enumeration.
public static class HaackHelpers {
  public static HelperResult Each<TItem>(
      this IEnumerable<TItem> items, 
      Func<IndexedItem<TItem>, 
      HelperResult> template) {
    return new HelperResult(writer => {
      int index = 0;

      foreach (var item in items) {
        var result = template(new IndexedItem<TItem>(index++, item));
        result.WriteTo(writer);
      }
    });
  }
}
This method calls the template for each item in the enumeration, but instead of passing in the item itself, we wrap it in a new class, IndexedItem<T>.
public class IndexedItem<TModel> {
  public IndexedItem(int index, TModel item) {
    Index = index;
    Item = item;
  }

  public int Index { get; private set; }
  public TModel Item { get; private set; }
}
And here’s an example of its usage within a view. Notice that we pass in Razor markup as an argument to the method which gets called for each item. We have access to the direct item and the current index.
@model IEnumerable<Question>

<ol>
@Model.Each(@<li>Item @item.Index of @(Model.Count() - 1): @item.Item.Title</li>)
</ol>
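For a model with three questions (titles invented for illustration), this renders roughly:

<ol>
<li>Item 0 of 2: What is Razor?</li>
<li>Item 1 of 2: What are templated delegates?</li>
<li>Item 2 of 2: How do helpers work?</li>
</ol>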
If you want to try it out, I put the code in a package in my personal NuGet feed for my code samples. Just connect NuGet to http://nuget.haacked.com/nuget/ and Install-Package RazorForEach. The package installs this code as source files in App_Code.
UPDATE: I updated the code and package to be more efficient (4/16/2011).

Taken without any change from here: http://haacked.com/archive/2011/04/14/a-better-razor-foreach-loop.aspx

Saturday, February 23, 2013

Creating a Multi Page PDF from a TIFF | TIFF to PDF Converter


I've been working on a project recently that had a requirement to do tiff to pdf conversion on the fly, and serve these pdfs over the web. The added wrinkle was that these tiff files were stored in a database - so I wasn't going to be reading or writing from the filesystem. This isn't a huge problem, but it did throw 90% of the examples out of the window!

I opted to use PdfSharp to do the conversion - it's a really great open source library and did exactly what I needed.

So here we go:

snippet 1:
byte[] bytes = GetMyByteData();

using (MemoryStream memoryStream = new MemoryStream(bytes))
{
 // The stream is already populated from the byte array;
 // just make sure we read from the start.
 memoryStream.Position = 0;
 
 System.Drawing.Image image = System.Drawing.Image.FromStream(memoryStream, true, true);
 
 //This is where the next code goes!!
}


To start with I retrieved my data from the database into a byte array, wrapped it in a memory stream object, and finally created an Image object from the memory stream. Next, onto creating the pdf document:

snippet 2:
PdfDocument doc = new PdfDocument();
XGraphics xgr;

PdfPage page = new PdfPage();
doc.Pages.Add(page);
xgr = XGraphics.FromPdfPage(page);

XImage ximg = XImage.FromGdiPlusImage(image);
xgr.DrawImage(ximg, 0, 0);


As you can see from the code, this is where PdfSharp comes into play (I opted for the GDI+ version) - creating a PdfDocument, XGraphics object and PdfPage, and loading the image into the page. I guess the real magic here is using the XImage.FromGdiPlusImage method to load the in-memory image file into a pdf-writeable object.

Finally, I wrote this back to the response stream (in ASP.NET obviously!):

snippet 3:
using (MemoryStream responseStream = new MemoryStream())
{
 doc.Save(responseStream, false);
 responseStream.Position = 0;

 context.Response.ClearContent();
 context.Response.ClearHeaders();
 context.Response.BufferOutput = true;
 context.Response.ContentType = "application/pdf";
 context.Response.AddHeader("content-disposition", "inline;filename=mypdf.pdf");

 responseStream.CopyTo(context.Response.OutputStream);

 context.Response.Flush();
 context.Response.Close();
 context.Response.End();
}

doc.Close();


I won't go into too much detail about this, it's pretty straightforward stuff. For me the two things really worth mentioning are the doc.Save() method, which saves the pdf to a new memory stream, and the responseStream.CopyTo method, which copies one stream to another (new to .NET 4 I think!).

This all worked fine but there was one further complication - the TIFFs might be multi-page. With the code above, the pdf would only ever contain the first page. To overcome this I had to loop over the page frames and add a new pdf page for each one. This replaces snippet 2 with the following:

snippet 2(v2):
PdfDocument doc = new PdfDocument();
XGraphics xgr;

int count = image.GetFrameCount(FrameDimension.Page);
for (int pageNum = 0; pageNum < count; pageNum++)
{
 image.SelectActiveFrame(FrameDimension.Page, pageNum);

 PdfPage page = new PdfPage();
 doc.Pages.Add(page);
 xgr = XGraphics.FromPdfPage(page);

 XImage ximg = XImage.FromGdiPlusImage(image);
 xgr.DrawImage(ximg, 0, 0);
}


I was pleasantly surprised with how straight forward this was to achieve, and in particular how quickly it all worked.

Matt

Source: http://www.codenutz.com/2011/10/creating-multi-page-pdf-from-tiff-tiff.html

Wednesday, February 20, 2013

Using cURL in PHP to access HTTPS (SSL/TLS) protected sites

From PHP, you can access the useful cURL Library (libcurl) to make requests to URLs using a variety of protocols such as HTTP, FTP, LDAP and even Gopher. (If you’ve spent time on the *nix command line, most environments also have the curl command available that uses the libcurl library)
In practice, however, the most commonly-used protocol tends to be HTTP, especially when using PHP for server-to-server communication. Typically this involves accessing another web server as part of a web service call, using some method such as XML-RPC or REST to query a resource. For example, Delicious offers a HTTP-based API to manipulate and read a user’s posts. However, when trying to access a HTTPS resource (such as the delicious API), there’s a little more configuration you have to do before you can get cURL working right in PHP.

The problem

If you simply try to access a HTTPS (SSL or TLS-protected resource) in PHP using cURL, you’re likely to run into some difficulty. Say you have the following code: (Error handling omitted for brevity)
// Initialize session and set URL.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);

// Set so curl_exec returns the result instead of outputting it.
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    
// Get the response and close the channel.
$response = curl_exec($ch);
curl_close($ch);
If $url points toward an HTTPS resource, you’re likely to encounter an error like the one below:
Failed: Error Number: 60. Reason: SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
The problem is that cURL has not been configured to trust the server’s HTTPS certificate. The concepts of certificates and PKI revolve around the trust of Certificate Authorities (CAs), and by default, cURL is set up not to trust any CAs, thus it won’t trust any web server’s certificate. So why don’t you have problems visiting HTTPS sites through your web browser? As it happens, the browser developers were nice enough to include a list of default CAs to trust, covering most situations, as long as the website operator purchased a certificate from one of these CAs.

The quick fix

There are two ways to solve this problem. Firstly, we can simply configure cURL to accept any server (peer) certificate. This isn’t optimal from a security point of view, but if you’re not passing sensitive information back and forth, this is probably alright. Simply add the following line before calling curl_exec():
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
This basically causes cURL to blindly accept any server certificate, without doing any verification as to which CA signed it, and whether or not that CA is trusted. If you’re at all concerned about the data you’re passing to or receiving from the server, you’ll want to enable this peer verification properly. Doing so is a bit more complicated.

The proper fix

The proper fix involves setting the CURLOPT_CAINFO parameter. This is used to point towards a CA certificate that cURL should trust. Thus, any server/peer certificates issued by this CA will also be trusted. In order to do this, we first need to get the CA certificate. In this example, I’ll be using the https://api.del.icio.us/ server as a reference.
First, you’ll need to visit the URL with your web browser in order to grab the CA certificate. Then, (in Firefox) open up the security details for the site by double-clicking on the padlock icon in the lower right corner:

Then click on “View Certificate”:

Bring up the “Details” tab of the certificates page, and select the certificate at the top of the hierarchy. This is the CA certificate.



 Then click “Export”, and save the CA certificate to your selected location, making sure to select the X.509 Certificate (PEM) as the save type/format.




Now we need to modify the cURL setup to use this CA certificate, with CURLOPT_CAINFO set to point to where we saved the CA certificate file to.
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($ch, CURLOPT_CAINFO, getcwd() . "/CAcerts/BuiltinObjectToken-EquifaxSecureCA.crt"); 
The other option I’ve included, CURLOPT_SSL_VERIFYHOST, can be set to the following integer values:
  • 0: Don’t check the common name (CN) attribute
  • 1: Check that the common name attribute at least exists
  • 2: Check that the common name exists and that it matches the host name of the server
If you have CURLOPT_SSL_VERIFYPEER set to false, then from a security perspective, it doesn’t really matter what you’ve set CURLOPT_SSL_VERIFYHOST to, since without peer certificate verification, the server could use any certificate, including a self-signed one that was guaranteed to have a CN that matched the server’s host name. So this setting is really only relevant if you’ve enabled certificate verification.
This ensures that not just any server certificate will be trusted by your cURL session. For example, if an attacker were to somehow redirect traffic from api.delicious.com to their own server, the cURL session here would not properly initialize, since the attacker would not have access to a server certificate (i.e. would not have the private key) trusted by the CA we added. These steps effectively export the trusted CA from the web browser to the cURL configuration.

More information

If you have the CA certificate, but it is not in the PEM format (i.e. it is in a binary or DER format that isn’t Base64-encoded), you’ll need to use something like OpenSSL to convert it to the PEM format. The exact command differs depending on whether you’re converting from PKCS12 or DER format.
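For instance, these OpenSSL invocations cover the two cases (a sketch; the file names are placeholders):

# DER -> PEM
openssl x509 -inform DER -in cacert.der -out cacert.pem
# PKCS12 -> PEM (certificates only, no private keys)
openssl pkcs12 -in certs.p12 -out cacert.pem -nokeys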
There is a CURLOPT_CAPATH option that allows you to specify a directory that holds multiple CA certificates to trust. But it’s not as simple as dumping every single CA certificate in this directory. Instead, the CA certificates must be named properly, and the OpenSSL c_rehash utility can be used to properly set up this directory for use by cURL.
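Putting that together (the directory path is a placeholder of mine), you run c_rehash over the directory once, then point cURL at it instead of a single file:

c_rehash /path/to/ca-certs

curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_CAPATH, '/path/to/ca-certs');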

Taken from HERE : http://unitstep.net/blog/2009/05/05/using-curl-in-php-to-access-https-ssltls-protected-sites/

Logging in to HTTPS websites using PHP cURL



To log in to a HTTPS website using PHP cURL you need to do the following:

Enable cURL by uncommenting the line extension=php_curl.dll in your php.ini file.

Set up cURL to either accept all certificates or add the needed certificate authority to cURLs CA list (check out http://unitstep.net/blog/2009/05/05/using-curl-in-php-to-access-https-ssltls-protected-sites/)

Then you need to load the page to get the session cookie:
// Create temp file to store cookies
$ckfile = tempnam ("/tmp", "CURLCOOKIE");

// URL to login page
$url = "https://www.securesiteexample.com";

// Get Login page and its cookies and save cookies in the temp file
$ch = curl_init();
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // Accepts all CAs
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_COOKIEJAR, $ckfile); // Stores cookies in the temp file
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$output = curl_exec($ch); 


Now that you have the cookie, you can POST the login values (check the source of the login page to see if you need any other fields too):
$fields = array(
'username' => 'yourusername',
'password' => 'yourpassword',
);
$fields_string = '';
foreach($fields as $key=>$value) {
$fields_string .= $key . '=' . $value . '&';
}
// rtrim returns the trimmed string, so the result must be assigned
$fields_string = rtrim($fields_string, '&');

// Post login form and follow redirects
$ch = curl_init();
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // Accepts all CAs
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, count($fields));
curl_setopt($ch, CURLOPT_POSTFIELDS, $fields_string);
curl_setopt($ch, CURLOPT_COOKIEFILE, $ckfile); //Uses cookies from the temp file
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // Tells cURL to follow redirects
$output = curl_exec($ch); 
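As an aside, PHP's built-in http_build_query() could replace the manual loop that built $fields_string above - it also URL-encodes the values, which the loop does not:

$fields_string = http_build_query($fields);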
Now you should be able to access any pages within the password-restricted area by just including the cookies for each call:

$url = "https://www.securesiteexample.com/loggedinpage.html";
$ch = curl_init();
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // Accepts all CAs
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_COOKIEFILE, $ckfile); //Uses cookies from the temp file
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$output = curl_exec($ch);


Source from:
http://www.herikstad.net/2011/06/logging-to-https-websites-using-php.html

Tuesday, December 25, 2012

C#: Pulling images from Google Images (with proxy support)

 public string getHtmltt(string url)
    {

        string responseData = "";
        try
        {
            string host = string.Empty;

            if (url.Contains("/search?"))
            {
                host = url.Remove(url.IndexOf("/search?"));

                if(host.Contains("//"))
                {
                    host = host.Remove(0, host.IndexOf("//")).Replace("//","").Trim();
                }
            }
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Accept = "application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*";
            request.AllowAutoRedirect = true;
            request.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)";
            request.Timeout = 60000;
            request.Method = "GET";
            request.KeepAlive = false;


           // request.Host = "www.google.com.af";
            request.Host = host;
            request.Headers.Add("Accept-Language", "en-US");

            //request.Proxy = null;
           // WebProxy prx = new WebProxy("199.231.211.107:3128");

            // 'proxies' is assumed to be a class-level list of proxy
            // addresses ("host:port") populated elsewhere.
            WebProxy prx = new WebProxy(proxies[0].ToString().Trim());

            request.Proxy = prx;
            // Assign the cookie container so response.Cookies gets populated;
            // 'inCookieContainer' is assumed to be a class-level CookieContainer.
            request.CookieContainer = inCookieContainer;
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            if (response.StatusCode == HttpStatusCode.OK)
            {
                Stream responseStream = response.GetResponseStream();
                StreamReader myStreamReader = new StreamReader(responseStream);
                responseData = myStreamReader.ReadToEnd();
            }

            foreach (Cookie cook in response.Cookies)
            {
                inCookieContainer.Add(cook);
            }
            response.Close();



        }
        catch (System.Exception e)
        {
            responseData = "An error occurred: " + e.Message;

        }

        return responseData;

    }
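A sketch of how the method might be called (the search URL is my own example; the class-level proxies list and inCookieContainer must be initialized first):

string html = getHtmltt("http://www.google.com/search?q=kittens&tbm=isch");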

Saturday, December 8, 2012

[SOLVED] windows 8 no IPv4?

To install IPv4, run Command Prompt as an administrator and type:
netsh interface ipv4 install
Restart and you're done.
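As a quick sanity check afterwards (my addition, not part of the original tip), you can list the IPv4 interfaces:

netsh interface ipv4 show interfaces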

Wednesday, October 24, 2012

Functions to Sanitize Strings and Create ASCII Slugs

These are my functions for creating a slug and sanitizing a string...
Slug
function toSlug($string,$space="-") {
    if (function_exists('iconv')) {
        $string = @iconv('UTF-8', 'ASCII//TRANSLIT', $string);
    }
    $string = preg_replace("/[^a-zA-Z0-9 -]/", "", $string);
    $string = strtolower($string);
    $string = str_replace(" ", $space, $string);
    return $string;
}


function Slug($string)
{
    return strtolower(trim(preg_replace('~[^0-9a-z]+~i', '-', html_entity_decode(preg_replace('~&([a-z]{1,2})(?:acute|cedil|circ|grave|lig|orn|ring|slash|th|tilde|uml);~i', '$1', htmlentities($string, ENT_QUOTES, 'UTF-8')), ENT_QUOTES, 'UTF-8')), '-'));
}
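Example usage of the two slug helpers (inputs and expected outputs are my own illustrations):

echo toSlug('Hello, World!'); // "hello-world"
echo Slug('Crème brûlée & co'); // "creme-brulee-co"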


Sanitize
function cleanInput($input) {
 
  $search = array(
    '@<script[^>]*?>.*?</script>@si',   // Strip out javascript
    '@<[\/\!]*?[^<>]*?>@si',            // Strip out HTML tags
    '@<style[^>]*?>.*?</style>@siU',    // Strip style tags properly
    '@<![\s\S]*?--[ \t\n\r]*>@'         // Strip multi-line comments
  );
 
    $output = preg_replace($search, '', $input);
    return $output;
  }
  
 function sanitize($input) {
    if (is_array($input)) {
        foreach($input as $var=>$val) {
            $output[$var] = sanitize($val);
        }
    }
    else {
        if (get_magic_quotes_gpc()) {
            $input = stripslashes($input);
        }
        $input  = cleanInput($input);
        $output = mysql_real_escape_string($input);
    }
    return $output;
}