Sales Tel: +63 945 7983492  |  Email Us    
SMDC Residences

Air Residences

Features and Amenities

Reflective Pool
Function Terrace
Seating Alcoves


Green 2 Residences

Features and Amenities:

Wifi ready study area
Swimming Pool
Gym and Function Room


Bloom Residences

Features and Amenities:

Recreational Area
2 Lap Pools
Ground Floor Commercial Areas


Leaf Residences

Features and Amenities:

3 Swimming Pools
Gym and Fitness Center
Outdoor Basketball Court


Contact Us

Contact us today for a no-obligation quotation:


+63 945 7983492
+63 908 8820391

Copyright © 2018 SMDC :: SM Residences, All Rights Reserved.
000-103 dumps with Real exam Questions and Practice Test - smresidences.com.ph



000-103 AIX 6.1 Basic Operations

Study Guide Prepared by Killexams.com IBM Dumps Experts

Exam Questions Updated On :



Killexams.com 000-103 Dumps and Real Questions

100% Real Questions - Exam Pass Guarantee with High Marks - Just Memorize the Answers



000-103 exam Dumps Source : AIX 6.1 Basic Operations

Test Code : 000-103
Test Name : AIX 6.1 Basic Operations
Vendor Name : IBM
: 81 Real Questions

Weekend study was sufficient to pass the 000-103 exam.
There were many ways for me to reach my target of a high score in the 000-103, but I was not having the best luck with them. So I did the best thing for myself by stumbling onto the online 000-103 study help of killexams.com by mistake, and found that this mistake was a sweet one to be remembered for a long time. I scored well on my 000-103 exam, and that is all due to the killexams.com practice test which was available online.


Much less effort, great knowledge, assured success.
I am saying from my experience that if you work through the question papers one by one, then you will surely crack the exam. killexams.com has very powerful study material. Such a beneficial and helpful website. Thanks, Team killexams.


I truly experienced the 000-103 exam questions; there is nothing like this.
I got a good result with this bundle. Very good quality, questions are accurate and I got most of them on the exam. After I have passed it, I recommended killexams.com to my colleagues, and everyone passed their exams, too (some of them took Cisco exams, others did Microsoft, VMware, etc). I have not heard a bad review of killexams.com, so this must be the best IT training you can currently find online.


It is best to prepare for the 000-103 exam with up-to-date dumps.
The practice exam is excellent; I passed the 000-103 paper with a score of 100 percent. Well worth the cost. I will be back for my next certification. First of all, let me give you a big thanks for giving me prep dumps for the 000-103 exam. It was indeed helpful for the preparation of exams and also for clearing them. You won't believe that I got not a single answer wrong! Such comprehensive exam preparatory material is an excellent way to score high in exams.


No cheaper source of 000-103 found yet.
I answered several questions a day from this guide and scored an amazing 88% on my 000-103 exam. At that point, my colleague suggested I follow the Dumps guide of killexams.com as a quick reference. It carefully covered all the material through short answers that were helpful to remember. My subsequent progress obliged me to pick killexams.com for all my future tests. I had been wondering how to cover all of the material within three weeks' time.


Real exam questions of the 000-103 exam are awesome!
I am ranked very high among my classmates on the list of outstanding students, but it only happened once I registered with killexams.com for some exam help. It was the high-marks study software from killexams.com that helped me join the high ranks along with the other outstanding students of my class. The resources on killexams.com are commendable because they are precise and extremely useful for preparation through the 000-103 pdf, 000-103 dumps and 000-103 books. I am happy to write these words of appreciation because killexams.com deserves it. Thanks.


Forget everything! Just focus on these 000-103 questions.
It is the place where I sorted out and corrected all my errors in the 000-103 topic. When I searched for study material for the exam, I found that killexams.com is the top-class one and is among the reputed products. It helps you perform better on the exam than anything else. I was glad to discover that it was completely informative material for learning. It is ever-helpful supporting material for the 000-103 exam.


How much practice is needed for the 000-103 test?
I didn't plan to use any braindumps for my IT certification test, but being under pressure from the difficulty of the 000-103 exam, I ordered this package. I was impressed by the quality of these materials; they are really worth the money, and I believe they could cost more, that is how outstanding they are! I didn't have any trouble while taking my exam thanks to Killexams. I simply knew all the questions and answers! I got 97% with just a few days of exam preparation, besides having some work experience, which was certainly helpful too. So yes, killexams.com is genuinely good and highly recommended.


Can I find contact details of people who are 000-103 certified?
You need to ace your online 000-103 tests, and I have a pleasant and easy way of doing this, and that is killexams.com and its 000-103 test example papers, which are a real picture of the final 000-103 exam test. My percentage on the final test is 95%. killexams.com is a product for people who always want to move on in their lives and want to do something extraordinary. The 000-103 trial test has the ability to boost your confidence level.


Less effort, great knowledge, guaranteed success.
The killexams.com question bank was absolutely right. I cleared my 000-103 exam with 68.25% marks. The questions were really good. They keep updating the database with new questions. And guys, go for it - they never disappoint you. Thanks so much for this.


IBM AIX 6.1 Basic Operations

WebSphere vs. .NET: IBM and Microsoft Go Head to Head

After conducting several benchmarks, Microsoft concluded that .NET offers better performance and a better cost/performance ratio than WebSphere. IBM rebutted Microsoft's findings and performed different tests proving that WebSphere is superior to .NET. Microsoft responded by rejecting some of IBM's claims as false and repeating the tests on different hardware, with different results.

Summary

Microsoft has benchmarked .NET and WebSphere and published the benchmark source code, run rules, use guidelines and a findings report at wholoveswindows.com entitled Benchmarking IBM WebSphere 7 on IBM Power6 and AIX vs. Microsoft .NET on HP BladeSystem and Windows Server 2008. This benchmark shows a much larger transactions-per-second (TPS) rate and a better cost/performance ratio when using WebSphere 7 on Windows Server 2008 over WebSphere on AIX 5.3, and even better results when using .NET on Windows Server 2008 over WebSphere on the same OS. The cost/performance ratio for the application benchmark used is:

  IBM Power 570 with WebSphere 7 and AIX 5.3: $32.45
  HP BladeSystem C7000 with WebSphere 7 and Windows Server 2008: $7.92
  HP BladeSystem C7000 with .NET and Windows Server 2008: $3.99

IBM has rebutted Microsoft's benchmark, called some of its claims false, and performed a different benchmark, with different results. The benchmark used, together with the findings, was published in Benchmarking AND BEATING Microsoft's .NET 3.5 with WebSphere 7! (PDF). The source code of the benchmark was not published. The results show WebSphere as a better performing middle tier than .NET, with 36% more TPS for one application benchmark and from 176% to 450% better throughput for one of IBM's standard benchmarks.

Microsoft responded to IBM and defended its claims and benchmarking results with Response to IBM's Whitepaper Entitled Benchmarking and Beating Microsoft .NET 3.5 with WebSphere 7 (PDF). Microsoft also re-ran its benchmark, modified to include a different test flow corresponding to the one used by IBM in their tests, running it on different hardware, a single multi-core server, finding that WebSphere is indeed better than .NET when using IBM's test flow, but only slightly better, between 3% and 6%, not as reported by IBM. Besides that, these later findings do not change the original ones, since the benchmark was run on a different hardware configuration. In the end, Microsoft invites IBM to "an independent lab to perform additional testing".

Microsoft Testing .NET Against WebSphere

Microsoft conducted a series of tests comparing WebSphere/Java against .NET on three different platforms. The details of the benchmarks conducted and the test results were published in the whitepaper entitled Benchmarking IBM WebSphere® 7 on IBM® Power6™ and AIX vs. Microsoft® .NET on Hewlett Packard BladeSystem and Windows Server® 2008 (PDF).

Platforms tested:

  • IBM Power 570 (Power6) running IBM WebSphere 7 on AIX 5.3
    • 8 IBM Power6 cores at 4.2GHz
    • 32 GB RAM
    • AIX 5.3
    • 4 x 1 GB NICs
  • Hewlett Packard BladeSystem C7000 running IBM WebSphere 7 on Windows Server 2008
    • 4 Hewlett Packard ProLiant BL460c blades
    • One Quad-Core Intel® Xeon® E5450 (3.00GHz, 1333MHz FSB, 80W) processor/blade
    • 32 GB RAM/blade
    • Windows Server 2008/64-bit/blade
    • 2 x 1 GB NICs/blade
  • Hewlett Packard BladeSystem C7000 running .NET on Windows Server 2008
    • Same as the previous one, but the applications tested run on .NET instead of WebSphere.

A series of three tests was conducted on each platform:

  • Trade Web Application Benchmark. The applications tested were IBM's Trade 6.1 and Microsoft's StockTrader 2.04. This series of tests evaluated the performance of complete data-driven web applications running on top of the above-mentioned platforms. The web pages accessed had one or, usually, more operations serviced by classes contained in the business layer and ending with synchronous database calls.
  • Trade Middle Tier Web Services Benchmark. This benchmark was intended to measure the performance of the web service layer executing operations which ended up in database transactions. The test was similar to the web application one, but operations were counted individually.
  • WSTest Web Services Benchmark. This test was similar to the previous one, but there was no business logic nor database access. It was based on the WSTest workload originally devised by Sun and augmented by Microsoft. The services tier provided three operations: EchoList, EchoStruct and GetOrder. Having no business logic, the test measured only the raw performance of the web service software.

Two database configurations were used, one for the all-IBM platform and another for the other two: IBM DB2 V9.5 Enterprise Edition with IBM DB2 V9.5 JDBC drivers for data access, and SQL Server 2008 Enterprise Edition. Two databases were set up for each configuration, running on HP BL680c G5 blades:

  • 4 Quad-Core Intel Xeon CPUs @ 2.4GHz (16 cores in each blade)
  • 64 GB RAM
  • 4 x 1GB NICs
  • IBM DB2 9.5 Enterprise Edition 64-bit or Microsoft SQL Server 2008 64-bit
  • Microsoft Windows Server 2008 64-bit, Enterprise Edition
  • 2 x 4GB HBAs for fibre/SAN access to the EVA 4400 storage

The storage was provided by an HP StorageWorks EVA 4400 Disk Array:

  • 96 15K drives total
  • 4 logical volumes consisting of 24 drives each
  • Database server 1: logical volume 1 for logging
  • Database server 1: logical volume 2 for database
  • Database server 2: logical volume 3 for logging
  • Database server 2: logical volume 4 for database

The web application benchmark used 32 client machines running test scripts. Each machine simulated hundreds of clients having a 1-second think time. The tests used an adapted version of IBM's Trade 6.1 application on SUT #1 and #2 and Microsoft's StockTrader application on SUT #3.
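As a side note, the offered load of such a closed-loop test follows Little's law: with N simulated users, think time Z and average response time R, throughput X = N / (R + Z). A minimal sketch (the per-machine user count and response time below are hypothetical; only the 1-second think time comes from the text):

```python
# Estimate the offered throughput of a closed-loop load test (Little's law).
def offered_tps(num_users: int, think_time_s: float, response_time_s: float) -> float:
    # X = N / (R + Z): each simulated user completes one request every R + Z seconds.
    return num_users / (response_time_s + think_time_s)

# Hypothetical numbers: 32 client machines x 400 users each, 1 s think time
# (as in the benchmark) and an assumed 0.1 s average response time.
print(round(offered_tps(32 * 400, 1.0, 0.1)))  # prints 11636
```

This is why thousands of simulated clients are needed to produce TPS figures in the tens of thousands: each client spends most of its cycle waiting out the think time.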

    image

For the web service and WSTest benchmarks, Microsoft used 10 clients with a 0.1s think time. For WSTest, the databases were not accessed. Microsoft created a WSTest-compliant benchmark for WebSphere 7 and JAX-WS, and another in C# for .NET using WCF.
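To make the WSTest workload concrete, here is a minimal sketch of its three operations as plain functions (the payload shapes are hypothetical; the actual benchmarks exposed them as JAX-WS and WCF web services):

```python
# Stand-ins for the three WSTest operations. Note there is no business logic
# and no database access, so a real run measures only the service stack itself.
def echo_list(items):
    # EchoList: return the received list unchanged
    return list(items)

def echo_struct(struct):
    # EchoStruct: return the received structure unchanged
    return dict(struct)

def get_order():
    # GetOrder: return a canned order record (fields here are illustrative)
    return {"order_id": 1, "symbol": "IBM", "quantity": 100}
```

Because each operation does essentially no work, WSTest throughput is dominated by serialization and transport overhead, which is how it isolates the raw performance of the web service software.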

    image

Microsoft's whitepaper contains more details on how the tests were conducted, including the DB configuration, DB access used, caching configuration, test scripts, tuning parameters used and others.

Conclusion

The benchmarking results, including the cost/performance ratios, are shown in the following table:

                                          IBM Power 570       HP C7000            HP C7000
                                          WebSphere 7 / AIX   WebSphere 7 / Win   .NET / Win 2008
  Total Middle-Tier System Cost           $260,128.08         $87,161.00          $50,161.00
  Trade Web Application Benchmark         8,016 TPS           11,004 TPS          12,576 TPS
    Cost/Performance                      $32.45              $7.92               $3.99
  Trade Middle-Tier Web Service Benchmark 10,571 TPS          14,468 TPS          22,262 TPS
    Cost/Performance                      $24.61              $6.02               $2.25
  WSTest EchoList Test                    10,536 TPS          15,973 TPS          22,291 TPS
    Cost/Performance                      $24.69              $5.46               $2.25
  WSTest EchoStruct Test                  11,378 TPS          16,225 TPS          24,951 TPS
    Cost/Performance                      $22.86              $5.37               $2.01
  WSTest GetOrder Test                    11,009 TPS          15,491 TPS          27,796 TPS
    Cost/Performance                      $23.63              $5.63               $1.80
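The cost/performance figures are simply the total middle-tier system cost divided by the measured TPS; for the Trade web application benchmark, the ratios can be reproduced as follows:

```python
# Cost/performance = middle-tier system cost / transactions per second,
# using the Trade web application numbers reported by Microsoft.
cost = {"Power 570 / WebSphere / AIX": 260128.08,
        "C7000 / WebSphere / Win2008": 87161.00,
        "C7000 / .NET / Win2008": 50161.00}
tps = {"Power 570 / WebSphere / AIX": 8016,
       "C7000 / WebSphere / Win2008": 11004,
       "C7000 / .NET / Win2008": 12576}
for platform in cost:
    print(f"{platform}: ${cost[platform] / tps[platform]:.2f} per TPS")
# Power 570 / WebSphere / AIX: $32.45 per TPS
# C7000 / WebSphere / Win2008: $7.92 per TPS
# C7000 / .NET / Win2008: $3.99 per TPS
```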

According to Microsoft's benchmarking results, running WebSphere on the HP BladeSystem with Windows Server 2008 is about 30% more efficient, and the cost/performance ratio is 5 times lower, than running WebSphere on the IBM Power 570 with AIX 5.3. The .NET/Windows Server 2008 configuration is even more efficient, and its cost/performance ratio drops to half compared to WebSphere/Windows Server 2008 and is 10 times smaller than WebSphere/Power 570/AIX. The cost/performance ratio is so high for the first platform because the cost of the entire middle tier is over $250,000, while its performance is lower than that of the other platforms.

Microsoft's benchmarking whitepaper (PDF) includes an appendix with complete details of the hardware and software costs. The benchmark tests used, together with source code, are published on the StockTrader site.

IBM's Rebuttal

In another paper, Benchmarking AND BEATING Microsoft's .NET 3.5 with WebSphere 7! (PDF), IBM rejected Microsoft's benchmark and created another one showing that WebSphere performs better than .NET.

Microsoft had stated that StockTrader is comparable to IBM's Trade application:

Microsoft created an application that is functionally equivalent to the IBM WebSphere Trade application, both in terms of user functionality and middle-tier database access, transactional and messaging behavior.

IBM rejected Microsoft's claim:

The application claims to be "functionally equivalent" to the IBM WebSphere Trade 6.1 sample application. It is not a "port" of the application in any sense. Little, if any, of the original application design was ported. Microsoft has made this an application that showcases the use of its proprietary technologies. A major indication of this is the fact that the .NET StockTrader application is not a universally accessible web application, since it can only be accessed using Internet Explorer, and not via other web browsers.

Furthermore, IBM noted that Trade was not designed to benchmark WebSphere's performance, but rather to

serve as a sample application illustrating the usage of the features and functions contained in WebSphere and how they relate to application performance. In addition, the application served as a sample which allowed developers to explore the tuning capabilities of WebSphere.

IBM had other complaints regarding Microsoft's benchmark:

Microsoft created a totally new application [StockTrader] and claimed functional equivalence at the application level. The fact is that the Microsoft version of the application used proprietary SQL statements to access the database, unlike the original version of Trade 6.1, which was designed to be a portable and generic application.

They employed client-side scripting to shift some of the application function to the client.

They tested web services capabilities by inserting an unnecessary HTTP server between the WebSphere server and the client.

And, if that was not enough, they failed to properly monitor and adjust the WebSphere application server to achieve peak performance.

IBM's Competitive Projects Office (CPO) team ported StockTrader 2.0 to WebSphere, creating CPO StockTrader and claiming: "we did a port that faithfully reproduced Microsoft's application design. The intent was to achieve an apples-to-apples comparison." So, Trade 6.1 was ported by Microsoft from WebSphere to .NET under the name StockTrader, and ported again by IBM back to WebSphere under the name CPO StockTrader. IBM benchmarked CPO StockTrader against StockTrader and got better results for WebSphere against .NET:

    image

IBM also noted that it uses Friendly Bank, an application intended to benchmark WebSphere against .NET. In this test, WebSphere outperforms .NET several times over:

    image

In their StockTrader vs. CPO StockTrader benchmark, IBM used scripts simulating user activity: "login, getting quotes, stock purchase, stock sell, viewing of the account portfolio, then a logoff", running in stress mode without think times. 36 clients were simulated, enough to drive each server at maximum throughput and utilization. The data returned was validated, and errors were discarded.

The front end was implemented with WebSphere 7/Windows Server 2008 in one case and .NET 3.5 with IIS 7/Windows Server 2008 in the other. The back-end database was DB2 8.2 and SQL Server 2005, each on Windows Server 2003.

The hardware used for testing was:

  • Performance testing tool hardware: X345 8676 server, 2 x 3.06 GHz Intel processors with Hyper-Threading Technology, 8 GB RAM, 18.2 GB 15K rpm SCSI hard disk drive, 1 GB Ethernet interface
  • Application server hardware: IBM X3950 server, 8 x 3.50 GHz Intel Xeon processors with Hyper-Threading Technology, 64 GB RAM
  • Database server hardware: X445 8670 server, 8 x 3.0 GHz Intel Xeon processors with Hyper-Threading Technology, 16 GB RAM, UltraSCSI 320 controller, EXP 300 SCSI expansion unit, 14 x 18.2 GB 15K rpm hard disk drives configured as 2 RAID arrays, one for logs and one for the database, each array made up of 7 hard disks in a RAID 0 configuration
  • Network backbone: the isolated network hardware consists of 3 x 3Com SuperStack 4950 switches and one 3Com SuperStack 4924 switch running at 1 GB.

The software and hardware configuration for the Friendly Bank benchmark was similar to the StockTrader one.

IBM's whitepaper contains information about the Friendly Bank application, but does not point to the source code. It also mentions that the application was originally designed for .NET Framework 1.1 and was just recompiled on .NET 3.5, without being updated to use the latest technologies.

Microsoft's Response to IBM's Rebuttal

Microsoft answered IBM's rebuttal in yet another whitepaper, Response to IBM's Whitepaper Entitled Benchmarking and Beating Microsoft .NET 3.5 with WebSphere 7 (PDF). In this document, Microsoft defends its original benchmarking results and affirms that IBM made some false claims in its rebuttal document entitled Benchmarking AND BEATING Microsoft's .NET 3.5 with WebSphere 7!, and that IBM did not use an appropriate benchmarking procedure. More has been published at wholoveswindows.com.

Specifically, Microsoft said the following claims are false:

  • IBM claim: The .NET StockTrader does not faithfully reproduce the IBM Trade application functionality.
    Microsoft response: This claim is false; the .NET StockTrader 2.04 faithfully reproduces the IBM WebSphere Trade application (using standard .NET Framework technologies and coding practices), and can be used for fair benchmark comparisons between .NET 3.5 and IBM WebSphere 7.
  • IBM claim: The .NET StockTrader uses client-side script to shift processing from the server to the client.
    Microsoft response: This claim is false; there is no client-side scripting in the .NET StockTrader application.
  • IBM claim: The .NET StockTrader uses proprietary SQL.
    Microsoft response: The .NET StockTrader uses standard SQL statements coded for SQL Server and/or Oracle, and provides a data access layer for each. The IBM WebSphere 7 Trade application similarly uses JDBC queries coded for DB2 and/or Oracle. Neither implementation uses stored procedures or functions; all business logic runs in the application server. Simple pre-prepared SQL statements are used in both applications.
  • IBM claim: The .NET StockTrader is not programmed as a universally accessible, thin-client web application. Hence it runs only on IE, not in Firefox or other browsers.
    Microsoft response: In fact, the .NET StockTrader web tier is programmed as a universally accessible, pure thin-client web application. However, a simple problem in the use of HTML comment tags causes issues in Firefox; these comment tags are being updated to allow the ASP.NET application to properly render in any industry-standard browser, including Firefox.
  • IBM claim: The .NET StockTrader has errors under load.
    Microsoft response: This is false, and this document includes additional benchmark tests and Mercury LoadRunner details proving this IBM claim to be false.

Also, Microsoft complained that IBM had built Friendly Bank for .NET Framework 1.1 years ago using obsolete technologies:

IBM's Friendly Bank benchmark uses an obsolete .NET Framework 1.1 application that includes technologies such as DCOM, which have been obsolete for many years. This benchmark should be fully discounted until Microsoft has the chance to review the code and update it for .NET 3.5, with newer technologies for ASP.NET, transactions, and Windows Communication Foundation (WCF) TCP/IP binary remoting (which replaced DCOM as the preferred remoting technology).

Microsoft considered that IBM failed by not providing the source code for the CPO StockTrader and Friendly Bank applications, and reiterated the fact that all of the source code for Microsoft's benchmark applications involved in this case has been made public.

Microsoft also noticed that IBM had used a modified test script which "included a heavier emphasis on buys and also included a sell operation". Microsoft re-ran its benchmark using IBM's modified test script flow, one including the buy and sell operations besides Login, Portfolio and Logout, on a single 4-core application server, stating that

these tests are based on IBM's revised script and are intended to satisfy some of the IBM rebuttal test cases as outlined in IBM's response paper. They should not be considered in any way as a change to our original results (performed on different hardware, and a different test script flow); the original results remain valid.

The test was carried out on:

  • Application server: 1 HP ProLiant BL460c, 1 Quad-Core Intel Xeon E5450 CPU (3.00 GHz), 32 GB RAM, 2 x 1GB NICs, Windows Server 2008 64-bit, .NET 3.5 (SP1) 64-bit, IBM WebSphere 64-bit
  • Database: 1 HP ProLiant DL380 G5, 2 Quad-Core Intel Xeon E5355 CPUs (2.67 GHz), 64 GB RAM, 2 x 1GB NICs, Windows Server 2008 64-bit, SQL Server 2008 64-bit, DB2 V9.7 64-bit

The result of the test shows similar performance for WebSphere and .NET.

    image

One of IBM's complaints was that Microsoft inserted an unnecessary HTTP web server in front of WebSphere, reducing the number of transactions per second. Microsoft admitted that, but added:

The use of this HTTP Server was fully discussed in the original benchmark paper, and is done according to IBM's own best-practice deployment guidelines for WebSphere. In such a setup, IBM recommends the use of the IBM HTTP Server (Apache) as the front-end web server, which then routes requests to the IBM WebSphere application server. In our tests, we co-located this HTTP server on the same machine as the application server. This is equivalent to the .NET/WCF web service tests, where we hosted the WCF web services in IIS 7, with the co-located IIS 7 HTTP server routing requests to the .NET application pool processing the WCF service operations. So in both tests, we tested an equivalent setup, using IBM HTTP Server (Apache) as the front end to the WebSphere/JAX-WS services, and Microsoft IIS 7 as the front end to the .NET/WCF services. Therefore, we stand behind all our original results.

Microsoft performed yet another test, the WSTest, without the intermediary HTTP web server, on a single quad-core server similar to the previous one, and obtained the following result:

    image

Both tests performed by Microsoft on a single server show WebSphere holding a slight performance advantage over .NET, but not as much as IBM claimed in its paper. Besides that, Microsoft remarked that IBM did not comment on the middle-tier cost comparison, which greatly favors Microsoft.

Microsoft continued to challenge IBM to

meet us [Microsoft] in an independent lab to perform additional testing of the .NET StockTrader and WSTest benchmark workloads and pricing analysis of the middle-tier application servers tested in our benchmark report. In addition, we invite the IBM competitive response team to our lab in Redmond, for discussion and additional testing in their presence and under their review.

Final Conclusion

Generally, a benchmark consists of:

  • a workload
  • a set of rules describing how the workload is to be processed (run rules)
  • a process trying to make sure that the run rules are respected and the results are interpreted correctly

A benchmark is usually intended to compare two or more systems in order to determine which one is better at performing certain tasks. Benchmarks are also used by companies to improve their hardware/software before it goes to their customers, by testing different tuning parameters and measuring the results or by spotting bottlenecks. Benchmarks can also be used for marketing purposes, to prove that a certain system has better performance than the competitor's.

In the beginning, benchmarks were used to measure the hardware performance of a system, like the CPU processing power. Later, benchmarks were created to test and compare applications, like SPEC MAIL2001, and even application servers, like SPECjAppServer2004.

There is no perfect benchmark. The workload can be tweaked to favor a certain platform, or the data can be misinterpreted or incorrectly extrapolated. To be convincing, a benchmark has to be as transparent as possible. The workload definition should be public, and if possible the source code should be made available for those interested to look at. A clear set of run rules is necessary so that different parties can repeat the same tests to see the results for themselves. The way results are interpreted and their meaning must be disclosed.

We are not aware of a response from IBM to Microsoft's last paper. It would be interesting to see their reaction. Probably the best way to clear things up is for IBM to make the source code of their tests public, so anyone interested could test and see for themselves where the truth lies. Until then, we can only speculate on the correctness and validity of these benchmarks.


IBM i Marketplace Survey Fills in the Blanks

The IBM midrange community has a reputation for maintaining the status quo. But that doesn't mean it's immune to change. Shifts in the economy, the increasing pressure on business managers to do more with less, and the awareness that competitive advantage comes with modernization combine to disrupt status quo thinkers. But do they really? Statistics that pertain to IBM i shops are pretty much non-existent. A new stack of information coming from a survey conducted by HelpSystems changes that.

The complete results of the survey have yet to be made public. But I've learned a few things that are notable. For example:

About 63 percent of the IBM i organizations in this survey are running the 7.1 version of the operating system. And 24 percent are at 6.1. Combined, the total is 87 percent, which leaves only single-digit percentages for 7.2, V5R3, and the earlier releases.

IBM i 6.1 and 7.1 dominate as the most common versions of the operating system, according to the survey results.

Match those numbers with these:

Of the survey takers, 38 percent use a single Power Systems server to run their businesses, and 50 percent said they had between two and five IBM i systems carrying the workloads. My arithmetic skills lead me to the conclusion that 88 percent of all survey takers fit into the five-servers-or-less category. Then factor into those numbers that more than one IBM i partition is being used by 62 percent of the survey group.
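The arithmetic quoted in the article can be sanity-checked in a couple of lines (the figures are the ones reported above):

```python
# OS releases: 63% on IBM i 7.1 plus 24% on 6.1 leaves single digits for the rest.
os_share = {"7.1": 63, "6.1": 24}
# Server counts: 38% with a single server plus 50% with two to five servers.
server_share = {"single server": 38, "two to five servers": 50}
print(sum(os_share.values()), sum(server_share.values()))  # prints: 87 88
```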

IBM i shops with between two and five servers outnumbered shops with only a single server, according to survey responses.

Based on what you know so far, would you guess there is a greater number of participating companies with fewer than 1,000 employees, or a greater number with more than 1,000 employees?

The survey identifies nearly 60 percent in the smaller workforce category, which leaves 40 percent in the 1,000-employees-and-up category.

Application modernization tops the list of "concerns" for all participating companies, with 59 percent checking that box. Second on the list of concerns is high availability. Third is the dwindling workforce with IBM i skills.

    This is just the tip of the iceberg. And what I can share today is fairly basic stuff, even though it adds color to an otherwise blurry picture of what the IBM i community looks like.

    When the complete report is released in March, it will include details that expand on the data. For instance, there will be statistics regarding the use of partitions that can be compared with the server data mentioned above. And along with that will be statistics regarding moving servers to off-site locations tended to by managed service providers.

    Other survey questions dip into topics such as business intelligence and data analytics, tape backup and disaster recovery, and the frequency of AIX and Linux on the same Power server as IBM i and on other servers in the IT department.

    The degree of confidence that should be placed in this survey falls short of 100 percent. Show me a survey that is irrefutable and I'll show you a genuine gold nugget (or accuse you of selling swamp land in Florida). But, at the very least, this puts handles on a pot full of topics that have relied on best guesses and hoped-for outcomes.

    The vast majority of this information was gathered in September and October 2014. HelpSystems encouraged participation by sending emails to a list of its customers and prospects. If you are an avid reader of The Four Hundred, you'll remember an article titled "Searching For IBM i Answers" that also encouraged the IBM i community to take part in this survey.

    IT Jungle and PowerWire participated in the development of the survey and are providing exclusive coverage of the results.

    The total number of surveys gathered and tabulated was 350, with all but 52 of those coming from North America.

    I see this preliminary survey as a baseline for measuring shifts with continued measurements in the future. Trends are difficult to establish without a foundation. You have to know where you started to know how far you've come. This lays the groundwork for further surveys, analysis, and reporting.

    By itself, as a single reference, it offers data from which evaluations can be made. It reveals rates of satisfaction/dissatisfaction and the prevalence/scarcity of particular products and technologies.

    It can also be a useful tool to help support or validate IT strategies and tactics, and it can be used to discover trends that otherwise would have gone unnoticed.

    Thought leadership and trusted advisor status is a highly desired reputation that HelpSystems hopes to achieve by taking on this project. It has proved to be valuable in the past, as PowerTech, a HelpSystems company, has produced a State of IBM i Security report for 10 years.

    This survey and the white paper HelpSystems plans to release in March add substantiation to business/technology initiatives that are rarely quantified by IBM or members of the IBM i ISV community.

    IT Jungle plans to publish more details of this survey and analysis of particular subject areas as that information becomes available.

    To obtain a copy of the survey and a white paper authored by HelpSystems' vice president of technical services Tom Huntington, follow this link and fill out a web form with your contact information.

    Related Stories

    Under New CEO, HelpSystems Snaps Up Rival Halcyon

    Searching For IBM i Answers

    HelpSystems Grows With RJS And Coglin Mill Acquisitions

    State Of IBM i Security? Dismal As Usual, PowerTech Says

    The Most Referred To IBM i Trends And Technology

    Help/Systems Buys Dartware To Build Out Heterogeneous Monitoring

    Help/Systems Buys ShowCase BI Products From IBM


    Windows System Programming: Process Management

    This chapter explains the basics of process management and also introduces the basic synchronization operations and wait functions that will be important throughout the rest of the book.

    This chapter is from the book 

    A process contains its own independent virtual address space with both code and data, protected from other processes. Each process, in turn, contains one or more independently executing threads. A thread running within a process can execute application code, create new threads, create new independent processes, and manage communication and synchronization among the threads.

    By creating and managing processes, applications can have multiple, concurrent tasks processing files, performing computations, or communicating with other networked systems. It is even possible to improve application performance by exploiting multiple CPU processors.

    This chapter explains the basics of process management and also introduces the basic synchronization operations and wait functions that will be important throughout the rest of the book.

    Every process contains one or more threads, and the Windows thread is the basic executable unit; see the next chapter for a threads introduction. Threads are scheduled on the basis of the usual factors: availability of resources such as CPUs and physical memory, priority, fairness, and so on. Windows has long supported multiprocessor systems, so threads can be allocated to separate processors within a computer.

    From the programmer's perspective, each Windows process includes resources such as the following components:

  • One or more threads.
  • A virtual address space that is distinct from other processes' address spaces. Note that shared memory-mapped files share physical memory, but the sharing processes will probably use different virtual addresses to access the mapped file.
  • One or more code segments, including code in DLLs.
  • One or more data segments containing global variables.
  • Environment strings with environment variable information, such as the current search path.
  • The process heap.
  • Resources such as open handles and other heaps.

    Each thread in a process shares code, global variables, environment strings, and resources. Each thread is independently scheduled, and a thread has the following elements:

  • A stack for procedure calls, interrupts, exception handlers, and automatic storage.
  • Thread Local Storage (TLS)—An arraylike collection of pointers giving each thread the ability to allocate storage to create its own unique data environment.
  • An argument on the stack, from the creating thread, which is usually unique for each thread.
  • A context structure, maintained by the kernel, with machine register values.

    Figure 6-1 shows a process with several threads. This figure is schematic and does not indicate actual memory addresses, nor is it drawn to scale.

    This chapter shows how to work with processes consisting of a single thread. Chapter 7 shows how to use multiple threads.


    Obviously it is a hard task to pick reliable certification question/answer resources with respect to review, reputation and validity, because people get scammed by choosing the wrong provider. Killexams.com makes sure to serve its customers best with respect to exam dump updates and validity. The vast majority of other providers' scam complaints come from customers who then come to us for the brain dumps and pass their exams cheerfully and easily. We never compromise on our review, reputation and quality, because killexams review, killexams reputation and killexams customer confidence are vital to us. If you see any false report posted by our competitors under names such as killexams scam report, killexams.com complaint or the like, just remember there are always bad people damaging the reputation of good services for their own advantage. There are thousands of satisfied customers who pass their exams using killexams.com brain dumps, killexams PDF questions, killexams practice questions and the killexams exam simulator. Visit killexams.com, see our sample questions and sample brain dumps, try our exam simulator, and you will realize that killexams.com is the best brain dumps site.





    Searching for 000-103 exam dumps that work in the real exam?
    killexams.com helps thousands of candidates pass their exams and earn their certifications. We have thousands of successful reviews. Our dumps are reliable, affordable, kept up to date, and of the highest quality, designed to overcome the challenges of any IT certification. killexams.com exam dumps are updated on a regular basis and material is released periodically. 000-103 real questions are our quality tested.

    Just go through our question bank and feel assured about the 000-103 test. You will pass your exam with high marks or get your money back. We have aggregated a database of 000-103 dumps from the actual test so that you can prepare and pass the 000-103 exam on the first attempt. Simply install our exam simulator and get ready. You will pass the exam. killexams.com discount coupons and promo codes are as below:
    WC2017 : 60% Discount Coupon for all exams on website
    PROF17 : 10% Discount Coupon for Orders greater than $69
    DEAL17 : 15% Discount Coupon for Orders greater than $99
    DECSPECIAL : 10% Special Discount Coupon for All Orders
    Detail is at http://killexams.com/pass4sure/exam-detail/000-103

    Are you wondering how to pass your IBM 000-103 exam? Thanks to the certified killexams.com IBM 000-103 testing engine, you will learn how to build your skills. Most students start studying when they find out that they have to appear for an IT certification exam. Our brain dumps are comprehensive and to the point. The IBM 000-103 PDF files broaden your vision and help you a great deal in preparation for the certification exam.

    The killexams.com 000-103 exam simulator is extremely helpful for our customers' exam preparation. All the essential questions, points, and definitions are included in the brain dumps PDF. Gathering the information in a single place is a real help, and it prepares you for the IT certification exam within a short time span. The 000-103 exam offers key points. The killexams.com pass4sure dumps hold the essential questions and concepts of the 000-103 exam.

    At killexams.com, we provide thoroughly verified IBM 000-103 preparation resources, the best available to pass the 000-103 exam and to get certified by IBM. It is the best choice to accelerate your career as a professional in the information technology industry. We are proud of our reputation for helping people pass the 000-103 test on their first attempt. Our success rates in the past two years have been outstanding, thanks to our happy customers, now able to advance their careers in the fast lane. killexams.com is the first choice among IT professionals, especially those looking to climb the career ladder faster in their respective organizations. IBM is the industry leader in information technology, and getting certified by them is a guaranteed way to succeed with IT careers. We help you do exactly that with our high-quality IBM 000-103 training materials.

    IBM 000-103 is omnipresent all around the world, and the business and software solutions provided by them are embraced by almost all companies. They have helped in driving thousands of companies on the sure-shot path of success. Comprehensive knowledge of IBM products is considered a very important qualification, and the professionals certified by them are highly valued in all organizations.

    We provide real 000-103 PDF exam questions and answers braindumps in two formats: PDF download and practice tests. Pass the IBM 000-103 real exam quickly and easily. The 000-103 braindumps PDF format is available for reading and printing. You can print more and practice many times. Our pass rate is as high as 98.9%, and the similarity between our 000-103 study guide and the real exam is 90%, based on our seven-year teaching experience. Do you want success in the 000-103 exam in just one attempt?

    All that matters here is passing the 000-103 - AIX 6.1 Basic Operations exam, and all you need is a high score on the IBM 000-103 exam. The only thing you need to do is download the 000-103 exam prep braindumps now. We will not let you down with our money-back guarantee. Our specialists also keep pace with the most recent exam in order to provide the most updated materials. Three months of free access to download updated 000-103 tests is included from the date of purchase. Every candidate can afford the 000-103 exam dumps through killexams.com, with frequent discounts for everyone.

    With the genuine exam material of the brain dumps at killexams.com, you can easily develop your specialty. For IT professionals, it is essential to enhance their skills as required by their position. We make it easy for our customers to pass the certification exam, thanks to killexams.com verified and authentic exam material. For a brilliant future in this field, our brain dumps are the best option.

    A high-quality dumps compilation is a key factor that makes it easy for you to take IBM certifications, and the 000-103 braindumps PDF offers convenience for candidates. IT certification is a daunting task if one does not find proper guidance in the form of authentic resource material. Therefore, we have authentic and updated material for the preparation of the certification exam.

    It is essential to gather the guide material in one place if one wants to save time, as you would otherwise need plenty of time to search for updated and authentic study material for the IT certification exam. If you find all of that in one place, what could be better? killexams.com is the place that has what you need. You can save time and stay away from trouble if you buy IT certification material from our site.

    You will get the most updated IBM 000-103 braindumps with the correct answers, prepared by killexams.com specialists, enabling you to learn everything about your 000-103 exam course in the best possible way. You will not find 000-103 products of such quality anywhere else in the market. Our IBM 000-103 practice dumps are aimed at having candidates perform 100% in their exam. Our IBM 000-103 exam dumps are the latest in the market, enabling you to prepare for your 000-103 exam the right way.



    Are you interested in successfully passing the IBM 000-103 exam to start earning? killexams.com has leading-edge IBM exam questions that will ensure you pass this 000-103 exam! killexams.com delivers the most accurate, current, and latest updated 000-103 exam questions, backed by a 100% money-back guarantee. There are many companies that provide 000-103 brain dumps, but those are not the real and latest ones. Preparing with killexams.com 000-103 new questions is the best way to pass this certification exam with ease.










    How to Create a Pokemon Spawn Locations Recorder with CouchDB | killexams.com real questions and Pass4sure dumps

    In a previous article, you’ve been introduced to CouchDB. This time, you’re going to create a full-fledged app where you can apply the things you learned. You’re also going to learn how to secure your database at the end of the tutorial.

    Overview of the Project

    You’re going to build a Pokemon spawn locations recorder.

    This will allow users to save the locations of the monsters they encounter in Pokemon Go. Google Maps will be used to search for locations, and a marker is placed to pinpoint the exact location. Once the user is satisfied with the location, they can click the marker, which shows a modal box that lets them enter the name of the Pokemon and save the location. When the next user comes along and searches the same location, the values added by previous users will be plotted on the map as markers. Here's what the app will look like:

    pokespawn screen

    The full source code for the project is available on Github.

    Setting Up the Development Environment

    If you don’t have a good, isolated dev environment set up, it’s recommended you use Homestead Improved.

    The box doesn’t come with CouchDB installed, so you’ll need to do that manually; but not just plain CouchDB. The app needs to work with geo data (latitudes and longitudes): you’ll supply CouchDB with the bounding box information from Google Maps. The bounding box represents the area currently being shown in the map, and all the previous coordinates users have added to that area would be shown on the map as well. CouchDB cannot do that by default, which is why you need to install a plugin called GeoCouch in order to give CouchDB some spatial superpowers.

    The simplest way to do that is by means of the GeoCouch docker container. You can also try to install GeoCouch manually but it requires you to install CouchDB from source and configure it all by hand. I don’t really recommend this method unless you have a unix beard.

    Go ahead and install Docker into the VM you’re using, and come back here once you’re done.

    Installing GeoCouch

    First, clone the repo and navigate inside the created directory.

    git clone git@github.com:elecnix/docker-geocouch.git cd docker-geocouch

    Next, open the Dockerfile and replace the script for getting CouchDB with the following:

    # Get the CouchDB source
    RUN cd /opt; wget http://www-eu.apache.org/dist/couchdb/source/${COUCH_VERSION}/apache-couchdb-${COUCH_VERSION}.tar.gz; \
        tar xzf /opt/apache-couchdb-${COUCH_VERSION}.tar.gz

    You need to do this because the download URL that’s currently being used is already failing.

    Build the docker image:

    docker build -t elecnix/docker-geocouch:1.6.1 .

    This will take a while depending on your internet connection so go grab a snack. Once it’s done, create the container and start it:

    docker create -ti -p 5984:5984 elecnix/docker-geocouch:1.6.1
    docker start <container id>

    Once it has started, you can test to see if it’s running by executing the following command:

    curl localhost:5984

    Outside the VM, if you forwarded ports properly, that’ll be:

    curl 192.168.33.10:5984

    It should return the following:

    {"couchdb":"Welcome","uuid":"2f0b5e00e9ce08996ace6e66ffc1dfa3","version":"1.6.1","vendor":{"version":"1.6.1","name":"The Apache Software Foundation"}}

    Note that I’ll constantly refer to 192.168.33.10 throughout the article. This is the IP assigned to Scotchbox, which is the Vagrant box I used. If you’re using Homestead Improved, the IP is 192.168.10.10. You can use this IP to access the app. If you’re using something else entirely, adapt as needed.

    Setting Up the Project

    You’re going to use the Slim framework to speed up the development of the app. Create a new project using Composer:

    composer create-project slim/slim-skeleton pokespawn

    pokespawn is the name of the project, so go ahead and navigate to that directory once Composer is done installing. Then, install the following extra packages:

    composer require danrovito/pokephp guzzlehttp/guzzle gregwar/image vlucas/phpdotenv

    Here’s a brief overview on each one:

  • danrovito/pokephp – for easily talking to the Pokemon API.
  • guzzlehttp/guzzle – for making requests to the CouchDB server.
  • gregwar/image – for resizing the Pokemon sprites returned by the Pokemon API.
  • vlucas/phpdotenv – for storing configuration values.
    Setting Up the Database

    Access Futon from the browser and create a new database called pokespawn. Once created, go inside the database and create a new view. You can do that by clicking on the view dropdown and selecting temporary view. Add the following inside the textarea for the Map Function:

    function(doc) {
        if (doc.doc_type == 'pokemon') {
            emit(doc.name, null);
        }
    }

    create new view

    Once that's done, click on the save as button, enter pokemon as the name of the design document and by_name as the view name. Press save to save the view. Later on, you'll be using this view to suggest Pokemon names based on what the user has entered.

    save view
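To get a feel for what this by_name view produces, here is a toy model of CouchDB's map step in JavaScript (an illustration only, with invented sample documents; CouchDB runs the real map function server-side over every document in the database):

```javascript
// Toy model of CouchDB's map step: run the map function over each
// document and collect the emitted rows (illustration only).
function runView(docs, mapFn) {
  const rows = [];
  const emit = (key, value) => rows.push({ key, value });
  docs.forEach((doc) => mapFn(doc, emit));
  return rows;
}

// The same logic as the by_name view, with emit passed in explicitly.
const byName = (doc, emit) => {
  if (doc.doc_type === "pokemon") {
    emit(doc.name, null);
  }
};

const rows = runView(
  [
    { doc_type: "pokemon", name: "pikachu" },
    { doc_type: "location", loc: [-33.86, 151.21] }, // skipped: not a pokemon
  ],
  byName
);

console.log(rows); // [ { key: 'pikachu', value: null } ]
```

Only documents whose doc_type is "pokemon" produce a row, keyed by the Pokemon's name, which is exactly what the autocomplete lookup later relies on.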

    Next, create a design document for responding to spatial searches. You can do that by selecting Design documents in the view dropdown then click on new document. Once in the page for creating a design document, click on the add field button and add spatial as the field name, and the following as the value:

    { "points": "function(doc) {\n if (doc.loc) {\n emit([{\n type: \"Point\",\n coordinates: [doc.loc[0], doc.loc[1]]\n }], [doc.name, doc.sprite]);\n }};" }

    This design document utilizes the spatial functions provided by GeoCouch. The first thing it does is check whether the document has a loc field in it. The loc field is an array containing the coordinates of a specific location, with the first item containing the latitude and the second item containing the longitude. If the document meets this criteria, it uses the emit() function just like a normal view. The key is a GeoJSON geometry and the value is an array containing the name of the Pokemon and the sprite.

    When you make a request to the design document, you need to specify the start_range and the end_range which has the format of a JSON array. Each item can either be a number or a null. null is used if you want an open range. Here’s an example request:

    curl -X GET --globoff 'http://192.168.33.10:5984/pokespawn/_design/location/_spatial/points?start_range=[-33.87049924568689,151.2149563379288]&end_range=[33.86709181198735,151.22298150730137]'

    And its output:

    {
        "update_seq": 289,
        "rows": [{
            "id": "c8cc500c68f679a6949a7ff981005729",
            "key": [
                [-33.869107336588, -33.869107336588],
                [151.21772705984, 151.21772705984]
            ],
            "bbox": [-33.869107336588, 151.21772705984, -33.869107336588, 151.21772705984],
            "geometry": {
                "type": "Point",
                "coordinates": [-33.869107336588, 151.21772705984]
            },
            "value": ["snorlax", "143.png"]
        }]
    }

    If you want to learn more about what specific operations you can do with GeoCouch, be sure to read the documentation or the Wiki.
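Conceptually, a spatial query like the one above returns the documents whose emitted points fall inside the rectangle spanned by start_range and end_range. The following JavaScript sketch is my own model of that bounding-box test (GeoCouch actually answers it from an R-tree index, not a linear scan), using the coordinates from the example request:

```javascript
// Keep a point only if every coordinate lies within the corresponding
// [start, end] range; null means an open (unbounded) side, as in GeoCouch.
function inBoundingBox(point, startRange, endRange) {
  return point.every((coord, axis) => {
    const lo = startRange[axis];
    const hi = endRange[axis];
    return (lo === null || coord >= lo) && (hi === null || coord <= hi);
  });
}

const snorlax = [-33.869107336588, 151.21772705984]; // from the example row
const start = [-33.87049924568689, 151.2149563379288];
const end = [-33.86709181198735, 151.22298150730137];

console.log(inBoundingBox(snorlax, start, end)); // true
console.log(inBoundingBox([0, 0], start, end));  // false
```

This is why the app later sends Google Maps' bounding box corners as the two ranges: every previously saved spawn point inside the visible map area comes back as a row.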

    Creating the Project

    Now you’re ready to write some code. First you’re going to take a look at the code for the back-end then move on to the front-end code.

    Poke Importer

    The app requires some Pokemon data to be already in the database before it can be used, thus the need for a script that’s only executed locally. Create a poke-importer.php file at the root of your project directory and add the following:

    <?php
    require 'vendor/autoload.php';

    set_time_limit(0);

    use PokePHP\PokeApi;
    use Gregwar\Image\Image;

    $api = new PokeApi;
    $client = new GuzzleHttp\Client(['base_uri' => 'http://192.168.33.10:5984']); // create a client for talking to CouchDB

    $pokemons = $api->pokedex(2); // make a request to the API
    $pokemon_data = json_decode($pokemons); // convert the JSON response to an object

    foreach ($pokemon_data->pokemon_entries as $row) {
        $pokemon = [
            'id' => $row->entry_number,
            'name' => $row->pokemon_species->name,
            'sprite' => "{$row->entry_number}.png",
            'doc_type' => "pokemon"
        ];

        // get the image from the source, save it, then resize
        Image::open("https://raw.githubusercontent.com/PokeAPI/sprites/master/sprites/pokemon/{$row->entry_number}.png")
            ->resize(50, 50)
            ->save('public/img/' . $row->entry_number . '.png');

        // save the pokemon data to the database
        $client->request('POST', "/pokespawn", [
            'headers' => [
                'Content-Type' => 'application/json'
            ],
            'body' => json_encode($pokemon)
        ]);

        echo $row->pokemon_species->name . "\n";
    }

    echo "done!";

    This script makes a request to the Pokedex endpoint of the Pokemon API. This endpoint requires the ID of the Pokedex version that you want it to return. Since Pokemon Go only currently allows players to catch Pokemon from the first generation, supply 2 as the ID. This returns all the Pokemon from the Kanto region of the original Pokemon game. Then loop through the data, extract all the necessary information, save the sprite, and make a new document using the extracted data.
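The extraction step can be sketched in isolation. The snippet below is a JavaScript illustration with made-up sample entries (only the field names come from the importer script above); it shows how each Pokedex entry maps to the document that gets posted to CouchDB:

```javascript
// Build importer documents from a Pokedex-shaped payload.
// The sample entries are invented; the field names mirror the PHP importer.
function toDocuments(pokedex) {
  return pokedex.pokemon_entries.map((row) => ({
    id: row.entry_number,
    name: row.pokemon_species.name,
    sprite: `${row.entry_number}.png`,
    doc_type: "pokemon",
  }));
}

const sample = {
  pokemon_entries: [
    { entry_number: 1, pokemon_species: { name: "bulbasaur" } },
    { entry_number: 143, pokemon_species: { name: "snorlax" } },
  ],
};

console.log(toDocuments(sample));
// [ { id: 1, name: 'bulbasaur', sprite: '1.png', doc_type: 'pokemon' },
//   { id: 143, name: 'snorlax', sprite: '143.png', doc_type: 'pokemon' } ]
```

The doc_type field is what the by_name view filters on, and the sprite filename matches the image the importer saves under public/img.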

    Routes

    Open the src/routes.php file and add the following routes:

    <?php
    $app->get('/', 'HomeController:index');
    $app->get('/search', 'HomeController:search');
    $app->post('/save-location', 'HomeController:saveLocation');
    $app->post('/fetch', 'HomeController:fetch');

    Each of the routes will respond to the actions that can be performed throughout the app. The root route returns the home page, the search route returns the Pokemon name suggestions, the save-location route saves the location and the fetch route returns the Pokemon in a specific location.

    Home Controller

    Under the src directory, create an app/Controllers folder and inside create a HomeController.php file. This will perform all the actions needed for each of the routes. Here is the code:

    <?php
    namespace App\Controllers;

    class HomeController
    {
        protected $renderer;

        public function __construct($renderer)
        {
            $this->renderer = $renderer; // the twig renderer
            $this->db = new \App\Utils\DB; // custom class for talking to CouchDB
        }

        public function index($request, $response, $args)
        {
            // render the home page
            return $this->renderer->render($response, 'index.html', $args);
        }

        public function search()
        {
            $name = $_GET['name']; // name of the pokemon being searched
            return $this->db->searchPokemon($name); // returns an array of suggestions based on the user input
        }

        public function saveLocation()
        {
            $id = $_POST['pokemon_id']; // the ID assigned by CouchDB to the pokemon
            return $this->db->savePokemonLocation($id, $_POST['pokemon_lat'], $_POST['pokemon_lng']); // saves the pokemon location to CouchDB and returns the data needed to plot the pokemon on the map
        }

        public function fetch()
        {
            return json_encode($this->db->fetchPokemons($_POST['north_east'], $_POST['south_west'])); // returns the pokemons within the bounding box of the Google map
        }
    }

    The Home Controller uses the $renderer which is passed in via the constructor to render the home page of the app. It also uses the DB class which you’ll be creating shortly.

    Talking to CouchDB

    Create a Utils/DB.php file under the app directory. Open the file and create a class:

    <?php
    namespace App\Utils;

    class DB
    {

    }

    Inside the class, create a new Guzzle client. You're using Guzzle instead of one of the dedicated PHP clients for CouchDB because CouchDB's API is plain HTTP, so a general-purpose HTTP client lets you do anything you want with it.

    private $client;

    public function __construct()
    {
        $this->client = new \GuzzleHttp\Client([
            'base_uri' => getenv('BASE_URI')
        ]);
    }

    The config is from the .env file at the root of the project. This contains the base URL of CouchDB.

    BASE_URI="http://192.168.33.10:5984"

    searchPokemon is responsible for returning the data used by the auto-suggest functionality. Since CouchDB doesn't support the LIKE condition you're used to in SQL, you're using a little hack to mimic it. The trick is using start_key and end_key instead of just key, which only returns exact matches. U+FFF0 is one of the special Unicode characters allocated at the very end of the Basic Multilingual Plane. Because of its high code point, appending it to the string being searched makes every key that starts with that string sort between start_key and end_key, so the remaining characters effectively become optional. Note that this hack only works for short words, which is more than enough for searching Pokemon names.

    public function searchPokemon($name)
    {
        $unicode_char = '\ufff0';
        $data = [
            'include_docs' => 'true',
            'start_key' => '"' . $name . '"',
            'end_key' => '"' . $name . json_decode('"' . $unicode_char . '"') . '"'
        ];
        //make a request to the view you created earlier
        $doc = $this->makeGetRequest('/pokespawn/_design/pokemon/_view/by_name', $data);
        if (count($doc->rows) > 0) {
            $data = [];
            foreach ($doc->rows as $row) {
                $data[] = [$row->key, $row->id];
            }
            return json_encode($data);
        }
        $result = ['no_result' => true];
        return json_encode($result);
    }
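    The prefix trick can be demonstrated without CouchDB at all. Keys sort lexicographically, and any key beginning with the search term falls between the term itself and the term with U+FFF0 appended. A small JavaScript sketch (illustration only; CouchDB's ICU collation differs slightly from raw code-unit comparison, but the idea is the same for plain ASCII names):

    ```javascript
    // Demonstration of the start_key/end_key prefix hack used by searchPokemon.
    // Any key that starts with `term` sorts between `term` and `term + '\ufff0'`.
    function matchesPrefixRange(key, term) {
      const startKey = term;
      const endKey = term + '\ufff0';
      return key >= startKey && key <= endKey;
    }

    console.log(matchesPrefixRange('pikachu', 'pika')); // true: "pikachu" is inside the range
    console.log(matchesPrefixRange('pidgey', 'pika'));  // false: "pidgey" sorts before "pika"
    ```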

    makeGetRequest is used for performing read requests to CouchDB, and makePostRequest for writes.

    public function makeGetRequest($endpoint, $data = [])
    {
        if (!empty($data)) {
            //make a GET request to the endpoint specified, with the $data
            //passed in as a query parameter
            $response = $this->client->request('GET', $endpoint, [
                'query' => $data
            ]);
        } else {
            $response = $this->client->request('GET', $endpoint);
        }
        return $this->handleResponse($response);
    }

    private function makePostRequest($endpoint, $data)
    {
        //make a POST request to the endpoint specified, passing in the $data
        //for the request body
        $response = $this->client->request('POST', $endpoint, [
            'headers' => [
                'Content-Type' => 'application/json'
            ],
            'body' => json_encode($data)
        ]);
        return $this->handleResponse($response);
    }
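    Guzzle's 'query' option simply serializes the $data array into a URL query string. For illustration, the browser-side equivalent is the standard URLSearchParams API:

    ```javascript
    // Illustration only: what Guzzle's 'query' option does to the $data array,
    // expressed with the standard URLSearchParams API.
    const params = new URLSearchParams({
      include_docs: 'true',
      start_key: '"pika"',
      limit: '100'
    });

    // The resulting string is appended to the endpoint after a "?".
    console.log(params.toString()); // include_docs=true&start_key=%22pika%22&limit=100
    ```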

    savePokemonLocation saves the coordinates to which the Google map marker is currently pointing, along with the name and the sprite. A doc_type field is also added for easy retrieval of all the documents related to locations.

    public function savePokemonLocation($id, $lat, $lng)
    {
        $pokemon = $this->makeGetRequest("/pokespawn/{$id}"); //get pokemon details based on ID
        //check if supplied data are valid
        if (!empty($pokemon->name) && $this->isValidCoordinates($lat, $lng)) {
            $lat = (double) $lat;
            $lng = (double) $lng;
            //construct the data to be saved to the database
            $data = [
                'name' => $pokemon->name,
                'sprite' => $pokemon->sprite,
                'loc' => [$lat, $lng],
                'doc_type' => 'pokemon_location'
            ];
            $this->makePostRequest('/pokespawn', $data); //save the location data
            $pokemon_data = [
                'type' => 'ok',
                'lat' => $lat,
                'lng' => $lng,
                'name' => $pokemon->name,
                'sprite' => $pokemon->sprite
            ];
            return json_encode($pokemon_data); //return the data needed by the pokemon marker
        }
        return json_encode(['type' => 'fail']); //invalid data
    }

    isValidCoordinates checks if the latitude and longitude values have a valid format.

    private function isValidCoordinates($lat = '', $lng = '')
    {
        $coords_pattern = '/^[+\-]?[0-9]{1,3}\.[0-9]{3,}\z/';
        if (preg_match($coords_pattern, $lat) && preg_match($coords_pattern, $lng)) {
            return true;
        }
        return false;
    }
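    The same pattern can be ported to JavaScript if you want to validate on the client before submitting (PHP's \z end-of-string anchor becomes $). A sketch:

    ```javascript
    // The coordinate check ported to JavaScript: an optional sign, one to three
    // integer digits, a dot, and at least three decimal digits.
    function isValidCoordinate(value) {
      return /^[+\-]?[0-9]{1,3}\.[0-9]{3,}$/.test(value);
    }

    console.log(isValidCoordinate('14.599512')); // true
    console.log(isValidCoordinate('-121.0244')); // true
    console.log(isValidCoordinate('14.59'));     // false: only two decimals
    console.log(isValidCoordinate('abc'));       // false
    ```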

    fetchPokemons is the function that makes the request to the design document for spatial search that you created earlier. Here, you specify the southwest coordinates as the value for the start_range and the northeast coordinates as the value for the end_range. The response is also limited to the first 100 rows to prevent requesting too much data. Earlier, you’ve also seen that there are some data returned by CouchDB that aren’t really needed. It would be useful to extract and then return only the data needed on the front-end. I chose to leave that as an optimization for another day.

    public function fetchPokemons($north_east, $south_west)
    {
        $north_east = array_map('doubleval', $north_east); //convert all array items to double
        $south_west = array_map('doubleval', $south_west);
        $data = [
            'start_range' => json_encode($south_west),
            'end_range' => json_encode($north_east),
            'limit' => 100
        ];
        //fetch all pokemon's that are in the current area
        $pokemons = $this->makeGetRequest('/pokespawn/_design/location/_spatial/points', $data);
        return $pokemons;
    }
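    The start_range/end_range pair for the GeoCouch query is just the JSON-encoded corner coordinates of the bounding box. The same construction, sketched in JavaScript with illustrative coordinates:

    ```javascript
    // Sketch: building the GeoCouch spatial query parameters from the map
    // corners, mirroring what fetchPokemons does in PHP. Coordinates are
    // illustrative values for the Manila area.
    function buildSpatialQuery(northEast, southWest) {
      return {
        start_range: JSON.stringify(southWest.map(Number)), // lower corner of the bounding box
        end_range: JSON.stringify(northEast.map(Number)),   // upper corner of the bounding box
        limit: 100                                          // cap the result set
      };
    }

    const query = buildSpatialQuery(['14.61', '121.06'], ['14.58', '121.02']);
    console.log(query.start_range); // [14.58,121.02]
    console.log(query.end_range);   // [14.61,121.06]
    ```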

    handleResponse converts the JSON string returned by CouchDB into an array.

    private function handleResponse($response)
    {
        $doc = json_decode($response->getBody()->getContents());
        return $doc;
    }

    Open composer.json at the root directory and add the following right below the require property, then execute composer dump-autoload. This autoloads all the files inside the src/app directory and makes them available under the App namespace:

    "autoload": {
        "psr-4": {
            "App\\": "src/app"
        }
    }

    Lastly, inject the Home Controller into the container. You can do that by opening the src/dependencies.php file and adding the following at the bottom:

    $container['HomeController'] = function ($c) {
        return new App\Controllers\HomeController($c->renderer);
    };

    This allows you to pass the Twig renderer to the Home Controller and makes HomeController accessible from the router.

    Home Page Template

    Now you’re ready to proceed with the front-end. First, create a templates/index.html file at the root of the project directory and add the following:

    <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>PokéSpawn</title> <link rel="stylesheet" href="lib/picnic/picnic.min.css"> <link rel="stylesheet" href="lib/remodal/dist/remodal.css"> <link rel="stylesheet" href="lib/remodal/dist/remodal-default-theme.css"> <link rel="stylesheet" href="lib/javascript-auto-complete/auto-complete.css"> <link rel="stylesheet" href="css/style.css"> <link rel="icon" href="favicon.ico"><!-- by Maicol Torti https://www.iconfinder.com/Maicol-Torti --> </head> <body> <div id="header"> <div id="title"> <img src="img/logo.png" alt="logo" class="header-item" /> <h1 class="header-item">PokéSpawn</h1> </div> <input type="text" id="place" class="controls" placeholder="Where are you?"><!-- text field for typing the location --> </div> <div id="map"></div> <!-- modal for saving pokemon location --> <div id="add-pokemon" class="remodal" data-remodal-id="modal"> <h3>Plot Pokémon Location</h3> <form method="POST" id="add-pokemon-form"> <div> <input type="hidden" name="pokemon_id" id="pokemon_id"><!-- id of the pokemon in CouchDB--> <input type="hidden" name="pokemon_lat" id="pokemon_lat"><!--latitude of the red marker --> <input type="hidden" name="pokemon_lng" id="pokemon_lng"><!--longitude of the red marker --> <input type="text" name="pokemon_name" id="pokemon_name" placeholder="Pokémon name"><!--name of the pokemon whose location is being added --> </div> <div> <button type="button" id="save-location">Save Location</button><!-- trigger the submission of location to CouchDB --> </div> </form> </div> <script src="lib/zepto.js/dist/zepto.min.js"></script><!-- event listening, ajax --> <script src="lib/remodal/dist/remodal.min.js"></script><!-- for modal box --> <script src="lib/javascript-auto-complete/auto-complete.min.js"></script><!-- for autocomplete text field --> <script src="js/main.js"></script> <script 
src="https://maps.googleapis.com/maps/api/js?key=YOUR_GOOGLEMAP_APIKEY&callback=initMap&libraries=places" defer></script><!-- for showing a map--> </body> </html>

    In the <head> are the styles from the various libraries that the app uses, as well as the styles for the app. In the <body> are the text field for searching locations, the map container, and the modal for saving a new location. Below those are the scripts used in the app. Don’t forget to replace YOUR_GOOGLEMAP_APIKEY in the Google Maps script with your own API key.

    JavaScript

    For the main JavaScript file (public/js/main.js), first create variables for storing values that you will be needing throughout the whole file.

    var modal = $('#add-pokemon').remodal(); //initialize modal
    var map; //the google map
    var markers = []; //an array for storing all the pokemon markers currently plotted in the map

    Next, create the function for initializing the map. A min_zoomlevel is specified to prevent users from zooming out far enough to see the entire world map. You've already limited the number of results CouchDB can return, but this is a nice addition to keep users from expecting that they can select data from the whole world.

    function initMap() {
        var min_zoomlevel = 18;
        map = new google.maps.Map(document.getElementById('map'), {
            center: {lat: -33.8688, lng: 151.2195}, //set the map's initial center
            disableDefaultUI: true, //hide default UI controls
            zoom: min_zoomlevel, //set default zoom level
            mapTypeId: 'roadmap' //set type of map
        });
        //continue here...
    }

    Create the marker for pin-pointing locations that users want to add. Then, add an event listener for opening the modal for adding locations when the marker is pressed:

    marker = new google.maps.Marker({
        map: map,
        position: map.getCenter(),
        draggable: true
    });

    marker.addListener('click', function() {
        var position = marker.getPosition();
        $('#pokemon_lat').val(position.lat());
        $('#pokemon_lng').val(position.lng());
        modal.open();
    });

    Initialize the search box:

    var header = document.getElementById('header');
    var input = document.getElementById('place');
    var searchBox = new google.maps.places.SearchBox(input); //create a google map search box
    map.controls[google.maps.ControlPosition.TOP_LEFT].push(header); //position the header at the top left side of the screen

    Add various map listeners:

    map.addListener('bounds_changed', function() { //executes when user drags the map
        searchBox.setBounds(map.getBounds()); //make places inside the current area a priority when searching
    });

    map.addListener('zoom_changed', function() { //executes when user zooms in or out of the map
        //immediately set the zoom back to the minimum zoom level if the current
        //zoom drops below it
        if (map.getZoom() < min_zoomlevel) map.setZoom(min_zoomlevel);
    });

    map.addListener('dragend', function() { //executes the moment after the map has been dragged
        //loop through all the pokemon markers and remove them from the map
        markers.forEach(function(marker) {
            marker.setMap(null);
        });
        markers = [];
        marker.setPosition(map.getCenter()); //always place the marker at the center of the map
        fetchPokemon(); //fetch some pokemon in the current viewable area
    });

    Add an event listener for when the place in the search box changes.

    searchBox.addListener('places_changed', function() { //executes when the place in the searchbox changes
        var places = searchBox.getPlaces();
        if (places.length == 0) {
            return;
        }
        var bounds = new google.maps.LatLngBounds();
        var place = places[0]; //only get the first place
        if (!place.geometry) {
            return;
        }
        marker.setPosition(place.geometry.location); //put the marker at the location being searched
        if (place.geometry.viewport) {
            //only geocodes have viewport
            bounds.union(place.geometry.viewport);
        } else {
            bounds.extend(place.geometry.location);
        }
        map.fitBounds(bounds); //adjust the current map bounds to that of the place being searched
        fetchPokemon(); //fetch some Pokemon in the current viewable area
    });

    The fetchPokemon function is responsible for fetching the Pokemon that were previously plotted in the currently viewable area of the map.

    function fetchPokemon() {
        //get the northeast and southwest coordinates of the viewable area of the map
        var bounds = map.getBounds();
        var north_east = [bounds.getNorthEast().lat(), bounds.getNorthEast().lng()];
        var south_west = [bounds.getSouthWest().lat(), bounds.getSouthWest().lng()];
        $.post(
            '/fetch',
            {
                north_east: north_east,
                south_west: south_west
            },
            function(response) {
                var response = JSON.parse(response);
                response.rows.forEach(function(row) { //loop through all the results returned
                    //create a new google map position
                    var position = new google.maps.LatLng(row.geometry.coordinates[0], row.geometry.coordinates[1]);
                    //create a new marker using the position created above
                    var poke_marker = new google.maps.Marker({
                        map: map,
                        title: row.value[0], //name of the pokemon
                        position: position,
                        icon: 'img/' + row.value[1] //pokemon image that was saved locally
                    });
                    //create an infowindow for the marker; when clicked it will
                    //show the name of the pokemon
                    var infowindow = new google.maps.InfoWindow({
                        content: "<strong>" + row.value[0] + "</strong>"
                    });
                    poke_marker.addListener('click', function() {
                        infowindow.open(map, poke_marker);
                    });
                    markers.push(poke_marker);
                });
            }
        );
    }

    This is the code for adding the auto-suggest functionality of the text field for entering the name of a Pokemon. A renderItem function is specified to customize the HTML used for rendering each suggestion. This allows you to add the ID of the Pokemon as a data attribute which you then use to set the value of the pokemon_id field once a suggestion is selected.

    new autoComplete({
        selector: '#pokemon_name', //the text field to add the auto-complete to
        source: function(term, response) {
            //use the results returned by the search route as a data source
            $.getJSON('/search?name=' + term, function(data) {
                response(data);
            });
        },
        renderItem: function(item, search) {
            //the code for rendering each suggestion
            search = search.replace(/[-\/\\^$*+?.()|[\]{}]/g, '\\$&');
            var re = new RegExp("(" + search.split(' ').join('|') + ")", "gi");
            return '<div class="autocomplete-suggestion" data-id="' + item[1] + '" data-val="' + item[0] + '">' + item[0].replace(re, "<b>$1</b>") + '</div>';
        },
        onSelect: function(e, term, item) {
            //executed when a suggestion is selected
            $('#pokemon_id').val(item.getAttribute('data-id'));
        }
    });
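    The highlighting inside renderItem can be isolated into a standalone sketch: the user's input is regex-escaped so special characters are treated literally, and every occurrence in the suggestion is wrapped in <b> tags:

    ```javascript
    // Standalone version of renderItem's highlighting logic: escape the user's
    // input so it's safe inside a RegExp, then bold each match in the suggestion.
    function highlight(suggestion, search) {
      const escaped = search.replace(/[-\/\\^$*+?.()|[\]{}]/g, '\\$&');
      const re = new RegExp('(' + escaped.split(' ').join('|') + ')', 'gi');
      return suggestion.replace(re, '<b>$1</b>');
    }

    console.log(highlight('Pikachu', 'pik'));  // <b>Pik</b>achu
    console.log(highlight('Mr. Mime', 'mr.')); // <b>Mr.</b> Mime (the "." is matched literally)
    ```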

    When the Save Location button is pressed, a request is made to the server to add the Pokemon location to CouchDB.

    $('#save-location').click(function(e) {
        $.post('/save-location', $('#add-pokemon-form').serialize(), function(response) {
            var data = JSON.parse(response);
            if (data.type == 'ok') {
                var position = new google.maps.LatLng(data.lat, data.lng); //create a location
                //create a new marker and use the location
                var poke_marker = new google.maps.Marker({
                    map: map,
                    title: data.name, //name of the pokemon
                    position: position,
                    icon: 'img/' + data.sprite //pokemon image
                });
                //create an infowindow for showing the name of the pokemon
                var infowindow = new google.maps.InfoWindow({
                    content: "<strong>" + data.name + "</strong>"
                });
                //show name of pokemon when marker is clicked
                poke_marker.addListener('click', function() {
                    infowindow.open(map, poke_marker);
                });
                markers.push(poke_marker);
            }
            modal.close();
            $('#pokemon_id, #pokemon_lat, #pokemon_lng, #pokemon_name').val(''); //reset the form
        });
    });

    $('#add-pokemon-form').submit(function(e) {
        e.preventDefault(); //prevent the form from being submitted on enter
    });

    Styles

    Create a public/css/style.css file (the one linked in the template's <head>) and add the following styles:

    html, body {
        height: 100%;
        margin: 0;
        padding: 0;
    }

    #header {
        text-align: center;
    }

    #title {
        float: left;
        padding: 5px;
        color: #f5716a;
    }

    .header-item {
        padding-top: 10px;
    }

    h1.header-item {
        font-size: 14px;
        margin: 0;
        padding: 0;
    }

    #map {
        height: 100%;
    }

    .controls {
        margin-top: 10px;
        border: 1px solid transparent;
        border-radius: 2px 0 0 2px;
        box-sizing: border-box;
        -moz-box-sizing: border-box;
        height: 32px;
        outline: none;
        box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);
    }

    #place {
        background-color: #fff;
        margin-left: 12px;
        padding: 0 11px 0 13px;
        text-overflow: ellipsis;
        width: 300px;
        margin-top: 20px;
    }

    #place:focus {
        border-color: #4d90fe;
    }

    #type-selector {
        color: #fff;
        background-color: #4d90fe;
        padding: 5px 11px 0px 11px;
    }

    #type-selector label {
        font-family: Roboto;
        font-size: 13px;
        font-weight: 300;
    }

    #target {
        width: 345px;
    }

    .remodal-wrapper {
        z-index: 100;
    }

    .remodal-overlay {
        z-index: 100;
    }

    Securing CouchDB

    By default CouchDB is open to all. This means that once you expose it to the internet, anyone can wreak havoc in your database. Anyone can do any database operation by simply using Curl, Postman or any other tool for making HTTP requests. In fact, this temporary state even has a name: the “admin party”. You’ve seen this in action in the previous tutorial and even when you created a new database, a view and a design document earlier. All of these actions can only be performed by the server admin but you’ve gone ahead and done it without logging in or anything. Still not convinced? Try executing this on your local machine:

    curl -X PUT http://192.168.33.10:5984/my_newdatabase

    You’ll get the following as a response if you don’t already have a server admin on your CouchDB installation:

    {"ok":true}

    Yikes, right? The good news is there’s an easy fix. All you have to do is create a server admin. You can do so with the following command:

    curl -X PUT http://192.168.33.10:5984/_config/admins/kami -d '"mysupersecurepassword"'

    The command above creates a new server admin named “kami” with the password “mysupersecurepassword”.

    By default, CouchDB doesn’t have any server admin so once you create one, the admin party is over. Note that server admins have god-like powers so you’re probably better off creating only one or two. Then create a handful of database admins who can only perform CRUD operations. You can do so by executing the following command:

    curl -HContent-Type:application/json -vXPUT http://kami:mysupersecurepassword@192.168.33.10:5984/_users/org.couchdb.user:plebian --data-binary '{"_id": "org.couchdb.user:plebian","name": "plebian","roles": [],"type": "user","password": "mypass"}'

    If successful, it will return a response similar to the following:

    * Trying 192.168.33.10...
    * Connected to 192.168.33.10 (192.168.33.10) port 5984 (#0)
    * Server auth using Basic with user 'root'
    > PUT /_users/org.couchdb.user:plebian HTTP/1.1
    > Host: 192.168.33.10:5984
    > Authorization: Basic cm9vdDpteXN1cGVyc2VjdXJlcGFzc3dvcmQ=
    > User-Agent: curl/7.47.0
    > Accept: */*
    > Content-Type:application/json
    > Content-Length: 101
    >
    * upload completely sent off: 101 out of 101 bytes
    < HTTP/1.1 201 Created
    < Server: CouchDB/1.6.1 (Erlang OTP/R16B03)
    < Location: http://192.168.33.10:5984/_users/org.couchdb.user:plebian
    < ETag: "1-9c4abdc905ecdc9f0f56921d7de915b9"
    < Date: Thu, 18 Aug 2016 07:57:20 GMT
    < Content-Type: text/plain; charset=utf-8
    < Content-Length: 87
    < Cache-Control: must-revalidate
    <
    {"ok":true,"id":"org.couchdb.user:plebian","rev":"1-9c4abdc905ecdc9f0f56921d7de915b9"}
    * Connection #0 to host 192.168.33.10 left intact
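    The Authorization header in that trace is nothing mysterious: Basic auth is just the Base64 encoding of username:password, which you can verify yourself in a couple of lines of Node:

    ```javascript
    // The Basic auth header from the curl trace above is just
    // base64("username:password").
    const credentials = 'root:mysupersecurepassword';
    const header = 'Basic ' + Buffer.from(credentials).toString('base64');

    console.log(header); // Basic cm9vdDpteXN1cGVyc2VjdXJlcGFzc3dvcmQ=
    ```

    This is also why Basic auth over plain HTTP is not a secrecy mechanism: anyone who can see the header can decode the credentials.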

    Now you can try the same command from earlier with a different database name:

    curl -X PUT http://192.168.33.10:5984/my_awesomedatabase

    And CouchDB will shout at you:

    {"error":"unauthorized","reason":"You are not a server admin."}

    For this to work, you now have to supply your username and password in the URL like so:

    curl -X PUT http://{your_username}:{your_password}@192.168.33.10:5984/my_awesomedatabase

    Ok, so that’s it? Well, not really because the only thing you’ve done is limit database operations that can only be done by server admins. This includes things like creating a new database, deleting a database, managing users, full-admin access to all databases (including system tables), CRUD operations to all documents. This leaves you with unauthenticated users still having the power to do CRUD stuff on any database. You can give this a try by logging out of Futon, pick any database you want to mess around with and do CRUD stuff in it. CouchDB will still happily perform those operations for you.

    So, how do you patch up the remaining holes? You can do that by creating a design document that will check if the username of the user who is trying to perform a write operation (insert or update) is the same as the name of the user that’s allowed to do it. In Futon, log in using a server admin or database admin account, select the database you want to work with, and create a new design document. Set the ID as _design/blockAnonymousWrites, add a field named validate_doc_update, and set the value to the following:

    function(new_doc, old_doc, userCtx) {
        if (userCtx.name != 'kami') {
            throw({forbidden: "Not Authorized"});
        }
    }

    The new version of the document, the existing document, and the user context are passed in as arguments to this function. The only thing you need to check is userCtx, which contains the name of the database, the name of the user who's performing the operation, and an array of roles assigned to that user.

    A secObj is also passed as the fourth argument, but you don't really need to work with it here, which is why it's omitted. Basically, the secObj describes what admin privileges have been set on the database.

    Once you’ve added the value, save the design document, log out, and try to create a new document or update an existing one and watch CouchDB complain at you.

    (Screenshot: CouchDB blocking an anonymous write)

    Since you’re only checking for the username, you might be thinking that attackers can simply guess the username and supply any value to the password and it would work. Well, not really, because CouchDB first checks if the username and password are correct before the design document even gets executed.

    Alternatively, if you have many users in a single database, you can also check for the role. The function below will throw an error at any user who doesn’t have the role of “pokemon_master”.

    function(new_doc, old_doc, userCtx) {
        if (userCtx.roles.indexOf('pokemon_master') == -1) {
            throw({forbidden: "Not Authorized"});
        }
    }
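    Because validate_doc_update functions are ordinary JavaScript, you can exercise the logic locally with a fake userCtx before deploying it, no CouchDB required. A sketch:

    ```javascript
    // The role-based validate_doc_update function, tested locally with a fake
    // userCtx. CouchDB would reject the write whenever validate() throws.
    function validate(new_doc, old_doc, userCtx) {
      if (userCtx.roles.indexOf('pokemon_master') == -1) {
        throw({forbidden: 'Not Authorized'});
      }
    }

    function writeAllowed(userCtx) {
      try {
        validate({}, null, userCtx);
        return true;
      } catch (e) {
        return false; // CouchDB would respond with e.forbidden
      }
    }

    console.log(writeAllowed({ name: 'ash', roles: ['pokemon_master'] })); // true
    console.log(writeAllowed({ name: 'anon', roles: [] }));                // false
    ```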

    If you want to learn more about how to secure CouchDB, the security chapter of the official CouchDB documentation is a good place to start.

    Securing the App

    Let’s wrap up by updating the app to use the security measures that you’ve applied to the database. First update the .env file: change the BASE_URI with just the IP address and the port, and then add the username and password of the CouchDB user that you’ve created.

    BASE_URI="192.168.33.10:5984"
    COUCH_USER="plebian"
    COUCH_PASS="mypass"

    Then, update the constructor of the DB class to use the new details:

    public function __construct()
    {
        $this->client = new \GuzzleHttp\Client([
            'base_uri' => 'http://' . getenv('COUCH_USER') . ':' . getenv('COUCH_PASS') . '@' . getenv('BASE_URI')
        ]);
    }

    Conclusion

    That’s it! In this tutorial, you learned how to create a Pokemon spawn locations recorder app with CouchDB. With the help of the GeoCouch plugin, you were able to perform spatial queries, and you learned how to secure your CouchDB database.

    Do you use CouchDB in your projects? What for? Any suggestions / features to add into this little project of ours? Let us know in the comments!

    Wern is a web developer from the Philippines. He loves building things for the web and sharing the things he has learned by writing in his blog. When he's not coding or learning something new, he enjoys watching anime and playing video games.


    LSI Nytro WarpDrive WLP4-200 Enterprise PCIe Review

    August 17th, 2012 by Kevin OBrien

    The LSI Nytro WarpDrive WLP4-200 represents LSI's second-generation effort in the enterprise PCIe application acceleration space. LSI builds on an extensive history of enterprise storage products with the newly rebranded line of acceleration products dubbed LSI Nytro. The Nytro family includes the PCIe WarpDrive of course, but also encompasses LSI's Nytro XD caching and Nytro MegaRAID products that leverage intelligent caching with on-board flash for acceleration, offering customers an entire suite of options as they evaluate high-performance storage. The Nytro WarpDrive comes in a variety of configurations, including both eMLC and SLC versions, with capacities ranging from 200GB up to 1.6TB.

    Like its WarpDrive SLP-300 predecessor, the new Nytro WarpDrive works in much the same way, RAIDing multiple SSDs together. The Nytro WarpDrive uses fewer controllers/SSDs this time around, opting for four instead of the original's six. The controllers have also been updated; the Nytro WarpDrive utilizes four latest-generation LSI SandForce SF-2500 controllers that are paired with SLC or eMLC NAND depending on the model. These SSDs are then joined together in RAID0 through an LSI PCIe-to-SAS bridge to form a 200GB to 1600GB logical block device. The drive is then presented to the operating system, which in this case could mean multiple Windows, Linux, or UNIX variants, with a well-established LSI driver that in many cases is built into the OS itself.

    In addition to LSI's renowned host compatibility and stability reputation, the other core technology component of the Nytro WarpDrive is the SandForce controllers. LSI used the prior-generation SF-1500 controllers in the SLP-300 first-generation PCIe card; this time around they're using the SF-2500 family. While the controller itself has improved, there's also the added engineering benefit now that LSI has acquired SandForce. While the results may be more subtle, the benefits are there nonetheless and include improved support for the drive via firmware updates and a generally more tightly integrated unit.

    While stability and consistent performance across operating systems are important, those features just open the door. Performance is key, and the Nytro WarpDrive doesn't disappoint. At the top end, the cards deliver sequential 4K IOPS of 238,000 read and 133,000 write, along with sequential 8K IOPS of 189,000 read and 137,000 write. Latency is the other, equally important performance spec; the Nytro WarpDrive posts latency as low as 50 microseconds.

    In this review we apply our full suite of enterprise benchmarks, across both Windows and Linux, with a robust set of comparables, including the prior-generation LSI card and other leading application accelerators. Per our usual depth, all of our detailed performance charts and content are delivered on a single page to make consumption of these data points as easy as possible.

    LSI Nytro WarpDrive Specifications

  • Single Level Cell (SLC)
  • 200GB Nytro WarpDrive WLP4-200
  • Sequential IOPS (4K) - 238,000 Read, 133,000 Write
  • Sequential Read and Write IOPS (8K) - 189,000 Read, 137,000 Write
  • Bandwidth (256K) - 2.0GB/s Read, 1.7GB/s Write
  • 400GB Nytro WarpDrive WLP4-400
  • Sequential IOPS (4K) - 238,000 Read, 133,000 Write
  • Sequential Read and Write IOPS (8K) - 189,000 Read, 137,000 Write
  • Bandwidth (256K) - 2.0GB/s Read, 1.7GB/s Write
  • Enterprise Multi Level Cell (eMLC)
  • 400GB Nytro WarpDrive BLP4-400
  • Sequential IOPS (4K) - 218,000 Read, 75,000 Write
  • Sequential Read and Write IOPS (8K) - 183,000 Read, 118,000 Write
  • Bandwidth (256K) - 2.0GB/s Read, 1.0GB/s Write
  • 800GB Nytro WarpDrive BLP4-800
  • Sequential IOPS (4K) - 218,000 Read, 75,000 Write
  • Sequential Read and Write IOPS (8K) - 183,000 Read, 118,000 Write
  • Bandwidth (256K) - 2.0GB/s Read, 1.0GB/s Write
  • 1600GB Nytro WarpDrive BLP4-1600
  • Sequential IOPS (4K) - 218,000 Read, 75,000 Write
  • Sequential Read and Write IOPS (8K) - 183,000 Read, 118,000 Write
  • Bandwidth (256K) - 2.0GB/s Read, 1.0GB/s Write
  • Average Latency < 50 microseconds
  • Interface - x8 PCI Express 2.0
  • Power Consumption - <25 watts
  • Form Factor - Low Profile (half-length, MD2)
  • Environmentals - Operational at 0 to 45C
  • OS Compatibility
  • Microsoft: Windows XP, Vista, 2003, 7; Windows Server 2003 SP2, 2008 SP2, 2008 R2 SP1
  • Linux: CentOS 6; RHEL 5.4, 5.5, 5.6, 5.7, 6.0, 6.1; SLES: 10SP1, 10SP2, 10SP4, 11SP1; OEL 5.6, 6.0
  • UNIX: FreeBSD 7.2, 7.4, 8.1, 8.2; Solaris 10U10, 11 (x86 & SPARC)
  • Hypervisors: VMware 4.0 U2, 4.1 U1, 5.0
  • End of Life Data Retention >6 months SLC, >3 months eMLC
  • Product Health Monitoring - Self-Monitoring, Analysis and Reporting Technology (SMART) commands, plus additional SSD monitoring

    Build and Design

    The LSI Nytro WarpDrive is a Half-Height Half-Length x8 PCI-Express card comprised of four custom form-factor SSDs connected in RAID0 to a main interface board. Being a half-height card, the Nytro WarpDrive is compatible with more servers by simply swapping the backplane adapter. Shown below is our Lenovo ThinkServer RD240, used in many of our enterprise tests, which supports full-height cards.

    Similar to the previous-generation WarpDrive, LSI uses SandForce processors at the heart of the new Nytro WarpDrive. While the previous generation model used six SATA 3.0Gb/s SF-1500 controllers, the Nytro uses four SATA 6.0Gb/s SF-2500 controllers. The Nytro houses two of these SSDs in two sandwiched heatsink "banks" which are connected to the main board with a small ribbon cable. To interface these controllers with the host computer, LSI uses their own SAS2008 PCIe to SAS bridge, which has wide driver support across multiple operating systems.

    Unlike the first-generation WarpDrive, these passive heatsinks allow the NAND and SandForce controllers to shed heat into a heatsink first, which then gets passively cooled by airflow in the server chassis. This reduces hot-spots and ensures more stable hardware performance over the life of the product.

    A view from above the card shows the tightly sandwiched aluminum plates below, between, and on top of the custom SSDs that power the Nytro WarpDrive. The Nytro also supports legacy HDD indicator lights, for those who want that level of monitoring to be externally visible.

    The LSI Nytro WarpDrive is fully PCIe 2.0 x8 power compliant, and only consumes <25 watts of power during its operation. This allows it to operate without any external power attached and gives it more hardware compatibility over devices such as the Fusion-io "Duo" devices that require external power (or support for drawing power over PCIe spec) to operate at full performance.

    Each of the four SSDs powering the 200GB SLC LSI Nytro WarpDrive has one SandForce SF-2500 controller, and eight 8GB Toshiba SLC Toggle NAND pieces. This gives each SSD a total capacity of 64GB, which is then over-provisioned 22% to have a usable capacity of 50GB.

    Software

    To manage the Nytro WarpDrive products, LSI gives customers the CLI Nytro WarpDrive Management Utility. The management utility allows users to update the firmware, monitor the drive's health, as well as format the WarpDrive to different capacities by adjusting the level of over-provisioning. Multiple versions of the utility are offered depending on the OS that's required, with Windows, Linux, FreeBSD, Solaris, and VMware supported.

    The Nytro WarpDrive Management Utility is as basic as they come, giving users just enough information or options to get the job done. With most of the time spent with these cards in production, you won't find many IT guys loading this utility up on a day to day basis, although the amount of information felt lacking compared to what other vendors offer.

    From a health monitoring aspect, the LSI management utility really only tells you the exact temperature and gives a yes/no response when it comes to figuring out how far into its useful life the WarpDrive is. While the percentage reading of Warranty Remaining gives some indication of health, a detailed figure of total bytes written or total bytes read would be much better at letting the user know just how much the card has been used and how much life it has left.

    Another feature the utility offers that the first-generation WarpDrive lacked is the ability to change the over-provisioning level of the logical block device. In the stock configuration our 200GB SLC Nytro WarpDrive had a usable capacity of 186.26GB, while the performance over-provisioning mode dropped that to 149.01GB. A third max-capacity over-provisioning mode was also listed, although it wasn't supported on our model.

    Nytro WarpDrive Formatting Modes (for 200GB SLC):

  • Performance over-provisioning - 149.01GB
  • Nominal over-provisioning - 186.26GB
  • Max capacity over-provisioning - Not supported on our review model
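The usable-capacity figures line up with a decimal-gigabyte to binary-gibibyte conversion, since operating systems typically report capacity in binary units. The sketch below is our own back-of-the-envelope check; the 160GB decimal figure behind the performance mode is inferred from the math, not an LSI-published number:

```python
GIB = 2**30  # bytes per binary gibibyte, which OSes usually label "GB"

def decimal_gb_to_gib(gb: float) -> float:
    """Convert a decimal-GB capacity (10^9 bytes) to binary GiB."""
    return gb * 10**9 / GIB

# Nominal mode: the full 200GB (decimal) label
print(f"{decimal_gb_to_gib(200):.2f}")  # -> 186.26, matching the nominal figure
# Performance mode: 149.01 corresponds to ~160GB decimal exposed (inferred)
print(f"{decimal_gb_to_gib(160):.2f}")  # -> 149.01
```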
    Testing Background and Comparables

    When it comes to testing enterprise hardware, the environment is just as important as the testing processes used to evaluate it. At StorageReview we maintain the same hardware and infrastructure found in many of the datacenters the devices we test are ultimately destined for. This includes enterprise servers as well as proper infrastructure equipment like networking, rack space, power conditioning/monitoring, and same-class comparable hardware to properly evaluate how a device performs. None of our reviews are paid for or controlled by the manufacturer of the equipment we are testing; relevant comparables are picked at our discretion from products in our lab.

    StorageReview Enterprise Testing Platform:

    Lenovo ThinkServer RD240

  • 2 x Intel Xeon X5650 (2.66GHz, 12MB Cache)
  • Windows Server 2008 Standard Edition R2 SP1 64-Bit and CentOS 6.2 64-Bit
  • Intel 5500+ ICH10R Chipset
  • Memory - 8GB (2 x 4GB) 1333MHz DDR3 Registered RDIMMs
    Review Comparables:

    640GB Fusion-io ioDrive Duo

  • Released: 1H2009
  • NAND Type: MLC
  • Controller: 2 x Proprietary
  • Device Visibility: JBOD, software RAID depending on OS
  • Fusion-io VSL Windows: 3.1.1
  • Fusion-io VSL Linux 3.1.1
    200GB LSI Nytro WarpDrive WLP4-200

  • Released: 1H2012
  • NAND Type: SLC
  • Controller: 4 x LSI SandForce SF-2500 through LSI SAS2008 PCIe to SAS Bridge
  • Device Visibility: Fixed Hardware RAID0
  • LSI Windows: 2.10.51.0
  • LSI Linux: Native CentOS 6.2 driver
    300GB LSI WarpDrive SLP-300

  • Released: 1H2010
  • NAND Type: SLC
  • Controller: 6 x LSI SandForce SF-1500 through LSI SAS2008 PCIe to SAS Bridge
  • Device Visibility: Fixed Hardware RAID0
  • LSI Windows: 2.10.43.00
  • LSI Linux: Native CentOS 6.2 driver
    1.6TB OCZ Z-Drive R4

  • Released: 2H2011
  • NAND Type: MLC
  • Controller: 8 x LSI SandForce SF-2200 through custom OCZ VCA PCIe to SAS Bridge
  • Device Visibility: Fixed Hardware RAID0
  • OCZ Windows Driver: 1.3.6.17083
  • OCZ Linux Driver: 1.0.0.1480
    Enterprise Synthetic Workload Analysis (Stock Settings)

    Our approach to PCIe storage solutions dives deeper than traditional burst or steady-state numbers. When looking only at performance averaged over a long period, you lose sight of how the device behaves across that entire period. Since flash performance varies greatly over time, our benchmarking process analyzes performance in terms of total throughput, average latency, peak latency, and standard deviation over the entire preconditioning phase of each device. With high-end enterprise products, latency is often more important than throughput. For this reason we go to great lengths to show the full performance characteristics of each device we put through our Enterprise Test Lab.
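The four metrics named above reduce to simple statistics over per-interval latency samples. A minimal sketch with synthetic numbers (the sample list is invented for illustration, not measured data):

```python
import statistics

# Synthetic latency samples (ms) standing in for one test interval;
# a real harness would collect these from the I/O benchmark itself.
latencies_ms = [2.1, 2.3, 2.2, 9.8, 2.4, 2.2, 2.3, 2.1]
interval_s = 60.0
io_count = len(latencies_ms)

throughput_iops = io_count / interval_s
avg_latency = statistics.mean(latencies_ms)
peak_latency = max(latencies_ms)
latency_stddev = statistics.pstdev(latencies_ms)

# A single 9.8ms spike barely moves the average but dominates the peak;
# standard deviation captures how tightly the responses cluster.
print(avg_latency, peak_latency, round(latency_stddev, 2))
```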

    We have also added performance comparisons showing how each device performs under a different driver set across both Windows and Linux operating systems. For Windows we use the latest drivers available at the time of the original review, with each device tested under a 64-bit Windows Server 2008 R2 environment. For Linux we use a 64-bit CentOS 6.2 environment, which every Enterprise PCIe Application Accelerator here supports. Our main goal with this testing is to show how performance differs by OS, since having an operating system listed as compatible on a product sheet doesn't always mean performance across them is equal.

    All devices follow the same testing policy from start to finish. For each individual workload, devices are secure erased using the tools supplied by the vendor, preconditioned into steady-state with the identical workload they will be tested with under a heavy load of 16 threads with an outstanding queue of 16 per thread, and then tested in set intervals across multiple thread/queue-depth profiles to show performance under light and heavy usage. For tests with 100% read activity, preconditioning uses the same workload flipped to 100% write.

    Preconditioning and Primary Steady-State Tests:

  • Throughput (Read+Write IOPS Aggregate)
  • Average Latency (Read+Write Latency Averaged Together)
  • Max Latency (Peak Read or Write Latency)
  • Latency Standard Deviation (Read+Write Standard Deviation Averaged Together)
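The policy described above can be summarized as a simple loop. This is a schematic sketch of the flow under our reading of the text, not StorageReview's actual harness; every function name and body is an illustrative placeholder:

```python
# Schematic of the per-workload test policy; all device operations are
# stubs, and the names are illustrative only.

def secure_erase(device):
    print(f"secure erase {device}")  # vendor tool in the real process

def precondition(device, workload, threads, queue_per_thread):
    print(f"precondition {device}: {workload} at {threads}T/{queue_per_thread}Q")

def measure(device, workload, threads, queue):
    # Placeholder: a real run would record IOPS, average/max latency,
    # and latency standard deviation for this thread/queue profile.
    return {"threads": threads, "queue": queue, "effective_qd": threads * queue}

def run_workload_policy(device, workload, read_only=False):
    secure_erase(device)
    # Precondition into steady-state with the identical workload,
    # flipped to 100% write when the test itself is 100% read.
    precond = workload.replace("read", "write") if read_only else workload
    precondition(device, precond, threads=16, queue_per_thread=16)
    # Then sample set intervals across light-to-heavy profiles.
    return [measure(device, workload, t, q)
            for t in (2, 4, 8, 16) for q in (2, 4, 8, 16)]

results = run_workload_policy("nytro", "webserver 100% read", read_only=True)
print(len(results))  # 16 thread/queue combinations
```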
    At this time the Enterprise Synthetic Workload Analysis includes four common profiles that attempt to reflect real-world activity. These were picked for continuity with our past benchmarks, and for common ground against widely published values such as max 4K read and write speed, as well as the 8K 70/30 mix commonly used for enterprise drives. We also included two legacy mixed workloads, the traditional File Server and Webserver, each offering a wide mix of transfer sizes. These last two will be phased out and replaced with new synthetic workloads as application benchmarks in those categories are introduced on the site.

  • 4K
  • 100% Read or 100% Write
  • 100% 4K
  • 8K 70/30
  • 70% Read, 30% Write
  • 100% 8K
  • File Server
  • 80% Read, 20% Write
  • 10% 512b, 5% 1k, 5% 2k, 60% 4k, 2% 8k, 4% 16k, 4% 32k, 10% 64k
  • Webserver
  • 100% Read
  • 22% 512b, 15% 1k, 8% 2k, 23% 4k, 15% 8k, 2% 16k, 6% 32k, 7% 64k, 1% 128k, 1% 512k
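The transfer-size mixes above should each total 100%; encoding them as data makes that easy to verify (percentages keyed by block size, taken directly from the lists above):

```python
# Transfer-size mixes for the two legacy workload profiles.
file_server = {"512b": 10, "1k": 5, "2k": 5, "4k": 60,
               "8k": 2, "16k": 4, "32k": 4, "64k": 10}
web_server = {"512b": 22, "1k": 15, "2k": 8, "4k": 23, "8k": 15,
              "16k": 2, "32k": 6, "64k": 7, "128k": 1, "512k": 1}

for name, mix in (("File Server", file_server), ("Web Server", web_server)):
    print(f"{name}: {sum(mix.values())}%")  # each mix sums to 100%
```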
    Looking at 100% 4K write activity under a heavy load of 16 threads and 16 queue over a 6 hour period, we found that the LSI Nytro WarpDrive offered slower but very consistent throughput compared to the other PCIe Application Accelerators. The Nytro WarpDrive started at roughly 33,000 IOPS 4K write and leveled off at 30,000 IOPS by the end of the preconditioning phase. This compares to the first-generation WarpDrive, which peaked at 130,000-180,000 IOPS and leveled off at 35,000 IOPS.

    Average latency during the preconditioning phase quickly settled in at about 8.5ms, whereas the first-generation WarpDrive started around 2ms before climbing to 7.2ms as it reached steady-state.

    When it comes to max latency, there's little doubt that SLC is king: its spikes are few and far between. The new Nytro WarpDrive had the lowest consistent max latency in Windows; it increased under the CentOS driver, but still remained very respectable.

    Looking at the latency standard deviation, under Windows the Nytro WarpDrive offered some of the most consistent latency, matched only by the first-generation WarpDrive. In CentOS, though, the standard deviation was more than double, at over 20ms versus 7.2ms in Windows.

    After the PCIe Application Accelerators went through the 4K write preconditioning process, we sampled their performance over a longer interval. In Windows the LSI Nytro WarpDrive measured 161,170 IOPS read and 29,946 IOPS write, while its Linux performance measured 97,333 IOPS read and 29,788 IOPS write. Read performance in both Windows and Linux was higher than the previous-generation WarpDrive, although 4K steady-state write performance dropped by 5,000 IOPS.

    The LSI Nytro WarpDrive offered the second-lowest 4K read latency, coming in behind the OCZ Z-Drive R4, which uses eight SF-2200 controllers versus the Nytro WarpDrive's four SF-2500 controllers. Write latency was the slowest in the pack, measuring 8.54ms in Windows and 8.591ms in Linux (not counting the OCZ Z-Drive R4, which wasn't even in the same ballpark).
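The steady-state write numbers are internally consistent with Little's Law (outstanding I/Os = throughput × latency): at 16 threads with 16 outstanding I/Os each, dividing the 256 I/Os in flight by the average write latency should reproduce the measured IOPS. A quick check against the Windows figures quoted above:

```python
# Little's Law sanity check: concurrency = throughput x latency.
threads, queue_depth = 16, 16
outstanding = threads * queue_depth          # 256 I/Os in flight
avg_write_latency_s = 8.54e-3                # 8.54ms, Windows 4K write
predicted_iops = outstanding / avg_write_latency_s
measured_iops = 29_946                       # Windows 4K steady-state write
print(f"predicted {predicted_iops:.0f} IOPS vs measured {measured_iops}")
# The two agree to within ~0.2%, as expected at steady state.
```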

    Looking at the highest peak latency over the duration of our final 4K read and write testing intervals, the LSI Nytro WarpDrive offered the lowest 4K write latency in the pack at 51ms in Windows. Its Linux peak measured 486ms, and it showed one high 4K read blip in Windows at 1,002ms, but overall it ranked well against the comparables.

    While peak latency shows only the single worst response time over an entire test, standard deviation gives the whole picture of how the drive behaves throughout. The Nytro WarpDrive came in towards the middle of the pack, with read latency standard deviation roughly twice that of the first-generation WarpDrive. Standard deviation in the write test was only slightly higher in Windows, but fell behind in Linux. In Windows, its write performance still came in towards the top of the pack, above the Fusion ioDrive Duo and OCZ Z-Drive R4.

    The next preconditioning test uses a more realistic read/write spread than the 100% write activity of the 4K test: a 70% read, 30% write mix of 8K transfers. Under a heavy load of 16 threads and 16 queue over a 6 hour period, the Nytro WarpDrive quickly leveled off at 87,000 IOPS, finishing as the fastest drive in the group in Windows. It leveled off at around 70,000 IOPS in Linux, which was still the fastest Linux performance in the group as well.

    In the 8K 70/30 16T/16Q workload, the LSI Nytro WarpDrive offered by far the most consistent average latency, staying level at 2.9ms throughout the Windows test and 3.6ms in Linux.

    Similar to the behavior we measured in the 4K write preconditioning test, the SLC-based Nytro WarpDrive also offered extremely low peak latency over the duration of the 8K 70/30 preconditioning process. Its Windows performance hovered around 25ms, while its Linux performance floated higher, around 200ms.

    While peak latency over small intervals gives you an idea of how a device is performing in a test, standard deviation shows how closely those peaks were grouped. In Windows the Nytro WarpDrive offered the lowest standard deviation in the group, measuring almost half that of the first-generation WarpDrive. In Linux the standard deviation was higher by almost a factor of four, although that still ranked middle-to-top of the pack.

    Compared to the fixed 16 thread, 16 queue max workload of the 100% 4K write test, our mixed workload profiles scale performance across a wide range of thread/queue combinations, spanning from 2 threads and 2 queue up to 16 threads and 16 queue. The LSI Nytro WarpDrive offered substantially higher performance in low-thread-count workloads with a queue depth between 4 and 16. This advantage held over most of the test in Windows, although in Linux it was capped at roughly 70,000 IOPS, where the R4 (in Windows) was able to beat it in some areas.
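Effective queue depth in these scaling charts is simply threads multiplied by the per-thread queue depth; enumerating the grid makes the ramp from light to heavy load explicit:

```python
# Effective queue depth = threads x per-thread queue depth.
levels = (2, 4, 8, 16)
grid = {(t, q): t * q for t in levels for q in levels}
print(grid[(2, 2)], grid[(16, 16)])   # 4 up to 256 outstanding I/Os

# Several combinations share an effective depth (e.g. 4T/8Q and 8T/4Q
# both keep 32 I/Os in flight), which is why results are often framed
# in terms of effective queue depth rather than raw thread count.
print(grid[(4, 8)], grid[(8, 4)])     # both 32
```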

    On the other half of the throughput equation, the LSI Nytro WarpDrive consistently offered some of the lowest latency in our 8K 70/30 tests. In Windows the Nytro WarpDrive came in at the top of the pack, while the Z-Drive R4 in Windows beat the Nytro's performance in Linux.

    In the 8K 70/30 test the SLC-based LSI Nytro WarpDrive in Windows had more 1,000ms+ peak latency spikes, whereas the Linux driver kept them suppressed until the heavier 16-thread workloads. While this behavior didn't differ from the Fusion ioDrive Duo or Z-Drive R4, it produced more high-latency spikes than the first-generation WarpDrive in Windows, especially under more demanding loads.

    While the occasional high spike might look discouraging, the full latency picture emerges from the latency standard deviation. In the 8K 70/30 workload, the LSI Nytro WarpDrive offered the lowest standard deviation throughout the bulk of our 8K tests.

    The File Server workload presents a larger transfer-size spectrum to each device: instead of settling into a static 4K or 8K workload, the drive must cope with requests ranging from 512b to 64K. In our File Server throughput test, the OCZ Z-Drive R4 had a commanding lead both in burst and as it neared steady-state. The LSI Nytro WarpDrive started off towards the bottom of the pack at 39-46,000 IOPS, but remained there over the duration of the test, while the Fusion ioDrive Duo and first-generation WarpDrive slipped below it.

    Latency in the File Server workload followed a similar path for the LSI Nytro WarpDrive as in the throughput section: it started off relatively high in burst terms, but stayed there over the duration of the test. This rock-steady performance let it finish towards the top of the pack, while the others slowed down over the endurance portion of the preconditioning phase.

    With its SLC NAND configuration, our 200GB Nytro WarpDrive remained rather calm over the duration of the File Server preconditioning test, offering some of the lowest latency spikes of the bunch. The first-generation WarpDrive offered similar performance here, as did the Fusion ioDrive Duo, although the latter had many spikes into the 1,000ms range.

    The LSI Nytro WarpDrive easily came out on top in latency standard deviation during the File Server preconditioning test. Aside from a single spike, it was nearly flat at 2ms for the duration of this 6 hour process, proving more consistent than the first-generation WarpDrive.

    Once the preconditioning process finished under the high 16T/16Q load, we looked at File Server performance across a wide range of activity levels. Similar to its showing in the 8K 70/30 workload, the Nytro offered the highest performance at low thread and queue-depth levels. In the File Server workload the OCZ Z-Drive R4 took over that lead at levels above 4T/8Q, where the R4's eight-controller design helped it stretch its legs. Over the remaining portion of the throughput test, the Nytro WarpDrive came in second behind the Z-Drive R4 in Windows.

    With high throughput also comes low average latency, and the LSI Nytro WarpDrive was able to deliver very good response times at lower queue depths, measuring as low as 0.366ms at 2T/2Q. It wasn't the quickest, though: the ioDrive Duo held the top spot at 0.248ms in the same portion of the test. As loads increased, the Nytro WarpDrive came in just under the OCZ Z-Drive R4 while using half as many controllers.

    Comparing the File Server workload max latency between the OCZ Z-Drive R4 and the LSI Nytro WarpDrive, it's easy to see what the advantage of SLC NAND is. Over the duration of the different test loads, the SLC-based Nytro WarpDrive and first-generation WarpDrive both offered some of the lowest peak response times and fewest overall peaks.

    Our latency standard deviation analysis reiterated that the Nytro WarpDrive delivered class-leading performance over the duration of the File Server workload. The one area where responsiveness started to slip was under the 16T/16Q workload, where the Nytro WarpDrive in Linux showed more variation in its latency.

    Our last workload is rather unique in how we analyze the preconditioning phase compared to the main output. As a workload designed with 100% read activity, it's difficult to show each device's true read performance without a proper preconditioning step. To keep the conditioning workload aligned with the testing workload, we inverted the pattern to 100% write. For this reason the preconditioning charts are much more dramatic than the final workload numbers.

    While it didn't quite turn into a case of slow and steady wins the race, the Nytro WarpDrive had the lowest burst throughput (not counting the R4's problematic Linux driver), but as the other devices slowed towards the end of the preconditioning process, it came in second behind the R4 in Windows. This put it ahead of both the ioDrive Duo and the first-generation WarpDrive under the heavy 16T/16Q inverted Web Server workload.

    Average latency of the Nytro WarpDrive in the Web Server preconditioning test stayed flat at 20.9ms over the duration of the test, versus 31ms from the first-generation WarpDrive in the second half of the test.

    In terms of the most responsive PCIe Application Accelerator, the LSI Nytro WarpDrive came out on top with its Windows performance during the Web Server preconditioning test. It kept peak response times under 120ms in Windows, and just above 500ms in Linux.

    With barely a spike in the Web Server preconditioning test, the LSI Nytro WarpDrive impressed again with incredibly low latency standard deviation. In Windows it offered the most consistent performance, finishing ahead of the first-generation WarpDrive. Its Linux performance didn't fare as well, but still came in towards the middle of the pack.

    Switching back to the 100% read Web Server workload after preconditioning, the OCZ Z-Drive R4 offered the highest performance in Windows, but only beyond an effective queue depth of 32. Below that, the Nytro WarpDrive came out on top at lower thread counts with a queue depth above 4. The leader in the low-thread, low-queue-depth arena was still the Fusion ioDrive Duo.

    The LSI Nytro WarpDrive offered impressively low latency in the Web Server workload, measuring as low as 0.267ms in Linux at 2T/2Q. Its highest average response time was 4.5ms in Linux at 16T/16Q. Overall it performed very well, bested only by the OCZ Z-Drive R4 in Windows at higher effective queue depths.

    All of the PCIe Application Accelerators suffered some high latency spikes in the Web Server test, with minimal differences between OS, controller, or NAND type. Overall, Linux was LSI's strong suit for both the Nytro WarpDrive and the first-generation WarpDrive, with fewer latency spikes than in Windows.

    While the peak latency numbers may seem problematic, what really matters is how the device performs over the entire duration of the test. This is where latency standard deviation comes into play, measuring how consistent latency was overall. While the LSI Nytro WarpDrive in Windows had more frequent spikes than in Linux, it posted a lower standard deviation in Windows at higher effective queue depths.

    Conclusion

    The LSI Nytro WarpDrive WLP4-200 represents a solid step forward for LSI's application acceleration line. It's generally quicker in most areas than the prior-generation SLP-300, thanks to the updated SandForce SF-2500 controller and improved firmware used this time around. Structurally it's simpler as well, dropping from six drives in RAID0 to four. LSI has also added a range of capacity and NAND options for the Nytro WarpDrive line, giving buyers choices from 200GB in SLC up to 1.6TB in eMLC. Overall the offering is more complete and well-rounded, with a flexibility that should broaden market adoption for the Nytro WarpDrive family at large.

    A big selling point for LSI is the compatibility of its products at both the hardware and OS level. We noted strong performance from the Nytro WarpDrive in both our Windows and Linux tests. The Windows driver set was definitely more polished, offering much higher performance in some areas. While the ioDrive Duo also showed very good multi-OS support, the same cannot be said of OCZ's Z-Drive R4, which had a gigantic gap in performance between its Windows and Linux drivers.

    When it comes to management, LSI offers software tools to check the health and handle basic commands for most major operating systems. Their CLI WarpDrive Management Utility is basic, but still gets the job done when it comes to formatting or over-provisioning the drive. The software suite is certainly a bit spartan, but even these tools are appreciated as some in the PCIe storage space don't offer much of anything when it comes to drive management. 

    The most surprising aspect of the LSI Nytro WarpDrive is its behavior in our enterprise workloads. Compared to other PCIe Application Accelerators we've tested, its burst performance wasn't the most impressive, but the fact that it remained rock solid over the duration of our tests was. What it lacked in speed off the line, it more than made up for in consistent latency, with incredibly low standard deviation under load. For enterprise applications that demand a narrow window of acceptable response times under load, low max latency and low standard deviation separate the leaders from the rest. It's also important to remember that SandForce-based drives have compression benefits that aren't highlighted in this type of workload testing. For this reason, and to show an even more complete profile of enterprise drive performance, StorageReview is currently building out a robust set of application-level benchmarks that may reveal further differences between enterprise storage products.

    Pros

  • Increased performance while reducing controller count
  • Industry leading host system compatibility
  • More NAND and capacity options than previous-generation WarpDrive
  • Incredibly consistent latency under stress
    Cons

  • Limited software tools for drive management
  • Weaker burst performance (excellent steady-state performance)
    Bottom Line

    The LSI Nytro WarpDrive WLP4-200 is a solid PCIe application accelerator that will win over enterprise customers with its excellent steady-state performance, consistency across a variety of uses, and class-leading host-system compatibility. LSI did a good job with the Nytro WarpDrive from hardware design to smooth operation, with our main complaints centering on the drive management tools. While it doesn't burst out of the gate as fast as others, that's usually not terribly important to the enterprise, and there's something to be said for a drive that works well out of the box, and keeps working well, in just about any operating system.



