
006-002 dumps with Real exam Questions and Practice Test

Great place to download 100% free 006-002 braindumps, real exam questions and a practice test with VCE exam simulator to ensure your success in the 006-002 exam.

Pass4sure 006-002 dumps | 006-002 real questions

006-002 Certified MySQL 5.0 DBA Part II

Study Guide Prepared by MySQL Dumps Experts: 006-002 Dumps and Real Questions

100% Real Questions - Exam Pass Guarantee with High Marks - Just Memorize the Answers

006-002 exam Dumps Source: Certified MySQL 5.0 DBA Part II

Test Code : 006-002
Test Name : Certified MySQL 5.0 DBA Part II
Vendor Name : MySQL
Questions : 140 Real Questions

A terrific idea: prepare with 006-002 real exam questions.
In the end, my score of 90% was more than I had hoped for. When the 006-002 exam was only one week away, my preparation was in a haphazard state, and I expected I would have to retake it if I failed to reach 80%. Following a colleague's advice, I bought the study material and was able to work through the well-organized content at an easy pace.

Surprised to see these 006-002 dumps!
It is a very useful platform for working professionals like us to practice questions and answers anywhere. I am very grateful to you people for creating such good practice questions, which were very helpful to me in the final days before the exam. I scored 88% in the 006-002 exam, and the revision practice tests helped me a lot. My suggestion: please develop an Android app so that people like us can also practice the tests while travelling.

006-002 actual question bank is real study material, with authentic results.
Whenever I need to pass a certification test to keep my job, I go straight to the site, search for the required certification test, purchase it and prepare for the test. It is truly worth admiring, because I always pass the test with good scores.

No time wasted searching the internet! Found a genuine source of 006-002 material.
This is entirely the site's achievement, not mine: a very user-friendly 006-002 exam simulator and authentic 006-002 questions and answers.

006-002 test prep made far easier with these dumps.
I got an excellent result with this package. The quality is outstanding, the questions are accurate, and I got most of them in the exam. After I passed it, I recommended it to my colleagues, and everyone passed their tests too (some of them took Cisco tests, others Microsoft, VMware, and so on). I have not heard a bad review of it, so this must be the best IT training you can currently find online.

What are the benefits of 006-002 certification?
Nowadays I am very glad, because I got a very high score in my 006-002 exam. I couldn't believe I would be able to do it, but this made me think otherwise. The online educators are doing their job very well, and I salute them for their dedication and devotion.

Can I get up-to-date dumps with real Q&A for the 006-002 examination?
I passed, and I am genuinely delighted to report that they adhere to the claims they make. They provide real exam questions, and the testing engine works flawlessly. The bundle includes everything they promise, and their customer support works well (I had to get in touch with them because my online payment would not go through at first, but it turned out to be my fault). Anyhow, this is a great product, much better than I had expected. I passed the 006-002 exam with nearly top marks, something I never thought I was capable of. Thank you.

I focused all my efforts on the net and found the killexams 006-002 actual exam bank.
I have never used such wonderful dumps for my studies. They worked well for the 006-002 exam: I used the material and passed my 006-002 exam. It is flexible material to work with. Even though I was a below-average candidate, it got me through the exam too. I used only this for studying and never used any other material, and I will keep using your products for my future exams. I got 98%.

Take advantage of these 006-002 dumps; use these questions to ensure your success.
Hearty thanks to the team for the questions and answers for the 006-002 exam. They gave me the confidence to face the test, and I found many questions in the exam paper much like those in the guide. I strongly feel that the guide is still valid. I respect the effort made by your team members; their way of handling topics in a very specific and uncommon manner is terrific. I hope you people create more such study guides in the near future for our convenience.

Passing the 006-002 examination isn't enough; having that expertise is needed.
Asking my father to help me with anything is like walking into big trouble, and I simply didn't want to disturb him during my 006-002 preparation. I knew someone else had to help me; I just didn't know who it might be until one of my cousins told me about this site. It was like a wonderful gift to me, because it was extremely useful and beneficial for my 006-002 test preparation. I owe my great marks to the people working on it, because their dedication made it possible.

MySQL Certified MySQL 5.0 DBA

Get MySQL certified | Real Questions and Pass4sure dumps

Register to get MySQL certified at the 2008 MySQL Conference & Expo. Certification exams are being offered only at the conference, at a discounted rate of $25 ($175 value). Space is limited, and only pre-registered exams are guaranteed a seat at the conference, so register now. For answers to frequently asked questions, consult the Certification FAQ.

Important exam information
  • Exams will be offered Tuesday, Wednesday and Thursday.
  • Exams will be conducted at 10:30 am and at 1:40 pm and will last 90 minutes.
  • You must be registered as a Session or Session-plus-Tutorials conference attendee. Exams are not offered to tutorial-only or exhibit-hall-only registrants, or to conference attendee guests.
  • 10:30 am - 12:00 pm

  • CMDBA: Certified DBA I
  • CMDBA: Certified DBA II
  • CMDEV: Certified Developer I
  • CMDEV: Certified Developer II
  • CMCDBA: MySQL 5.1 Cluster DBA Certification
  • 1:40 pm - 3:10 pm

  • CMDBA: Certified DBA I
  • CMDBA: Certified DBA II
  • CMDEV: Certified Developer I
  • CMDEV: Certified Developer II
  • CMCDBA: MySQL 5.1 Cluster DBA Certification
  • Note: a special exam Q&A session will be held in the Magnolia Room, Tuesday from 1:00 pm - 1:30 pm.

    CMDEV: MySQL 5.0 Developer I & II

    The MySQL 5.0 Developer Certification ensures that the candidate knows, and is able to make use of, all the features of MySQL that are necessary to develop and maintain applications that use MySQL for back-end storage. Note that you must pass both of the Developer exams (in either order) to obtain certification.

    CMDBA: MySQL 5.0 Database Administrator I & II

    The MySQL Database Administrator Certification attests that the person holding the certification knows how to maintain and optimize an installation of one or more MySQL servers, and to perform administrative tasks such as monitoring the server, making backups, and so on. Note that although you may take the CMCDBA exam at any time, you must pass both of the DBA exams (in either order) to obtain certification.

    CMCDBA: MySQL 5.1 Cluster DBA Certification

    The MySQL Cluster Database Administrator certification exam can also be administered at the conference. Note that you must attain CMDBA certification before a CMCDBA certification is recognized.

    Note: CMDBA and CMCDBA certification primers are being offered as tutorials during the MySQL Conference & Expo.


    Certification exams are open to conference attendees registered to attend sessions. Exams are not available to exhibit-hall-only participants or the general public.


    Online registration for the exams is available. If you register for the exams along with your conference registration, exam fees will be added to your total conference registration charges. Subject to availability, you may also register and pay for exams on-site. Note that only exams paid for during conference registration are guaranteed a seat. Vouchers for exams will be handed to you when you register at the conference and are redeemed at the testing room.

    Location and Time

    All exams will be administered in the Magnolia Room on the lobby level of the Hyatt Regency Santa Clara (adjacent to the convention center). Exams will be offered Tuesday, Wednesday and Thursday, will be conducted only at 10:30 am and at 1:40 pm, and will last 90 minutes.


    Results of certification exams will be posted outside the testing room following each exam session and sent to you by postal mail immediately following the conference.

    Re-examination policy

    Full conference attendees may choose to re-take any exams not passed for a $25 fee. There is no limit to the number of times an exam can be taken. Re-exams are only offered at the conference and may be purchased at the registration desk. Only cash or checks will be accepted onsite.

    Registering for exams

    In order to attend an exam, you must bring:

  • Payment voucher (obtained at the registration desk)
  • Photo identification
  • MySQL Certification Candidate ID number. If you do not already have a Certification Candidate ID number from past exams, you must obtain one at

  • Access MySQL Database With PHP | Real Questions and Pass4sure dumps


    Access MySQL Database With PHP

    Use the PHP extension for MySQL to access data from the MySQL database.

  • By Deepak Vohra
  • 06/20/2007
  • The MySQL database is the most widely used open source relational database. It supports numerous data types in these categories: numeric, date and time, and string. The numeric data types include BIT, TINYINT, BOOL, BOOLEAN, INT, INTEGER, BIGINT, DOUBLE, FLOAT and DECIMAL. The date and time data types include DATE, DATETIME, TIMESTAMP and YEAR. The string data types include CHAR, VARCHAR, BINARY, ASCII, UNICODE, TEXT and BLOB. In this article, you will learn how to access these data types with the PHP scripting language, taking advantage of PHP 5's extension for the MySQL database.

    Install MySQL Database

    To install the MySQL database, you must first download the Community edition of the MySQL 5.0 database for Windows. There are three versions: Windows Essentials (x86), Windows (x86) ZIP/Setup.EXE and Without installer (unzip in C:\). To install the Without installer version, unzip the zip file to a directory. If you've downloaded the Windows (x86) ZIP/Setup.EXE version, extract the zip file to a directory. (See Resources.)

    Next, double-click the Setup.exe application. You will start the MySQL Server 5.0 Setup wizard. In the wizard, select the Setup type (the default setting is Typical), and click Install to install MySQL 5.0.

    In the Sign-Up frame, create a MySQL account, or select Skip Sign-Up. Select "Configure the MySQL Server now" and click Finish. You will start the MySQL Server Instance Configuration wizard. Set the configuration type to Detailed Configuration (the default setting).

    If you are not familiar with the MySQL database, choose the default settings in the subsequent frames. By default, server type is set to Developer Machine and database usage is set to Multifunctional Database. Select the drive and directory for the InnoDB tablespace. In the concurrent connections frame, choose the DSS/OLAP setting. Next, select the Enable TCP/IP Networking and Enable Strict Mode settings and use the 3306 port. Choose the Standard Character Set setting and the Install As Windows Service setting with MySQL as the service name.

    In the Security Options frame, you can specify a password for the root user (by default, the root user does not require a password). Next, uncheck Modify Security Settings and click Execute to configure a MySQL Server instance. Finally, click Finish.
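The wizard records these choices in a my.ini configuration file. Purely illustrative entries corresponding roughly to the settings above (the exact keys and values vary by MySQL version and by the choices made in the wizard) might look like:

```ini
[mysqld]
; Port selected in the networking frame
port=3306
; Default engine chosen during configuration
default-storage-engine=INNODB
; Written when "Enable Strict Mode" is selected
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
```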

    If you've downloaded the Windows Installer package, double-click the mysql-essential-5.0.x-win32.exe file. You will start the MySQL Server Setup wizard. Follow the same procedure as for Setup.exe.

    After you have finished installing the MySQL database, log into the database with the mysql command. In a command prompt window, specify this command:

    >mysql -u root

    The default user root will be logged in; a password is not required for the default user root. The general form of the login command is:

    >mysql -u <username> -p <password>

    The mysql command will display:


    To list the databases on the MySQL server, specify this command:

    mysql>show databases;

    By default, the test database will be listed. To use this database, specify this command:

    mysql>use test

    Install MySQL PHP Extension

    The PHP extension for the MySQL database is packaged with the PHP 5 download (see Resources). First, you must activate the MySQL extension in the php.ini configuration file. Remove the ';' before this code line in the file:
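The directive itself is not reproduced above; in a default PHP 5 php.ini on Windows it is typically the following line (the exact filename can vary by build):

```ini
; Before (extension disabled):
;extension=php_mysql.dll

; After (extension enabled):
extension=php_mysql.dll
```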


    Next, restart the Apache2 web server.

    PHP also requires access to the MySQL client library. The libmysql.dll file is included with the PHP 5 distribution. Add libmysql.dll to the Windows system PATH variable. The libmysql.dll file is in the C:/php directory, which you added to the system path when you installed PHP 5.

    The MySQL extension provides various configuration directives for connecting to the database. The default connection parameters are used to establish a connection with the MySQL database if a connection is not specified in a function that requires a connection resource and a connection has not already been opened with the database.

    The PHP class library for MySQL has various functions to connect to the database, create database tables and retrieve database data.

    Create a MySQL Database Table

    Now it's time to create a table in the MySQL database using the PHP class library. Create a PHP script named createMySQLTable.php in the C:/Apache2/Apache2/htdocs directory. In the script, specify variables for the username and password, and connect to the database using the mysql_connect() function. The username root does not require a password. Next, specify the server parameter of the mysql_connect() function as localhost:3306:

    $username='root'; $password=''; $connection = mysql_connect ('localhost:3306', $username, $password);

    If a connection is not established, output an error message using the mysql_error() function:

    if (!$connection) {
        $e = mysql_error();
        echo "Error in connecting to MySQL Database.".$e;
    }

    You will need to select the database in which the table is to be created. Select the MySQL test database instance using the mysql_select_db() function:

    $selectdb=mysql_select_db('test');

    Next, specify a SQL statement to create a database table:

    $sql="CREATE TABLE Catalog (CatalogId VARCHAR(25) PRIMARY KEY, Journal VARCHAR(25), Publisher VARCHAR(25), Edition VARCHAR(25), Title VARCHAR(75), Author VARCHAR(25))";

    Run the SQL statement using the mysql_query() function. The connection resource that you created earlier is used to run the SQL statement:

    $createtable=mysql_query ($sql, $connection );

    If the table is not created, output this error message:

    if (!$createtable) {
        $e = mysql_error($connection);
        echo "Error in creating table.".$e;
    }

    Next, add data to the Catalog table. Create a SQL statement to add a row to the database:

    $sql = "INSERT INTO Catalog VALUES('catalog1', 'Oracle Magazine', 'Oracle Publishing', 'July-August 2005', 'Tuning Undo Tablespace', 'Kimberly Floss')";

    Run the SQL statement using the mysql_query() function:

    $addrow=mysql_query ($sql, $connection );

    Similarly, add another table row. Use the createMySQLTable.php script shown in Listing 1. Run this script on the Apache web server with this URL: http://localhost/createMySQLTable.php. A MySQL table will be created (Figure 1).
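Listing 1 is not reproduced here; assembled from the snippets above, a minimal sketch of what createMySQLTable.php might look like (assuming PHP 5 with the mysql extension and a MySQL server on localhost:3306; the Publisher and Author column names are inferred from the INSERT values) is:

```php
<?php
// Sketch of createMySQLTable.php, assembled from the steps above.
$username = 'root';
$password = '';
$connection = mysql_connect('localhost:3306', $username, $password);
if (!$connection) {
    die("Error in connecting to MySQL Database." . mysql_error());
}
mysql_select_db('test');

// Create the Catalog table.
$sql = "CREATE TABLE Catalog (CatalogId VARCHAR(25) PRIMARY KEY,
        Journal VARCHAR(25), Publisher VARCHAR(25), Edition VARCHAR(25),
        Title VARCHAR(75), Author VARCHAR(25))";
if (!mysql_query($sql, $connection)) {
    die("Error in creating table." . mysql_error($connection));
}

// Add a row of data.
$sql = "INSERT INTO Catalog VALUES('catalog1', 'Oracle Magazine',
        'Oracle Publishing', 'July-August 2005',
        'Tuning Undo Tablespace', 'Kimberly Floss')";
mysql_query($sql, $connection);

echo 'Catalog table created.';
mysql_close($connection);
?>
```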

    Retrieve Data From the MySQL Database

    You can retrieve data from the MySQL database using the PHP class library for MySQL. Create the retrieveMySQLData.php script in the C:/Apache2/Apache2/htdocs directory. In the script, create a connection with the MySQL database using the mysql_connect() function:

    $username='root'; $password=''; $connection = mysql_connect ('localhost:3306', $username, $password);

    Select the database from which data is to be retrieved with the mysql_select_db() function:

    $selectdb=mysql_select_db('test');

    Next, specify the SELECT statement to query the database. (The PHP class library for MySQL does not have the facility to bind variables, as the PHP class library for Oracle does.)

    $sql = "SELECT * FROM CATALOG";

    Run the SQL query using the mysql_query() function:

    $result=mysql_query($sql , $connection);

    If the SQL query does not run, output this error message:

    if (!$result) {
        $e = mysql_error($connection);
        echo "Error in running SQL statement.".$e;
    }

    Use the mysql_num_rows() function to obtain the number of rows in the result resource:
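The code line itself is omitted above; it is presumably along these lines (the variable name is illustrative, assuming the query result is stored in $result):

```php
$num_rows = mysql_num_rows($result);
```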


    If the number of rows is greater than 0, create an HTML table to display the result data. Iterate over the result set using the mysql_fetch_array() function to obtain a row of data. To obtain an associative array for each row, set the result_type parameter to MYSQL_ASSOC:

    while ($row = mysql_fetch_array ($result, MYSQL_ASSOC))

    Output the row data to an HTML table using associative dereferencing. For example, the Journal column value is obtained with $row['Journal']. The retrieveMySQLData.php script retrieves data from the MySQL database (Listing 2).

    Run the PHP script on the Apache2 server with this URL: http://localhost/retrieveMySQLData.php. An HTML table will appear with data obtained from the MySQL database (Figure 2).
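Listing 2 is likewise not reproduced; a sketch of retrieveMySQLData.php assembled from the snippets above (same assumptions as before, with the HTML table columns chosen for illustration) might be:

```php
<?php
// Sketch of retrieveMySQLData.php, assembled from the steps above.
$username = 'root';
$password = '';
$connection = mysql_connect('localhost:3306', $username, $password);
if (!$connection) {
    die("Error in connecting to MySQL Database." . mysql_error());
}
mysql_select_db('test');

$sql = "SELECT * FROM CATALOG";
$result = mysql_query($sql, $connection);
if (!$result) {
    die("Error in running SQL statement." . mysql_error($connection));
}

if (mysql_num_rows($result) > 0) {
    // Output an HTML table, dereferencing columns associatively.
    echo '<table border="1"><tr><th>Journal</th><th>Title</th><th>Author</th></tr>';
    while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
        echo '<tr><td>' . $row['Journal'] . '</td><td>'
           . $row['Title'] . '</td><td>' . $row['Author'] . '</td></tr>';
    }
    echo '</table>';
}
mysql_close($connection);
?>
```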

    Now you know how to use the PHP extension for MySQL to access data from the MySQL database. You can also use the PHP Data Objects (PDO) extension and the MySQL PDO driver to access MySQL with PHP.
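For comparison, the PDO route mentioned above looks roughly like this (a sketch only; the DSN values are assumptions matching the setup in this article). Unlike the mysql_* functions, PDO supports bound parameters:

```php
<?php
// Equivalent retrieval using the PDO extension with the MySQL driver.
$pdo = new PDO('mysql:host=localhost;port=3306;dbname=test', 'root', '');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// A prepared statement with a bound parameter.
$stmt = $pdo->prepare('SELECT Journal, Title, Author FROM Catalog WHERE CatalogId = ?');
$stmt->execute(array('catalog1'));
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    echo $row['Journal'] . ': ' . $row['Title'] . "\n";
}
?>
```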

    About the Author

    Deepak Vohra is a web developer, a Sun-certified Java programmer and a Sun-certified Web component developer. He has published numerous articles in trade publications and journals, and is the author of the book "Ruby on Rails for PHP and Java Developers." You can reach him at

    MySQL 5.0: To plug or not to plug? | Real Questions and Pass4sure dumps

    Open source database vendor MySQL AB has released the latest version of its signature database management system, MySQL 5.0, with new pluggable storage engines: swappable components that offer the ability to add or remove storage engines from a live MySQL server. Site expert Mike Hillyer explains how MySQL customers can benefit from the new pluggable storage engines.

    Hillyer, the webmaster of a popular site for people who run MySQL on top of Windows, currently holds a MySQL Professional Certification and is a MySQL expert at

    What exactly do pluggable storage engines deliver to MySQL that wasn't available in previous versions?

    Mike Hillyer: Pluggable storage engines bring the ability to add and remove storage engines on a running MySQL server. Prior to the introduction of the pluggable storage engine architecture, users were required to stop and reconfigure the server when adding and removing storage engines. Using third-party or in-house storage engines required additional effort.

    If you were speaking to a database administrator (DBA) not familiar with MySQL, how would you describe the value of the new pluggable storage engines?

    Hillyer: Many database management systems use a 'one-size-fits-all' approach to data storage: all table data is handled the same way, regardless of what the data is or how it is accessed. MySQL took a different approach early on and implemented the concept of storage engines: distinct storage subsystems that are specialized for different use cases.

    MyISAM tables are well suited to read-heavy applications such as web sites. InnoDB supports higher read/write concurrency. The new Archive storage engine is designed for logging and archival data. The NDB storage engine offers very high performance and availability.
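The engine is chosen per table; a short SQL sketch of what that selection looks like (the table definitions are purely illustrative):

```sql
-- Read-heavy web data: MyISAM
CREATE TABLE page_hits (url VARCHAR(255), hits INT) ENGINE = MyISAM;

-- Transactional data with concurrent reads and writes: InnoDB
CREATE TABLE accounts (id INT PRIMARY KEY, balance DECIMAL(10,2)) ENGINE = InnoDB;

-- Compressed, append-oriented log data: Archive
CREATE TABLE access_log (logged_at DATETIME, message TEXT) ENGINE = ARCHIVE;
```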

    One benefit of this design is that our customers have been able to make migrating from a legacy system to a SQL DBMS easier by wrapping their legacy storage in a MySQL storage engine, allowing them to issue SQL queries against their legacy systems without abandoning their old systems.

    Pluggable seems to suggest that they are used in certain circumstances, or not at all, depending on the administrator's needs. Could you explain how some of the more important engines (of the nine) help a MySQL DBA?

    Hillyer: Here are a few examples:

    The new Archive engine is great for storing log data because it uses gzip compression and shows excellent performance for inserts and reads, with concurrency support. This means an administrator can save on storage and processing costs for logging and archival data.

    The new Blackhole engine is interesting in that it takes all INSERT, UPDATE and DELETE statements and drops them; it literally holds no data. That may seem odd at first, but it works well for enabling a replication master to handle writes without using any storage, because the statements still get written to the binary log and passed on to the slaves.

    Thanks to the new pluggable architecture, these storage engines can be loaded into the server when needed, and unloaded when not being used.
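Under the MySQL 5.1 plugin interface, that load/unload cycle is done with SQL statements along these lines (the plugin and library names are illustrative):

```sql
-- Load a storage engine plugin into a running server ...
INSTALL PLUGIN example SONAME 'ha_example.so';

-- ... and remove it again when it is no longer needed.
UNINSTALL PLUGIN example;
```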

    Are any of the nine modules something that has already been part of database technology in the past? How does their inclusion in the MySQL server make the product more robust?

    Hillyer: Most of these storage engines have been in place for quite some time, namely MyISAM, InnoDB, BDB, MEMORY and MERGE. They are quite mature and used by most of our customers. The NDB storage engine is new to MySQL, but it is an existing technology that has been in development for over 10 years.

    The NDB storage engine is an example of a storage engine that has contributed to making MySQL more robust, by enabling five nines of availability when properly implemented.

    Are there any concerns with MySQL that these pluggable storage engines do not address? How important is it that additional modules are released in future versions?

    Hillyer: There will always be needs of certain customers that the existing storage engines will not address, but the new pluggable approach means that it will be increasingly straightforward to write custom storage engines against a defined API [application programming interface] and plug them in.

    As these engines are written, it will be exciting to see the innovation that comes from the community, and I look forward to trying some of those community-provided storage engines.

    While it is a very hard task to choose a reliable exam questions and answers resource with respect to review, reputation and validity, people get ripped off by choosing the wrong service. killexams.com makes certain to provide its clients far better resources with respect to exam dumps update and validity. Clients who have been ripped off elsewhere come to us for the brain dumps and pass their exams enjoyably and easily. We never compromise on our review, reputation and quality, because client self-confidence is important to all of us. If you see any bogus report posted by a competitor under a name such as "killexams ripoff report complaint", "ripoff report", "scam" or "complaint", just keep in mind that there are always bad people damaging the reputation of good services for their own benefit. There are a large number of satisfied customers who pass their exams using killexams brain dumps, PDF questions, practice questions and the exam simulator. Visit our test questions and sample brain dumps, try the exam simulator, and you will know that this is the best brain dumps site.


    Pass4sure 006-002 Dumps and Practice Tests with Real Questions
    We are well aware that a major issue in the IT business is the lack of quality study materials. Our exam prep material gives you everything you need to take a certification exam. Our mySQL 006-002 exam will give you exam questions with verified answers that mirror the real exam: high quality and value for the 006-002 exam. We at killexams.com are determined to enable you to pass your 006-002 exam with high scores.

    We have our specialists working continuously on gathering real test questions for 006-002. All the pass4sure questions and answers for 006-002 collected by our team are verified and updated by our mySQL certified team. We stay connected to candidates who appeared in the 006-002 exam to get their reviews of it; we collect 006-002 exam tips and tricks, their experience of the techniques used in the real 006-002 exam and the mistakes they made in it, and then improve our braindumps accordingly. Once you go through our pass4sure questions and answers, you will feel confident about all the topics of the exam and feel that your knowledge has been greatly improved. These questions and answers are not simply practice questions; they are real test questions and answers that are enough to pass the 006-002 exam on the first attempt. If you are interested in successfully passing the mySQL 006-002 exam to begin earning, killexams.com has leading-edge Certified MySQL 5.0 DBA Part II test questions that will make sure you pass this 006-002 exam! killexams.com delivers the most accurate, current and latest updated 006-002 exam questions, available with a 100 percent refund guarantee. There are several firms that offer 006-002 brain dumps, but those are not accurate and latest ones. Preparation with 006-002 new questions is the best way to pass this certification exam easily. Discount coupons and promo codes are as follows:
    WC2017 : 60% Discount Coupon for all exams on website
    PROF17 : 10% Discount Coupon for Orders larger than $69
    DEAL17 : 15% Discount Coupon for Orders larger than $99
    SEPSPECIAL : 10% Special Discount Coupon for All Orders

    Quality and Value for the 006-002 Exam: practice exams for mySQL 006-002 are made to the highest standards of technical accuracy, using only certified subject matter experts and published authors for development.

    100% Guarantee to Pass Your 006-002 Exam: if you do not pass the mySQL 006-002 exam using our testing software and PDF, we will give you a FULL REFUND of your purchase fee.

    Downloadable, Interactive 006-002 Testing Software: our mySQL 006-002 preparation material gives you everything you need to take the mySQL 006-002 exam. Details are researched and produced by mySQL Certification Experts who continuously draw on industry experience to produce accurate and authentic material.

    - Comprehensive questions and answers for the 006-002 exam
    - 006-002 exam questions accompanied by exhibits
    - Answers verified by experts and nearly 100% correct
    - 006-002 exam questions updated on a regular basis
    - 006-002 exam preparation in multiple-choice questions (MCQs)
    - Tested by multiple candidates before publication
    - Try the free 006-002 exam demo before you choose to buy it

    Huge discount coupons and promo codes are as follows:
    WC2017: 60% Discount Coupon for all exams on website
    PROF17: 10% Discount Coupon for Orders greater than $69
    DEAL17: 15% Discount Coupon for Orders greater than $99
    DECSPECIAL: 10% Special Discount Coupon for All Orders

    006-002 | 006-002 | 006-002 | 006-002 | 006-002 | 006-002

    Killexams C4040-332 questions and answers | Killexams 1Z0-342 real questions | Killexams GB0-363 sample test | Killexams 00M-670 dumps questions | Killexams 000-807 practice questions | Killexams JN0-130 mock exam | Killexams 1Z0-412 exam prep | Killexams HP0-M55 braindumps | Killexams WPT-R cram | Killexams DP-021W test prep | Killexams EX0-101 practice test | Killexams A2040-921 study guide | Killexams TB0-123 free pdf | Killexams 000-417 questions and answers | Killexams ITEC-Massage questions answers | Killexams 310-230 practice questions | Killexams 6006-1 exam questions | Killexams A2180-178 braindumps | Killexams 1Z0-517 pdf download | Killexams 9L0-507 test prep | huge List of Exam Braindumps

    View Complete list of Brain dumps

    Killexams 920-245 exam prep | Killexams 1Z0-403 dumps questions | Killexams 1Z0-540 questions and answers | Killexams HP0-063 braindumps | Killexams M2080-663 braindumps | Killexams 000-M93 braindumps | Killexams M6040-419 study guide | Killexams C4040-250 examcollection | Killexams C2070-448 exam questions | Killexams 1Z0-441 study guide | Killexams 000-611 practice test | Killexams 310-056 braindumps | Killexams 000-152 cram | Killexams RCDD-001 practice test | Killexams 9A0-384 questions answers | Killexams 190-980 Practice test | Killexams 000-910 study guide | Killexams HP2-Z28 dumps | Killexams OAT test prep | Killexams 050-649 free pdf |


    Indian Bank Recruitment 2018: Apply online for 145 Specialist Officer posts

    NEW DELHI: The Indian Bank, a leading public sector bank, has invited applications for Specialist Officer (SO) posts of Assistant General Manager, Assistant Manager, Manager, Senior Manager, and other posts.

    Eligible candidates can apply online through the bank's official website from April 10, 2018 to May 2, 2018.

    Direct link to apply online:



    Official website:

    Important Dates
    Starting Date to Apply Online: April 10, 2018
    Closing Date to Apply Online: May 2, 2018
    Last date for submission of Application Fee: May 2, 2018

    Vacancy Details

    Positions in Information Technology Department / Digital Banking Department

    Post Code | Post | Role / Domain | Scale | Vacancy
    1 | Assistant General Manager | System Administrator - AIX, HP-UX, Linux, Windows | V | 1
    2 | Chief Manager | DBA - Oracle, MySQL, SQL-Server, DB2 | IV | 2
    3 | Manager | DBA - Oracle, MySQL, SQL-Server, DB2 | II | 2
    4 | Chief Manager | System Administrator - AIX, HP-UX, Linux, Windows | IV | 1
    5 | Manager | System Administrator - AIX, HP-UX, Linux, Windows | II | 2
    6 | Senior Manager | Middleware Administrator - Weblogic, Websphere, JBOSS, Tomcat, Apache, IIS | III | 2
    7 | Chief Manager | Application Architect | IV | 1
    8 | Manager | Application Architect | II | 1
    9 | Chief Manager | Big Data, Analytics, CRM | IV | 1
    10 | Senior Manager | Big Data, Analytics, CRM | III | 1
    11 | Chief Manager | IT Security Specialist | IV | 1
    12 | Manager | IT Security Specialist | II | 2
    13 | Chief Manager | Software Testing Specialist | IV | 1
    14 | Manager | Software Testing Specialist | II | 2
    15 | Chief Manager | Network Specialist | IV | 1
    16 | Senior Manager | Network Specialist | III | 1
    17 | Manager | Virtualisation Specialist for VMware, Microsoft Hypervisor, RHEL (Red Hat Enterprise Linux) | II | 2
    18 | Senior Manager | Project Architect | III | 1
    19 | Senior Manager | Data Centre Management | III | 1
    20 | Manager | Network Administrator | II | 2
    21 | Chief Manager | Cyber Security Specialist | IV | 1
    22 | Senior Manager | Cyber Security Specialist | III | 2
    Total: 31

    Positions in Information Systems Security Cell

    Post Code | Post | Role / Domain | Scale | Vacancy
    23 | Senior Manager | Senior Information Security Manager | III | 1
    24 | Manager | Information Security Administrator | II | 3
    25 | Manager | Cyber Forensic Analyst | II | 1
    26 | Manager | Certified Ethical Hacker & Penetration Tester | II | 1
    27 | Assistant Manager | Application Security Tester | I | 1
    Total: 7

    Positions in Treasury Department

    Post Code | Post | Role / Domain | Scale | Vacancy
    28 | Senior Manager | Regulatory Compliance | III | 1
    29 | Senior Manager | Research Analyst | III | 1
    30 | Senior Manager | Fixed Income Dealer | III | 2
    31 | Manager | Equity Dealer | II | 1
    32 | Senior Manager | Forex Derivative Dealer | III | 1
    33 | Senior Manager | Forex Global Markets Dealer | III | 1
    34 | Manager | Forex Dealer | II | 1
    35 | Senior Manager | Relationship Manager - Trade Finance and Forex | III | 3
    36 | Senior Manager | Business Research Analyst - Trade Finance and Forex | III | 1
    37 | Senior Manager | Credit Analyst - Corporates | III | 1
    Total: 13

    Positions in Security Department

    Post Code | Post | Role / Domain | Scale | Vacancy
    40 | Manager | Security Officer | II | 25

    Positions in Credit

    Post Code | Post | Role / Domain | Scale | Vacancy
    41 | Senior Manager | Credit | III | 20
    42 | Manager | Credit | II | 30
    Total: 50

    Positions in Planning and Development Department

    Post Code | Post | Role / Domain | Scale | Vacancy
    43 | Manager | Statistician | II | 1
    44 | Assistant Manager | Statistician | I | 1
    Total: 2

    Positions in Premises and Expenditure Department

    Post Code | Post | Role / Domain | Scale | Vacancy
    45 | Manager | Electrical | II | 2
    46 | Manager | Civil | II | 2
    47 | Assistant Manager | Civil | I | 6
    48 | Assistant Manager | Architect | I | 1
    Total: 11

    Reservation

    Scale | Total | SC | ST | OBC | UR | OC | VI | HI | ID
    V | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0
    IV | 9 | 2 | 0 | 2 | 5 | 0 | 0 | 0 | 0
    III | 42 | 6 | 3 | 11 | 22 | 1 | 0 | 1 | 0
    II | 84 | 12 | 6 | 22 | 44 | 0 | 1 | 1 | 1
    I | 9 | 1 | 0 | 2 | 6 | 1 | 0 | 0 | 0

    Pay Scale and Emoluments

    Scale I: 23700 - 980 - 30560 - 1145 - 32850 - 1310 - 42020
    Scale II: 31705 - 1145 - 32850 - 1310 - 45950
    Scale III: 42020 - 1310 - 48570 - 1460 - 51490
    Scale IV: 50030 - 1460 - 55870 - 1650 - 59170
    Scale V: 59170 - 1650 - 62470 - 1800 - 66070

    Age Limit (as on January 1, 2018)

    Post | Age Limit
    Assistant General Manager | 30 to 45 years
    Manager (all other) | 23 to 35 years
    Manager (Equity Dealer, Forex Dealer, Risk Management, Security Officer, Credit, Statistician) | 25 to 35 years
    Senior Manager (all other) | 25 to 38 years
    Senior Manager (Regulatory Compliance, Research Analyst, Fixed Income Dealer, Forex Derivative Dealer, Forex Global Markets Dealer, Relationship Manager - Trade Finance and Forex, Business Research Analyst - Trade Finance and Forex, Risk Management) | 27 to 38 years
    Chief Manager | 27 to 40 years
    Assistant Manager | 20 to 30 years

    Age Relaxation

    Category | Age Relaxation
    SC/ ST | 5 years
    OBC (Non-Creamy Layer) | 3 years
    Ex-Servicemen | 5 years
    Persons ordinarily domiciled in the state of Jammu & Kashmir during the period January 1, 1980 to December 31, 1989 | 5 years
    Persons affected by the 1984 riots | 5 years

    Qualification

    Educational Qualification (for Post Codes 1 to 22):
    a) 4-year Engineering/ Technology Degree in Computer Science/ Computer Applications/ Information Technology/ Electronics/ Electronics & Telecommunications/ Electronics & Communication/ Electronics & Instrumentation, OR
    b) Post Graduate Degree in Electronics/ Electronics & Tele Communication/ Electronics & Communication/ Electronics & Instrumentation/ Computer Science/ Information Technology/ Computer Applications, OR
    c) Graduate having passed DOEACC 'B' level

    Post Code | Additional Qualification | Experience
    1 | Professional level certification in System Administration | 10 years experience in maintenance and administration of Operating Systems, Databases, Backup Management and Data Centre Management
    2 | Professional level certification in Database Administration | 7 years experience in maintenance and administration of databases like Oracle/ DB2/ MySQL/ SQL Server
    3 | Associate level certification in Database Administration | 3 years experience in maintenance and administration of databases like Oracle/ DB2/ MySQL/ SQL Server
    4 | Professional level certification in System Administration | 7 years experience in maintenance and administration of Operating Systems
    5 | Associate level certification in System Administration | 3 years experience in maintenance and administration of Operating Systems
    6 | Certification in Middleware Solution | 5 years experience in maintenance and administration of Middleware
    7 | Certification in Software Development & Programming | 7 years experience in application design, code review and documentation
    8 | Certification in Software Development & Programming | 7 years experience in application design, code review and documentation
    9 | Certification in Big Data/ Analytics/ CRM solution | 7 years experience in analyzing data, uncovering information, deriving insights and implementing data-driven strategies and data models in Big Data/ Analytics/ CRM technology
    10 | Certification in Big Data/ Analytics/ CRM solution | 3 years experience in analyzing data, uncovering information, deriving insights and implementing data-driven strategies and data models in Big Data/ Analytics/ CRM technology
    11 | Certified Information Security Manager/ Certified Information Systems Security Professional | 7 years experience in implementing security improvements by auditing and assessing the current situation, evaluating trends, anticipating requirements and making relevant configuration/ strategy changes to keep the organization secure
    12 | Checkpoint Certified Security Expert/ Cisco Certified Security Professional | 3 years experience in implementing security improvements by assessing the current situation, evaluating trends, anticipating requirements and making changes to keep the organization secure
    13 | Certification in software testing | Experience in Software Testing
    14 | Certification in software testing | Experience in Software Testing
    15 | Cisco Certified Internetwork Expert (Switching and Routing) | 7 years experience in routing and switching; design and implementation of WAN networks; experience in (a) routing using Border Gateway Protocol (BGP) and (b) drawing up specifications for procurement of network devices including routers, switches, firewalls
    16 | Cisco Certified Internetwork Expert (Switching and Routing) | 5 years experience in routing and switching; design and implementation of WAN networks; experience in implementation of Network Admission Control (NAC)
    17 | Associate level certification in Virtualization Technology | 3 years experience in administration of systems in a virtualized environment
    18 | Nil | 5 years experience in conceptualizing, designing and implementation of high-value organization-level IT projects
    19 | Certification in Data Centre Management is desirable | 5 years experience in managing Data Centre operations
    20 | Cisco Certified Network Professional (Routing and Switching) | 3 years experience in network troubleshooting, network protocols, routers, network administration
    21 | Certification in Cyber Security from a recognized institution | 7 years experience managing a Cyber Security Operation Centre
    22 | Certification in Cyber Security from a recognized institution | 5 years experience managing a Cyber Security Operation Centre

    HOW TO APPLY ONLINE
  • Log on to the official website:
  • Click on "Recruitment to the post"
  • Read the advertisement details very carefully to ensure your eligibility before "Online Application"
  • Click on "Online Application" to fill up the application form online
  • The candidate would be directed to a page where he/she has to click on "Apply Online" (for the first time registration or new registration)/ already registered candidate just need to "Sign In" by using their application number and password sent to their valid e-mail ID/Mobile No. (This is required always for logging in to their account for Form Submission and Admit Card/Call Letter Download)
  • Fill up the application form as per the guidelines and information sought
  • Fill in all required information on the "First Screen" tab and click "SUBMIT" to move to the next screen.
  • Fill in all details in the application and upload your photo and signature.
  • The application fee should be paid online; then submit the form.
  • Take a printout of the online application for future use.

  • Netflix Billing Migration to AWS — Part II

    This is a continuation in the series on Netflix Billing migration to the Cloud. An overview of the migration project was published earlier here:

    This post details the technical journey for the Billing applications and datastores as they were moved from the Data Center to AWS Cloud.

    As you might have read in earlier Netflix Cloud Migration blogs, all of Netflix streaming infrastructure is now run completely in the Cloud. At the rate Netflix was growing, especially with the imminent Netflix Everywhere launch, they knew they had to move Billing to the Cloud sooner rather than later, or their existing legacy systems would not be able to scale.

    There was no doubt that moving highly sensitive applications and critical databases without disrupting the business would be a monumental task, all while continuing to build new business functionality and features.

    A few key responsibilities and challenges for Billing:

  • The Billing team is responsible for the financially critical data in the company. The data they generate on a daily basis for subscription charges, gift cards, credits, chargebacks, etc. is rolled up to finance and is reported into the Netflix accounting. They have stringent SLAs on their daily processing to ensure that the revenue gets booked correctly for each day. They cannot tolerate delays in processing pipelines.
  • Billing has zero tolerance for data loss.
  • For the most part, the existing data was structured with a relational model and necessitated the use of transactions to ensure all-or-nothing behavior. In other words, some operations needed to be ACID. But there were also use cases that needed to be highly available across regions with minimal replication latency.
  • Billing integrates with the DVD business of the company, which has a different architecture than the Streaming component, adding to the integration complexity.
  • The Billing team also provides data to support Netflix Customer Service agents to answer any member billing issues or questions. This necessitates providing Customer Support with a comprehensive view of the data.
  • The state of the Billing systems when they started this project is shown below.

  • 2 Oracle databases in the Data Center — one storing the customer subscription information and the other storing the invoice/payment data.
  • Multiple REST-based applications — Serving calls from the and Customer support applications. These were essentially doing the CRUD operations
  • 3 Batch applications:
    Subscription Renewal — A daily job that looks through the customer base to determine the customers to be billed that day and the amount to be billed, by looking at their subscription plans, discounts, etc.
    Order & Payment Processor — A series of batch jobs that create an invoice to charge the customer to be renewed and process the invoice through the various stages of the invoice lifecycle.
    Revenue Reporting — A daily job that looks through billing data and generates reports for the Netflix Finance team to consume.
  • One Billing Proxy application (in the Cloud) — used to route calls from rest of Netflix applications in the Cloud to the Data Center.
  • Weblogic queues with legacy formats being used for communications between processes.
  • The goal was to move all of this to the Cloud and not have any billing applications or databases in the Data Center. All this without disrupting the business operations. They had a long way to go!

    The Plan

    They came up with a 3-step plan to do it:

  • Act I — Launch new countries directly in the Cloud on the billing side while syncing the data back to the Data Center for legacy batch applications to continue to work.
  • Act II — Model the user-facing data, which could live with eventual consistency and did not need to be ACID, to persist to Cassandra. (Cassandra gave them the ability to perform writes in one region and make them available in the other regions with very low latency; it also provides high availability across regions.)
  • Act III — Finally move the SQL databases to the Cloud.
  • In each step and for each country migration, learn from it, iterate and improve on it to make it better.

    Act I — Redirect new countries to the Cloud and sync data to the Data Center

    Netflix was going to launch in 6 new countries soon. They decided to take it as a challenge to launch these countries partly in the Cloud on the billing side. That meant the user-facing data and applications would be in the Cloud, but they would still need to sync data back to the Data Center so that the batch applications that would continue to run there for the time being could work without disruption. Customer data for these new countries would be served out of the Cloud, while batch processing would still run out of the Data Center. That was the first step.

    They ported all the APIs from the 2 user-facing applications to a Cloud-based application written using Spring Boot and Spring Integration. With Spring Boot, they were able to quickly jump-start building a new application, as it provided the infrastructure and plumbing needed to stand it up out of the box and let them focus on the business logic. With Spring Integration, they were able to write once and reuse a lot of the workflow-style code. Also, with the headers and header-based routing support it provided, they were able to implement a pub-sub model within the application: put a message in a channel and have all consumers consume it, with independent tuning for each consumer. They were now able to handle the API calls for members in the 6 new countries in any AWS region, with the data stored in Cassandra. This enabled Billing to stay up for these countries even if an entire AWS region went down — the first time they were able to see the power of being on the Cloud!
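The pub-sub arrangement described above can be sketched in a few lines. This is a minimal Python stand-in for a Spring Integration publish-subscribe channel, not the actual Netflix code: every subscriber gets its own queue of every published message, so each consumer drains at its own pace.

```python
from collections import deque

class PubSubChannel:
    """Fan-out channel: one publish lands in every subscriber's private
    queue, so a slow consumer never blocks a fast one (toy model)."""

    def __init__(self):
        self.queues = {}

    def subscribe(self, consumer):
        self.queues[consumer] = deque()

    def publish(self, message):
        # One publish fans out to every subscriber's private queue.
        for queue in self.queues.values():
            queue.append(message)

    def poll(self, consumer):
        queue = self.queues[consumer]
        return queue.popleft() if queue else None

channel = PubSubChannel()
channel.subscribe("billing-writer")
channel.subscribe("audit-log")
channel.publish({"event": "charge", "amount": 7.99})
```

Each consumer then polls its own queue independently, which mirrors the "independent tuning for each consumer" the text mentions.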

    They deployed the application on EC2 instances in AWS in multiple regions. They added a redirection layer in the existing Cloud proxy application to switch billing calls for users in the new countries to the new billing APIs in the Cloud, while billing calls for users in the existing countries continued to go to the old billing APIs in the Data Center. They opened direct connectivity from one of the AWS regions to the existing Oracle databases in the Data Center and wrote an application to sync the data from Cassandra in the 3 regions via SQS back to this region. They used SQS queues and Dead Letter Queues (DLQs) to move the data between regions and handle processing failures.
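The queue-plus-DLQ pattern mentioned above is simple to model. The sketch below is a toy Python version of the idea, not the AWS SQS API: a message that keeps failing is parked in a dead-letter queue after a maximum number of receives instead of being retried forever.

```python
class Message:
    def __init__(self, body):
        self.body = body
        self.receive_count = 0

class QueueWithDLQ:
    """Toy model of the SQS + Dead Letter Queue pattern (illustrative
    names; not a real AWS client)."""

    def __init__(self, max_receives=3):
        self.main = []
        self.dlq = []
        self.max_receives = max_receives

    def send(self, body):
        self.main.append(Message(body))

    def process(self, handler):
        # One pass over the queue; failures are retried later or dead-lettered.
        remaining = []
        for msg in self.main:
            msg.receive_count += 1
            try:
                handler(msg.body)
            except Exception:
                if msg.receive_count >= self.max_receives:
                    self.dlq.append(msg)   # park for manual inspection
                else:
                    remaining.append(msg)  # retry on a later pass
        self.main = remaining

q = QueueWithDLQ(max_receives=2)
q.send("good-record")
q.send("bad-record")

def sync_handler(body):
    if body == "bad-record":
        raise ValueError("sync to Data Center failed")

q.process(sync_handler)  # bad-record kept for one retry
q.process(sync_handler)  # second failure moves it to the DLQ
print([m.body for m in q.dlq])  # ['bad-record']
```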

    New country launches usually mean a bump in the member base. They knew they had to move the Subscription Renewal application from the Data Center to the Cloud so that the load would not fall on the Data Center instance. So for these 6 new countries in the Cloud, they wrote a crawler that went through all the customers in Cassandra daily and came up with the members who were to be charged that day. This all-row iterator approach would work for now for these countries, but they knew it would not hold up once they migrated the other countries, and especially the US data (which had the majority of their members at that time), to the Cloud. But they went ahead with it for now to test the waters. This would be the only batch application run from the Cloud in this stage.
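The all-row iterator approach described above amounts to a daily full scan. A minimal Python sketch (field names are illustrative, not the real schema) shows both the idea and why it is O(N) over the entire member base:

```python
from datetime import date

def customers_to_bill(customers, today):
    """Full-scan renewal crawler: walk every customer row and keep the
    ones whose next billing date is today. This touches the whole member
    base every day, which is why it cannot scale to the US data."""
    return [c["id"] for c in customers if c["next_billing_date"] == today]

members = [
    {"id": 1, "next_billing_date": date(2016, 5, 1)},
    {"id": 2, "next_billing_date": date(2016, 5, 2)},
    {"id": 3, "next_billing_date": date(2016, 5, 1)},
]
print(customers_to_bill(members, date(2016, 5, 1)))  # [1, 3]
```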

    They had chosen Cassandra as the data store for its ability to accept writes from any region and its fast cross-region replication of those writes. They defined a data model that used the customerId as the row key and created a set of composite Cassandra columns to represent the relational aspects of the data. The picture below depicts the relationship between these entities and how they are represented in a single column family in Cassandra. Designing them to be part of a single column family helped achieve transactional support for these related entities.

    They designed the application logic to read once at the beginning of any operation, update objects in memory, and persist the result to a single column family at the end of the operation. Reading from Cassandra or writing to it in the middle of an operation was deemed an anti-pattern. They wrote their own custom ORM using Astyanax (a Netflix-grown, open-sourced Cassandra client) to read and write the domain objects from and to Cassandra.
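The read-once / mutate-in-memory / persist-once discipline described above can be sketched with one dict standing in for a customer's single column family. This is a hypothetical Python illustration, not the Astyanax-based ORM:

```python
class BillingOrm:
    """Sketch of the read-once / persist-once pattern; one dict per
    customer stands in for a Cassandra row in a single column family."""

    def __init__(self, store):
        self.store = store  # customer_id -> dict of columns

    def load(self, customer_id):
        # Single read at the start of the operation.
        return dict(self.store.get(customer_id, {}))

    def save(self, customer_id, state):
        # Single write at the end; related entities land together in one
        # row, which is what made the write effectively transactional.
        self.store[customer_id] = state

store = {"c1": {"plan": "2-screen", "balance": 0}}
orm = BillingOrm(store)

state = orm.load("c1")        # read once
state["balance"] += 7.99      # all mutations happen in memory
state["last_invoice"] = "inv-001"
orm.save("c1", state)         # persist once

print(store["c1"]["balance"])  # 7.99
```

Mid-operation reads or writes would reintroduce partial states, which is exactly the anti-pattern the text calls out.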

    They launched in the new countries in the Cloud with this approach and, after a couple of initial minor issues and bug fixes, stabilized on it. So far so good!

    The Billing system architecture at the end of Act I was as shown below:

    Act II — Move all applications and migrate existing countries to the cloud

    With Act I done successfully, they started focusing on moving the rest of the apps to the Cloud without moving the databases. Most of the business logic resides in the batch applications, which had matured over years and that meant digging into the code for every condition and spending time to rewrite it. They could not simply forklift these to the Cloud as is. They used this opportunity to remove dead code where they could, break out functional parts into their own smaller applications and restructure existing code to scale. These legacy applications were coded to read from config files on disk on startup and use other static resources like reading messages from Weblogic queues — all anti-patterns in the Cloud due to the ephemeral nature of the instances. So they had to re-implement those modules to make the applications Cloud-ready. They had to change some APIs to follow an async pattern to allow moving the messages through the queues to the region where they had now opened a secure connection to the Data Center.

    The Cloud Database Engineering (CDE) team set up a multi-node Cassandra cluster for their data needs. They knew that the all-row Cassandra iterator Renewal solution implemented for renewing customers in the earlier 6 countries would not scale once the entire Netflix member billing data moved to Cassandra. So they designed a system that uses Aegisthus to pull the data from Cassandra SSTables and convert it to JSON-formatted rows staged out to S3 buckets. They then wrote Pig scripts to run mapreduce on the massive dataset every day to fetch the list of customers to renew and charge for that day. They also wrote Sqoop jobs to pull data from Cassandra and Oracle and write it to Hive in a queryable format, which enabled them to join these two datasets in Hive for faster troubleshooting.
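The batch pipeline above (SSTables to JSON rows to a daily filter) can be illustrated with a small Python sketch. This stands in for the Pig job over Aegisthus output; the field names are assumptions for illustration:

```python
import json

def renewal_candidates(staged_rows, billing_day):
    """Batch-style filter over JSON rows staged to S3: emit the
    customers due for renewal on billing_day."""
    due = []
    for line in staged_rows:
        row = json.loads(line)
        if row["next_billing_date"] == billing_day:
            due.append(row["customer_id"])
    return due

# Staged rows as they might look after the SSTable-to-JSON conversion:
staged = [
    '{"customer_id": "a", "next_billing_date": "2016-05-01"}',
    '{"customer_id": "b", "next_billing_date": "2016-05-02"}',
    '{"customer_id": "c", "next_billing_date": "2016-05-01"}',
]
print(renewal_candidates(staged, "2016-05-01"))  # ['a', 'c']
```

The real job shards this filter across a mapreduce cluster; the per-row logic is the same.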

    To enable the DVD servers to talk to the Cloud, they set up load balancer endpoints (with SSL client certification) for DVD to route calls through the Cloud proxy, which for now would pipe the calls back to the Data Center until the US was migrated. Once the US data migration was done, they would sever the Cloud-to-Data-Center communication link.

    To validate this huge data migration, they wrote a comparator tool to compare and validate the data that was migrated to the Cloud, with the existing data in the Data Center. They ran the comparator in an iterative format, where they were able to identify any bugs in the migration, fix them, clear out the data and re-run. As the runs became clearer and devoid of issues, it increased their confidence in the data migration. They were excited to start with the migration of the countries. They chose a country with a small Netflix member base as the first country and migrated it to the Cloud with the following steps:

  • Disable the non-GET apis for the country under migration. (This would not impact members, but delay any updates to subscriptions in billing)
  • Use Sqoop jobs to get the data from Oracle to S3 and Hive.
  • Transform it to the Cassandra format using Pig.
  • Insert the records for all members for that country into Cassandra.
  • Enable the non-GET apis to now serve data from the Cloud for the country that was migrated.
  • After validating that everything looked good, they moved to the next country. They then ramped up to migrate sets of similar countries together. The last country they migrated was the US, as it held most of their member base and also had the DVD subscriptions. With that, all of the customer-facing data for Netflix members was now being served through the Cloud. This was a big milestone!
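The per-country steps listed above can be sketched as a small runbook function. Everything here (the fakes, the field names, the "cloud" flag) is a hypothetical stand-in to show the ordering, not a real Netflix service:

```python
def transform(row):
    # Step 3: reshape a relational row into a Cassandra-style layout.
    return {"key": row["customer_id"], "columns": row}

def migrate_country(country, oracle, cassandra, gateway, validate):
    """Per-country runbook following the steps above (all collaborators
    are hypothetical interfaces)."""
    gateway.disable_writes(country)           # 1. disable non-GET APIs
    rows = oracle.export(country)             # 2. Sqoop-style export
    records = [transform(r) for r in rows]    # 3. Pig-style transform
    cassandra.bulk_insert(country, records)   # 4. insert into Cassandra
    if not validate(country):                 #    comparator check
        raise RuntimeError("validation failed for " + country)
    gateway.enable_writes(country, "cloud")   # 5. serve from the Cloud

# Tiny fakes to exercise the flow:
class FakeOracle:
    def __init__(self, data): self.data = data
    def export(self, country): return self.data[country]

class FakeCassandra:
    def __init__(self): self.rows = {}
    def bulk_insert(self, country, records): self.rows[country] = records

class FakeGateway:
    def __init__(self): self.writes = {}
    def disable_writes(self, country): self.writes[country] = "disabled"
    def enable_writes(self, country, target): self.writes[country] = target

oracle = FakeOracle({"cl": [{"customer_id": "c1", "plan": "basic"}]})
cassandra, gateway = FakeCassandra(), FakeGateway()
migrate_country("cl", oracle, cassandra, gateway, validate=lambda c: True)
print(gateway.writes["cl"])  # cloud
```

Note that writes stay disabled until the comparator-style validation passes, mirroring the "validate, then enable" ordering in the list.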

    After Act II, they were looking like this:

    Act III — Good bye Data Center!

    Now the only (and most important) thing remaining in the Data Center was the Oracle database. The dataset that remained in Oracle was highly relational and they did not feel it to be a good idea to model it to a NoSQL-esque paradigm. It was not possible to structure this data as a single column family as they had done with the customer-facing subscription data. So they evaluated Oracle and Aurora RDS as possible options. Licensing costs for Oracle as a Cloud database and Aurora still being in Beta didn’t help make the case for either of them.

    While the Billing team was busy with the first two acts, the Cloud Database Engineering team was working on creating the infrastructure to migrate billing data to MySQL instances on EC2. By the time Act III started, the database infrastructure pieces were ready, thanks to their help. They had to convert the batch application code base to be MySQL-compliant, since some of the applications used plain JDBC without any ORM. They also got rid of a lot of the legacy PL/SQL code, rewrote that logic in the application, and stripped out dead code where possible.

    The database architecture now consists of a MySQL master database deployed on EC2 instances in one of the AWS regions. A Disaster Recovery DB is replicated from the master and will be promoted to master if the master goes down. Slaves in the other AWS regions provide read-only access to applications.
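The master / DR / regional-replica topology described above implies a simple read/write split. The sketch below is a toy Python model under stated assumptions (instantaneous replication, a trivial promotion), not real MySQL failover tooling:

```python
class BillingDb:
    """Toy model of the topology above: writes go only to the master and
    replicate outward; reads are served from the caller's local replica."""

    def __init__(self, master_region, replica_regions):
        self.master = master_region
        self.replicas = {r: [] for r in [master_region] + replica_regions}

    def write(self, record):
        # All writes hit the master; replication here is instantaneous.
        for log in self.replicas.values():
            log.append(record)

    def read(self, caller_region):
        # Prefer the caller's local read-only replica.
        region = caller_region if caller_region in self.replicas else self.master
        return self.replicas[region]

    def promote_dr(self, dr_region):
        # Disaster recovery: the DR replica becomes the new master.
        self.master = dr_region

db = BillingDb("us-east-1", ["us-west-2", "eu-west-1"])
db.write({"invoice": 1})
print(db.read("eu-west-1"))  # served locally: [{'invoice': 1}]
```

In the real setup, replication lag and a proper failover procedure replace the instantaneous write fan-out shown here.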

    The Billing systems, now completely in the Cloud, look like this:

    Needless to say, they learned a lot from this huge project. They wrote a few tools along the way to help debug and troubleshoot and to improve developer productivity. They got rid of old and dead code, cleaned up some of the functionality, and improved it wherever possible. They received support from many other engineering teams within Netflix: engineers from Cloud Database Engineering, Subscriber and Account engineering, Payments engineering, and Messaging engineering worked with them on this initiative for anywhere between 2 weeks and a couple of months. The great thing about the Netflix culture is that everyone has one goal in mind — to deliver a great experience for members all over the world. If that means helping the Billing solution move to the Cloud, then everyone is ready to do that, irrespective of team boundaries!

    The road ahead…

    With Billing in the Cloud, Netflix streaming infrastructure now completely runs in the Cloud. They can scale any Netflix service on demand, do predictive scaling based on usage patterns, do single-click deployments using Spinnaker and have consistent deployment architectures between various Netflix applications. Billing infrastructure can now make use of all the Netflix platform libraries and frameworks for monitoring and tooling support in the Cloud. Today they support billing for over 81 million Netflix members in 190+ countries. They generate and churn through terabytes of data everyday to accomplish billing events. Their road ahead includes rearchitecting membership workflows for a global scale and business challenges. As part of their new architecture, they would be redefining their services to scale natively in the Cloud. With the global launch, they have an opportunity to learn and redefine Billing and Payment methods in newer markets and integrate with many global partners and local payment processors in the regions. They are looking forward to architect more functionality and scale out further.

    If you would like to design and implement large-scale distributed systems for critical data and build automation and tooling for testing them, the team has a couple of positions open and would love to talk to you! Check out the positions here:

    — by Subir Parulekar, Rahul Pilani

    See Also:

    Performance Certification of Couchbase Autonomous Operator on Kubernetes

    At Couchbase, they take performance very seriously, and with the launch of their new product, Couchbase Autonomous Operator 1.0, they wanted to make sure it’s Enterprise-grade and production ready for customers.

    In this post, they will discuss the detailed performance results from running YCSB Performance Benchmark tests on Couchbase Server 5.5, using the Autonomous Operator to deploy on the Kubernetes platform. One of the big concerns for enterprises planning to run a database on Kubernetes is performance.

    This document gives a quick comparison of two workloads, namely YCSB A & E with Couchbase Server 5.5 on Kubernetes vs. bare metal.

    YCSB Workload A: This workload has a mix of 50/50 reads and writes. An application example is a session store recording recent actions.

    Workload E: Short ranges: In this workload, short ranges of records are queried, instead of individual records. Application example: threaded conversations, where each scan is for the posts in a given thread (assumed to be clustered by thread id).

    In general, they observed no significant performance degradation when running a Couchbase cluster on Kubernetes: Workload A had on-par performance compared to bare metal, and Workload E had less than 10% degradation.


    For the setup, Couchbase was installed using the Operator deployment as stated below. For more details on the setup, please refer here.


    Operator deployment: deployment.yaml (See Appendix)

    Couchbase deployment: couchbase-cluster-simple-selector.yaml (See Appendix)

    Client / workload generator deployment: pillowfight-ycsb.yaml (See Appendix) (the official pillowfight Docker image from Docker Hub, with Java and YCSB installed manually on top of it)


    7 servers

    24 CPU x 64GB RAM per server

    Couchbase Setup

    4 servers: 2 data nodes, 2 index+query nodes

    40GB RAM quota for data service

    40GB RAM quota for index services

    1 data/bucket replica

    1 primary index replica


    YCSB WorkloadA and WorkloadE

    10M docs

    Workflow after a new empty k8s cluster is initialized on the 7 servers:

    # assign labels to the nodes so all services/pods will be assigned to the right servers:
    kubectl label nodes arke06-sa09 type=power
    kubectl label nodes arke07-sa10 type=client
    kubectl label nodes ark08-sa11 type=client
    kubectl label nodes arke01-sa04 type=kv
    kubectl label nodes arke00-sa03 type=kv
    kubectl label nodes arke02-sa05 type=kv
    kubectl label nodes arke03-sa06 type=kv

    # deploy Operator:
    kubectl create -f deployment.yaml

    # deploy Couchbase:
    kubectl create -f couchbase-cluster-simple-selector.yaml

    # deploy Client(s):
    kubectl create -f pillowfight-ycsb.yaml

    I ran my tests directly from the client node by logging into the docker image of the client pod:

    docker exec -it --user root <pillowfight-ycsb container id> bash

    And installed the YCSB environment there manually:

    apt-get upgrade
    apt-get update
    apt-get install -y software-properties-common
    apt-get install python
    sudo apt-add-repository ppa:webupd8team/java
    sudo apt-get update
    sudo apt-get install oracle-java8-installer
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle
    cd /opt
    wget
    sudo tar -xvzf apache-maven-3.5.4-bin.tar.gz
    export M2_HOME="/opt/apache-maven-3.5.4"
    export PATH=$PATH:/opt/apache-maven-3.5.4/bin
    sudo update-alternatives --install "/usr/bin/mvn" "mvn" "/opt/apache-maven-3.5.4/bin/mvn" 0
    sudo update-alternatives --set mvn /opt/apache-maven-3.5.4/bin/mvn
    git clone

    Running the workloads:

    Examples of YCSB commands used in this exercise:

    Workload A

    Load:
    ./bin/ycsb load couchbase2 -P workloads/workloade -p couchbase.password=password -p -p couchbase.bucket=default -p couchbase.upsert=true -p couchbase.epoll=true -p couchbase.boost=48 -p couchbase.persistTo=0 -p couchbase.replicateTo=0 -p couchbase.sslMode=none -p writeallfields=true -p recordcount=10000000 -threads 50 -p maxexecutiontime=3600 -p operationcount=1000000000

    Run:
    ./bin/ycsb run couchbase2 -P workloads/workloada -p couchbase.password=password -p -p couchbase.bucket=default -p couchbase.upsert=true -p couchbase.epoll=true -p couchbase.boost=48 -p couchbase.persistTo=0 -p couchbase.replicateTo=0 -p couchbase.sslMode=none -p writeallfields=true -p recordcount=10000000 -threads 50 -p operationcount=1000000000 -p maxexecutiontime=600 -p exportfile=ycsb_workloadA_22vCPU.log

    Test results:

    Workload A results (50/50 get/upsert)

    Env 1: 22 vCPU, 48 GB RAM
    Bare metal (cpu cores and RAM available are set at the OS level): throughput 194,158 req/sec; CPU usage avg 86% of all 22 cores
    Kubernetes (pod limited to cpu: 22000m = ~22 vCPU, mem: 48GB; all pods on dedicated nodes): throughput 192,190 req/sec; CPU usage avg 94% of the cpu quota
    Delta: -1%

    Env 2: 16 vCPU, 48 GB RAM
    Bare metal (cpu cores and RAM available are set at the OS level): throughput 141,909 req/sec; CPU usage avg 89% of all 16 cores
    Kubernetes (pod limited to cpu: 16000m = ~16 vCPU, mem: 48GB; all pods on dedicated nodes): throughput 145,430 req/sec; CPU usage avg 100% of the cpu quota
    Delta: +2.5%

    Workload E

    Load:
    ./bin/ycsb load couchbase2 -P workloads/workloade -p couchbase.password=password -p -p couchbase.bucket=default -p couchbase.upsert=true -p couchbase.epoll=true -p couchbase.boost=48 -p couchbase.persistTo=0 -p couchbase.replicateTo=0 -p couchbase.sslMode=none -p writeallfields=true -p recordcount=10000000 -threads 50 -p maxexecutiontime=3600 -p operationcount=1000000000

    Run:
    ./bin/ycsb run couchbase2 -P workloads/workloade -p couchbase.password=password -p -p couchbase.bucket=default -p couchbase.upsert=true -p couchbase.epoll=true -p couchbase.boost=48 -p couchbase.persistTo=0 -p couchbase.replicateTo=0 -p couchbase.sslMode=none -p writeallfields=true -p recordcount=10000000 -threads 50 -p operationcount=1000000000 -p maxexecutiontime=600 -p exportfile=ycsb_workloadE_22vCPU.log

    Workload E results (95/5 scan/insert)

    Env 1: 22 vCPU, 48 GB RAM
    Bare metal (cpu cores and RAM available are set at the OS level): throughput 15,823 req/sec; CPU usage avg 85% of all 22 cores
    Kubernetes (pod limited to cpu: 22000m = ~22 vCPU, mem: 48GB; all pods on dedicated nodes): throughput 14,281 req/sec; CPU usage avg 87% of the cpu quota
    Delta: -9.7%

    Env 2: 16 vCPU, 48 GB RAM
    Bare metal (cpu cores and RAM available are set at the OS level): throughput 13,014 req/sec; CPU usage avg 91% of all 16 cores
    Kubernetes (pod limited to cpu: 16000m = ~16 vCPU, mem: 48GB; all pods on dedicated nodes): throughput 12,579 req/sec; CPU usage avg 100% of the cpu quota
    Delta: -3.3%

    Conclusions

    Couchbase Server 5.5 is production-ready to be deployed on Kubernetes with the Autonomous Operator. Performance of Couchbase Server 5.5 on Kubernetes is comparable to running on bare metal, so there is little performance penalty in running Couchbase Server on the Kubernetes platform. Looking at the results, Workload A performed on par with bare metal, and Workload E showed less than 10% degradation.
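    The Delta column in the result tables is simply the relative throughput change of the Kubernetes run versus the bare-metal run. A quick check of the reported figures (the delta helper is ours, not from the original write-up):

    ```python
    def delta(bare_metal, kubernetes):
        """Relative throughput change of Kubernetes vs. bare metal, in percent."""
        return (kubernetes - bare_metal) / bare_metal * 100

    # Throughput figures from the result tables above (req/sec):
    print(round(delta(194_158, 192_190), 1))  # Workload A, Env 1 -> -1.0
    print(round(delta(141_909, 145_430), 1))  # Workload A, Env 2 -> 2.5
    print(round(delta(15_823, 14_281), 1))    # Workload E, Env 1 -> -9.7
    print(round(delta(13_014, 12_579), 1))    # Workload E, Env 2 -> -3.3
    ```

    These reproduce the reported -1%, +2.5%, -9.7%, and -3.3% deltas to rounding.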

  • YCSB Workloads
  • Couchbase Kubernetes page
  • Download Couchbase Autonomous Operator
  • Introducing Couchbase Operator

    Appendix

    My deployment.yaml file:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: couchbase-operator
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: couchbase-operator
        spec:
          nodeSelector:
            type: power
          containers:
            - name: couchbase-operator
              image: couchbase/couchbase-operator-internal:1.0.0-292
              command:
                - couchbase-operator
              # Remove the arguments section if you are installing the CRD manually
              args:
                - -create-crd
                - -enable-upgrades=false
              env:
                - name: MY_POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: MY_POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
              ports:
                - name: readiness-port
                  containerPort: 8080
              readinessProbe:
                httpGet:
                  path: /readyz
                  port: readiness-port
                initialDelaySeconds: 3
                periodSeconds: 3
                failureThreshold: 19

    My couchbase-cluster-simple-selector.yaml file:

    apiVersion: couchbase.com/v1
    kind: CouchbaseCluster
    metadata:
      name: cb-example
    spec:
      baseImage: couchbase/server
      version: enterprise-5.5.0
      authSecret: cb-example-auth
      exposeAdminConsole: true
      antiAffinity: true
      exposedFeatures:
        - xdcr
      cluster:
        dataServiceMemoryQuota: 40000
        indexServiceMemoryQuota: 40000
        searchServiceMemoryQuota: 1000
        eventingServiceMemoryQuota: 1024
        analyticsServiceMemoryQuota: 1024
        indexStorageSetting: memory_optimized
        autoFailoverTimeout: 120
        autoFailoverMaxCount: 3
        autoFailoverOnDataDiskIssues: true
        autoFailoverOnDataDiskIssuesTimePeriod: 120
        autoFailoverServerGroup: false
      buckets:
        - name: default
          type: couchbase
          memoryQuota: 20000
          replicas: 1
          ioPriority: high
          evictionPolicy: fullEviction
          conflictResolution: seqno
          enableFlush: true
          enableIndexReplica: false
      servers:
        - size: 2
          name: data
          services:
            - data
          pod:
            nodeSelector:
              type: kv
            resources:
              limits:
                cpu: 22000m
                memory: 48Gi
              requests:
                cpu: 22000m
                memory: 48Gi
        - size: 2
          name: qi
          services:
            - index
            - query
          pod:
            nodeSelector:
              type: kv
            resources:
              limits:
                cpu: 22000m
                memory: 48Gi
              requests:
                cpu: 22000m
                memory: 48Gi
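    One thing worth double-checking in a spec like this is that the per-node service memory quotas (given in MB) actually fit inside the pod memory limit (48Gi). A back-of-the-envelope check, with values copied from the cluster spec above (treating MB as MiB for simplicity):

    ```python
    # Sanity-check the memory settings from the CouchbaseCluster spec above.
    # Couchbase service quotas are per node; pod limits come from resources.limits.
    MIB_PER_GIB = 1024

    pod_limit_mib = 48 * MIB_PER_GIB   # resources.limits.memory: 48Gi -> 49152 MiB
    data_quota_mib = 40000             # dataServiceMemoryQuota
    index_quota_mib = 40000            # indexServiceMemoryQuota
    bucket_quota_mib = 20000           # buckets[0].memoryQuota

    # The data-service quota must fit under the data pod's memory limit.
    assert data_quota_mib < pod_limit_mib       # 40000 < 49152

    # The bucket quota must fit inside the data-service quota.
    assert bucket_quota_mib <= data_quota_mib   # 20000 <= 40000

    # The index-service quota must fit under the index/query pod's limit.
    assert index_quota_mib < pod_limit_mib      # 40000 < 49152
    ```

    The quotas leave roughly 9 GiB of headroom per pod for the remaining services and the OS page cache.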

    My pillowfight-ycsb.yaml file:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pillowfight
    spec:
      template:
        metadata:
          name: pillowfight
        spec:
          containers:
            - name: pillowfight
              image: sequoiatools/pillowfight:v5.0.1
              command: ["sh", "-c", "tail -f /dev/null"]
          restartPolicy: Never
          nodeSelector:
            type: client


    Tags: kubernetes, couchbase 5.5, database, performance, autonomous operator


