Friday, December 16, 2022

Certified Kubernetes DevOps Engineer with AWS and Kafka available. Visa: H1B



Certifications:
Certified Kubernetes Administrator (CKA)
Certificate ID number: LF-lgk6czkgls

Looking for remote roles for now.

PROFESSIONAL SUMMARY:
·      Over 7 years of experience in the IT industry in Build/Release Management and Software Configuration Management (SCM), working on large-scale applications: building, deploying, automating, and configuring instances in cloud environments and data centers, CI/CD pipelines, build and release engineering, AWS/GCP, and Linux/Windows administration.
·      Hands-on experience in DevOps automation development for Linux and Windows environments. Experience in UNIX environments and expertise in several flavors of Linux, including Red Hat, CentOS, and Ubuntu.
·      Excellent hands-on experience with configuration management tools such as Chef, Puppet, and Ansible.
·      Experience in AWS cloud administration, covering services such as EC2, S3, EBS, RDS, IAM, CloudWatch, and CloudFormation.
·      DevOps experience with Puppet, Ansible, Chef, AWS OpsWorks, and OpenStack.
·      Experience using build automation tools such as Maven, Bamboo, Gradle, and Ant to produce deployable artifacts such as WAR and EAR files from source code.
·      Experience applying the principles of Software Configuration Management (SCM) in Agile, Scrum, and Kanban environments.
·      Expertise in building Kafka clusters and in cluster maintenance, troubleshooting, monitoring, commissioning and decommissioning data nodes, and managing and reviewing data backups and log files.
·      Designed, built, set up, and configured Kubernetes clusters for high performance.
·      Performed and managed day-to-day operations on Kubernetes clusters to avoid downtime.
·      Set up security on Kubernetes clusters to prevent data breaches.
·      Built and maintained Docker container clusters managed by Kubernetes on GCP (Google Cloud Platform), using Linux, Bash, Git, and Docker.
·      Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy (a brief example follows this list).
·      Experience with DevOps practices using AWS, Elastic Beanstalk, and Docker with Kubernetes.
·      Packaged and deployed Kubernetes into on-premises and cloud environments.
·      Designed Kubernetes environments for high availability and high performance.
·      Expertise in writing and understanding Chef cookbooks and recipes with components such as attributes, files, resources, and templates to automate middleware installations, domain creation, and deployment activities.
·      Experience creating Puppet manifests and modules to automate system operations.
·      Administered and implemented CI tools such as Hudson/Jenkins for automated builds.
·      Exposed to all aspects of the software development life cycle (SDLC): analysis, planning, development, testing, implementation, and post-production analysis.
·      Extensively worked with version control systems: SVN (Subversion), Git, and TFS.
·      Mastery of build technologies such as Jenkins and Maven; integration and automation of source control tools such as Subversion and Git.
·      Expertise in converting Ant build.xml files into Maven pom.xml files to build applications with Maven.
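A brief, hypothetical sketch of the kind of CI/CD deploy step described above, assuming a registry, image name, namespace, and Deployment name that are placeholders rather than details from any specific engagement:

#!/usr/bin/env bash
set -euo pipefail
IMAGE="registry.example.com/demo/web-app:${BUILD_NUMBER:-dev}"   # placeholder registry and tag

docker build -t "$IMAGE" .                    # build the application image
docker push "$IMAGE"                          # publish it so the cluster can pull it

kubectl -n demo apply -f k8s/                 # apply the manifests kept in the repo
kubectl -n demo set image deployment/web-app web-app="$IMAGE"
kubectl -n demo rollout status deployment/web-app --timeout=120s   # wait for the rollout to finish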
 
 
TECHNICAL SKILLS:
Cloud Platform: AWS (EC2, S3, EBS, RDS, ELB, IAM, AMI, Auto Scaling), Microsoft Azure, and OpenStack.
Configuration Management: Chef, Puppet, Vagrant, Maven, Ansible, Docker, Gradle, Splunk, OpsWorks.
Database: Oracle, DB2, MySQL, MongoDB, SQL Server, MS SQL.
Build Tools: Ant, Maven, Makefiles, Hudson, Jenkins, Bamboo, CodeDeploy.
Version Control Tools: Subversion (SVN), ClearCase, Git, GitHub, Perforce, CodeCommit.
Web Servers: Apache, Tomcat, WebSphere, Nginx, JBoss.
Virtualization: VMware and VirtualBox.
Languages/Scripts: C, .NET, HTML, Java, Shell, Bash, PHP, Python, Ruby, and Perl.
SDLC: Agile, Scrum, and Waterfall.
Web Technologies: HTML, CSS, JavaScript, jQuery, Bootstrap, XML, JSON, XSD, XSL, XPath.
Operating Systems: Linux (Red Hat 4/5/6), UNIX, Ubuntu, Fedora, CentOS, Windows NT/2000/2003/7/8/10, and iOS.
Networking: TCP/IP, NIS, NFS, DNS, DHCP, Cisco routers/switches, WAN, SMTP, LAN, FTP/TFTP.
 
PROFESSIONAL EXPERIENCE:
 
Cummins                                                                                                                   July 2022 – Present
DevOps Engineer
Roles and Responsibilities:
  • Build and maintain infrastructure and automate the deployment and upgrade processes for workloads (a brief example follows this list).
  • Review and contribute to ongoing improvements in the implementation of standards and procedures for DevOps best practices.
  • Drive troubleshooting efforts around incidents and outages related to DevOps platforms.
  • Engineer the DevOps CI/CD tooling and platform roadmap for the team.
  • Work closely with software development and IT operations groups to ensure that teams are successful and software is deployed efficiently.
  • Design automated pipelines to continuously deliver value to clients.
  • Work to provide guidance and recommendations to business stakeholders.
  • Recently worked on remediating security vulnerabilities, which helped cut security risks by 40%.
  • Supported clients' implementation of AWS products through design and architecture guidance and adherence to best practices.
  • Provided clients with guidance on how to design their cloud applications for optimal scaling and performance.
  • Resolved complex AWS infrastructure and DevOps challenges.
  • Developed reporting and monitoring tools for teams.
  • Raised code quality standards through manual and automated processes.
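A hypothetical sketch of the workload upgrade automation referenced in this role, assuming an EKS cluster name, region, namespace, Deployment, and image name that are placeholders:

#!/usr/bin/env bash
set -euo pipefail
CLUSTER=demo-eks; REGION=us-east-1; NS=apps; APP=web-app        # placeholder values
IMAGE="registry.example.com/${APP}:${1:?tag required}"          # new tag passed as the first argument

aws eks update-kubeconfig --region "$REGION" --name "$CLUSTER"  # point kubectl at the cluster

kubectl -n "$NS" set image "deployment/$APP" "$APP=$IMAGE"      # roll the new image out
if ! kubectl -n "$NS" rollout status "deployment/$APP" --timeout=180s; then
    kubectl -n "$NS" rollout undo "deployment/$APP"             # roll back if the rollout never becomes ready
    exit 1
fi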
 
Environment: AWS, EKS, Jenkins, Ansible, JFrog, Kubernetes.
 
 
 
 
Verizon (Working Remotely)                                                             October 2021 – July 2022
Linux Administration/Kubernetes Engineer
Roles and Responsibilities:
·      Hands-on experience with Windows, Linux, and UNIX administration and with orchestration platforms such as EKS.
·      Experience developing VBScript, PowerShell, Windows batch, and C#.NET programs to create monitoring tools and utilities.
·      Experience with microservices architectures, good knowledge of SQL databases, and basic coding skills in Java, JavaScript, PHP, etc.
·      Good experience with Git, Jenkins, Artifactory, etc.
·      Experience creating and managing production-scale Kubernetes clusters; deep understanding of Kubernetes networking and core cluster concepts.
·      Supported Kubernetes-based projects to resolve critical and complex technical issues.
·      Performed application deployments on Kubernetes clusters.
·      Hands-on experience with Kubernetes core objects such as Deployments, ReplicaSets, DaemonSets, StatefulSets, and Jobs.
·      Managed Kubernetes storage (PVs, PVCs, StorageClasses, and provisioners); a brief example follows this list.
·      Managed Kubernetes networking (Services, Endpoints, DNS, and load balancers).
·      Experience setting up monitoring and alerting for Kubernetes clusters.
·      Engaged deeply with stakeholders to understand their architecture and operations, and worked to continuously improve their overall Kubernetes support experience.
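A minimal, hypothetical sketch of the storage management mentioned above: creating a PersistentVolumeClaim against a StorageClass and confirming it binds. The namespace, claim name, and StorageClass (gp2, the usual EBS-backed default on EKS) are assumptions:

cat <<'EOF' | kubectl -n apps apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]      # volume mounted by a single node at a time
  storageClassName: gp2               # assumed EBS-backed StorageClass
  resources:
    requests:
      storage: 10Gi
EOF
kubectl -n apps get pvc app-data       # should show STATUS=Bound once the PV is provisioned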
 
Environment: AWS, EKS, Jenkins, Ansible, JFrog, Kubernetes.
 
Price Waterhouse Coopers (PWC), Tampa, FL.                        May 2021 – September 2021
DevOps Engineer.
 
·      Hands-on experience with Microsoft Azure and Azure Kubernetes Service (AKS), building multiple AKS clusters and deploying and managing them across the organization (a brief example follows this list).
·      Provided operational support and fully maintained and managed the AKS clusters so that teams could rely on working clusters.
·      Worked closely with development teams to plan, deploy, and fully support multiple applications across our environments.
·      Applied industry-leading practices to ensure smooth application integration across the team's entire architecture, and researched, designed, and prototyped new solutions to facilitate application growth.
·      Worked on CI/CD toolchains centered on tools such as Azure DevOps, Jenkins, and Spinnaker.
·      Worked with operations and logging tools such as Splunk, New Relic, and Datadog to gather application insights.
·      Daily activities included configuring continuous integration within development environments using tools such as Azure DevOps, Jenkins, and Puppet.
·      Documented systems and networks, refined requirements, independently identified solutions, and communicated them to the team.
·      Utilized enterprise tools for orchestration of services and workflows, as well as message queues such as RabbitMQ.
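A hypothetical sketch of standing up an AKS cluster and connecting to it; the resource group, cluster name, region, and node count are chosen purely for illustration:

#!/usr/bin/env bash
set -euo pipefail
az group create --name rg-aks-demo --location eastus               # resource group (placeholder name)
az aks create --resource-group rg-aks-demo --name aks-demo \
  --node-count 3 --enable-managed-identity --generate-ssh-keys     # three-node cluster with managed identity
az aks get-credentials --resource-group rg-aks-demo --name aks-demo   # merge kubeconfig for kubectl
kubectl get nodes                                                   # confirm the nodes are Ready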
 
Environment: Microsoft Azure, Azure DevOps, AKS, Git, New Relic, Datadog, Kubernetes, Vault, Terraform, Puppet, Jenkins, Ansible.
 
Ford Motor Company, Dearborn, MI.                                                     December 2019 – April 2021.
SRE / Kubernetes Admin.
 
Roles and Responsibilities:
 
·      Hands-on experience with Red Hat OpenShift infrastructure design, deployment, and operational support for the organization, covering end-to-end cloud and virtualization architecture, both IaaS and on-prem.
·      Fully supported and maintained a production cluster with more than 1,600 namespaces and all the applications hosted on the platform.
·      Extensive knowledge of Linux containers, Docker, Kubernetes, and deployment of containerized applications.
·      Developed a centralized documentation website from scratch for our customers: a static site based on Hugo that executes CD via a GitHub webhook, triggering the OpenShift build process whenever the repository is updated.
·      Published several documents to this website to help customers better understand the Container-as-a-Service (CaaS) platform.
·      Experience with the Red Hat Quay image registry for storing, building, and deploying container images.
·      Experience with Portworx, a container-based storage provider for persistent storage needs in OpenShift 4.x; it supports dynamic provisioning of PVCs, which can be either ReadWriteOnce (used by a single pod) or ReadWriteMany (shared by multiple pods).
·      Administration experience on the OpenShift platform; maintained and enhanced automation to support provisioning of new projects in OCP and OpenShift upgrades.
·      Performed regular health checks on all environments.
·      Experience with GCP for deploying and scaling web applications and services written in Python.
·      Built Docker images and tagged, pulled, and pushed them to Quay organizations and repositories (a brief example follows this list).
·      Worked on moving several sandbox clusters from the pre-production data center (ECC) to the production data center.
·      Supported infrastructure, security, Platform-as-a-Service, and other updates involving DevOps environments.
·      Experience migrating application workloads from lower-environment clusters (OCP 3.x) to upper environments (OCP 4.x).
·      Participated in all on-call rotations, giving production support to our customers and handling general requests (work orders) and incident tickets as soon as they came into the queue.
·      Extensive knowledge of Sysdig, the monitoring tool used to monitor the platform and any high loads on nodes, with alerts set up for the same.
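A hypothetical sketch of the image build-and-push flow into Quay mentioned above; the organization, repository, tag, and OpenShift project names are placeholders:

#!/usr/bin/env bash
set -euo pipefail
IMAGE=quay.io/example-org/docs-site:1.0       # placeholder Quay organization/repository and tag

docker login quay.io                          # authenticate with a robot account or user credentials
docker build -t "$IMAGE" .                    # build the image from the local Dockerfile
docker push "$IMAGE"                          # push it into the Quay repository

# Point an existing OpenShift Deployment at the new tag and watch the rollout.
oc -n docs set image deployment/docs-site docs-site="$IMAGE"
oc -n docs rollout status deployment/docs-site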
 
Environment: Red Hat OpenShift, Google Cloud Platform (GCP), Kubernetes, Azure, Bash scripting, Vault, Sysdig, Jenkins, Grafana, Rally, Portworx, VMware, Tigera.
 
Hyatt Corporation, Chicago, IL
Senior Kafka/DevOps Platform Engineer                                                  May 2019 – November 2019
 
Roles and responsibilities:
·      Designed the Kafka setup architecture and took responsibility for designing, operationalizing, maintaining, and scaling production and non-production clusters.
·      Built Kafka clusters for our non-production and production environments.
·      Developed strategies to improve scalability, service reliability, capacity, and performance of the Kafka cluster.
·      Implemented security on all Kafka clusters with SSL and the SASL/SCRAM mechanism, and managed user access with ACLs (a brief example follows this list).
·      Performed high-level, day-to-day operational maintenance, support, and upgrades for the Kafka cluster.
·      Developed strategies for Kafka tuning, capacity planning, and replication, and dove deep to troubleshoot any issues that created development roadblocks.
·      Participated in the occasional on-call rotation supporting the entire infrastructure.
·      Rolled up my sleeves to troubleshoot incidents, formulate theories, test hypotheses, and narrow down possibilities to find the root cause.
·      Handled all Kafka environment builds, including design, capacity planning, cluster setup, performance tuning, and ongoing monitoring.
·      Implemented many Spring Boot microservices to process messages into the Kafka cluster.
·      Developed and implemented Kafka producer and consumer applications on the Kafka cluster with the help of ZooKeeper.
·      Hands-on experience standing up and administering the Kafka platform, including backup and mirroring of Kafka cluster brokers, broker sizing, topic sizing, hardware sizing, performance monitoring, broker security, topic security, and consumer/producer access management (ACLs).
·      Expertise in partitioning strategies for Kafka messages and in setting up replication factors in the Kafka cluster.
·      Experience with the SDLC, web application development concepts, application technical releases, complex scripting, version control, environment management, software configuration management, technical release management, and enterprise network concepts.
·      Hands-on experience with CI/CD tools and automation: Docker, Kubernetes, Git, Ansible, Jenkins, Artifactory, Rundeck, and Splunk.
·      Strong development and automation skills; very comfortable reading and writing Python.
·      Very good grasp of monitoring and metrics collection with tools such as New Relic and Datadog, performance tuning, and troubleshooting complicated distributed-systems issues.
·      Proactively monitored and set up alerting for the Kafka cluster and supporting hardware to ensure system health and maximum availability.
·      Tools-first mindset: built tools for the team and others to increase efficiency and to make hard or repetitive tasks easy and quick.
·      Organized; focused on building, improving, resolving, and delivering.
·      Good communication within and across teams, strong teamwork, and a habit of taking ownership.
·      Hands-on experience managing production Kafka.
·      In-depth understanding of the internals of Kafka cluster management, ZooKeeper, partitioning, schema registry, topic replication, and mirroring.
·      Familiarity with both cloud-native and on-premises Kafka architectures; understanding of Kafka security, limiting bandwidth usage, enforcing client quotas, and backup and restoration.
·      Experience with open-source Kafka distributions as well as enterprise Kafka products.
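A hypothetical sketch of the SASL/SCRAM and ACL setup described above, using the standard Kafka CLI tools; the broker address, credentials file, user, topic name, partition count, and password are placeholders (flags shown as in recent Kafka releases; older clusters created SCRAM credentials via --zookeeper instead):

#!/usr/bin/env bash
set -euo pipefail
BOOTSTRAP=broker1.example.com:9093            # placeholder SASL_SSL listener
ADMIN_CONFIG=admin.properties                 # admin client credentials (placeholder file)

# Create SCRAM credentials for a producer principal.
kafka-configs.sh --bootstrap-server "$BOOTSTRAP" --command-config "$ADMIN_CONFIG" \
  --alter --add-config 'SCRAM-SHA-512=[password=change-me]' \
  --entity-type users --entity-name app-producer

# Create a topic with an explicit partition count and replication factor.
kafka-topics.sh --bootstrap-server "$BOOTSTRAP" --command-config "$ADMIN_CONFIG" \
  --create --topic orders --partitions 12 --replication-factor 3

# Grant the principal write access to that topic via ACLs.
kafka-acls.sh --bootstrap-server "$BOOTSTRAP" --command-config "$ADMIN_CONFIG" \
  --add --allow-principal User:app-producer --operation Write --topic orders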
 
 
Environment: Kafka, Splunk, Rundeck, Confluence, Lucidchart, MongoDB, New Relic, Datadog, Confluent, Bitbucket, CI/CD, SVN, CVS, Ant, Maven, AWS EC2, Puppet, Shell, Perl, Git, Jenkins, Tomcat, Nagios, API Gateway, Lambda, DynamoDB, S3, GCP, Chef, Ansible, Docker, Kubernetes, Terraform, Apache Libcloud, ELK, Selenium, and JIRA.
 
Toyota, Plano, TX.                                                                                       March 2018 – April 2019
AWS/DevOps Engineer
·      Implemented secure cloud architecture on AWS to make applications reliable, scalable, and highly available. Built and configured a virtual data center in the Amazon Web Services cloud to support Enterprise Data Warehouse hosting, including a Virtual Private Cloud (VPC), public and private subnets, security groups, route tables, and an Elastic Load Balancer.
·      Implemented Amazon EC2 by setting up instances, Virtual Private Clouds (VPCs), and security groups, and automated launching and stopping/terminating EC2 instances on AWS with Boto3 (a brief example follows this list).
·      Launched and configured multi-tier AWS EC2 instances in predefined VPCs across different subnets and attached ACLs and security groups to maintain security.
·      Developed CloudFormation templates to automate EC2 instances, managed 200+ AWS instances, configured Elastic IPs and elastic storage during the open-enrollment period, and implemented security groups and network ACLs.
·      Implemented a serverless architecture using API Gateway, Lambda, and DynamoDB, and deployed AWS Lambda code from Amazon S3 buckets.
·      Converted existing AWS infrastructure to a serverless architecture (AWS Lambda, Kinesis) deployed via Apache Libcloud, Terraform, and AWS CloudFormation.
·      Experience importing real-time data into Hadoop using Kafka and implementing Oozie jobs; scheduled recurring Hadoop jobs with Apache Oozie.
·      Wrote shell and Python scripts for CI/CD pipeline automation and scheduled activities.
·      Assisted in migrating applications from the customer's on-premises data center to the cloud (AWS).
·      Defined a migration strategy by understanding the application architecture while working with the development team.
·      Used Chef to reduce management complexity by developing cookbooks and recipes for installation, file management, and continuous application deployment on various remote nodes.
·      Defined and implemented CM and release management processes, policies, and procedures.
·      Participated in configuring and monitoring distributed, multi-platform servers using Puppet; used the Puppet server and workstation to manage and configure nodes.
·      Managed AWS EC2 virtual instances using Puppet.
·      Generated and executed software configuration management plans.
·      Actively involved in the architecture of the DevOps platform and cloud solutions.
·      Build automation and build pipeline development using Jenkins and Maven.
·      Developed custom scripts using Perl and shell (bash, ksh) to automate jobs.
·      Analyzed Ant build projects and converted them to Maven build projects.
·      Developed pom.xml files for Maven build scripts.
·      Utilized Puppet and the Puppet Dashboard for configuration management of hosted instances within AWS.
·      Configured RDS instances using CloudFormation and Terraform; used Terraform to map more complex dependencies and identify network issues.
·      Configured a Google Cloud Virtual Private Cloud (VPC) and database subnet group for isolation of resources.
·      Deployed and monitored scalable infrastructure on Google Cloud with configuration management using Docker.
·      Configured, monitored, and automated Google Cloud services, and deployed content to the cloud platform using Google Compute Engine and Google Cloud Storage buckets.
·      Maintained and developed Docker images for a tech stack including Cassandra, Kafka, Apache, and several in-house Java services running on Kubernetes in Google Cloud Platform (GCP).
·      Performed all necessary day-to-day CVS/Subversion support for different projects.
·      Responsible for the design and maintenance of the CVS/Subversion repositories and the access-control strategies.
·      Created branches in CVS and Subversion for parallel development.
·      Used Jenkins to automate most build-related tasks.
·      Virtualization using KVM, Xen, VMware ESX/ESXi, vSphere, Oracle VirtualBox, and virt-manager.
·      Implemented continuous-integration webhooks and workflows around Jenkins to automate the dev/test/deploy workflow for the Puppet codebase.
·      Worked extensively with the Ant and Maven build tools, writing build.xml and pom.xml files respectively.
·      Analyzed builds using SonarQube.
·      Managed and monitored the server and network infrastructure using Nagios.
·      Involved in managing other version control tools such as Git.
·      Architected hybrid AWS and on-premises solutions for technology clusters and patterns.
·      Experience with KVM and AWS.
·      Established shared IT service centers for cloud operations.
·      Wrote shell and Perl scripts to back up the Oracle database.
·      Generated AWS migration roadmaps and drove buy-in across complex organization structures.
·      Collaborated with consulting and managed-services partners to build and execute migration plans.
·      Fed common enterprise requirements back to AWS service development teams.
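The role above automated EC2 start/stop with Boto3; as a shell-level sketch of the same idea using the AWS CLI, where the tag key, tag value, and region are placeholders:

#!/usr/bin/env bash
set -euo pipefail
REGION=us-east-1                                             # placeholder region

# Find running instances tagged Env=dev (e.g. for a nightly shutdown schedule).
ids=$(aws ec2 describe-instances --region "$REGION" \
  --filters "Name=tag:Env,Values=dev" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId' --output text)

# Stop them if any were found.
if [ -n "$ids" ]; then
  aws ec2 stop-instances --region "$REGION" --instance-ids $ids
fi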
 
Environment: CI/CD, SVN, CVS, Ant, Maven, AWS EC2, Puppet, Shell, Kafka, Perl, Git, Jenkins, Tomcat, Nagios, KVM, AWS VPC, ELB, API Gateway, Lambda, DynamoDB, S3, GCP, Chef, Ansible, Docker, Kubernetes, Terraform, Apache Libcloud, ELK, Selenium, and JIRA.
 
GE, Houston, TX                                                                                         Feb 2017 – March 2018
AWS/DevOps Engineer
Roles and responsibilities:
 
·      Assisted in migrating the existing data center to AWS instances.
·      Migrated applications to the AWS cloud.
·      Installed applications on AWS EC2 AMI, Red Hat, and Ubuntu instances.
·      Configured storage on S3 buckets.
·      Experience working with IAM to create new accounts, roles, and groups.
·      Developed Chef recipes to configure, deploy, and maintain software components of the existing infrastructure.
·      Used Chef to manage web applications, config files, databases, commands, users, mount points, and packages.
·      Wrote cookbooks for WebLogic, JDK 1.7, Jenkins, Tomcat, JBoss, and deployment automation.
·      Used Puppet and Chef automation tools for configuration management across different systems.
·      Reduced build and deployment times by designing and implementing a Docker workflow.
·      Set up a system for dynamically adding and removing web services from a server using Docker and Nginx.
·      Configured Docker containers for branching purposes.
·      Managed and configured VMware virtual machines for RHEL and Ubuntu Linux servers.
·      Experience creating alarms and notifications for EC2 instances using CloudWatch (a brief example follows this list).
·      Implemented AWS solutions using EC2, S3, RDS, EBS, Elastic Load Balancer, and Auto Scaling groups.
·      Monitored live traffic, logs, memory utilization, disk utilization, and various other factors important for deployment.
·      Experience with Ruby and shell scripts for automating tasks.
·      Created the release process for artifacts.
·      Used JIRA for ticket tracking, change management, and Agile/Scrum.
·      Experience designing and implementing a continuous integration system using Jenkins by creating Python and Perl scripts.
·      Implemented a master/slave architecture to improve Jenkins performance.
·      Installed, configured, and maintained DNS, FTP, and TCP/IP on Red Hat Linux.
·      Installed, configured, and maintained web servers such as Apache HTTP Server, Apache Tomcat, Nginx, and JBoss.
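A hypothetical sketch of the CloudWatch alarming mentioned above, creating a high-CPU alarm for a single EC2 instance; the instance ID, SNS topic ARN, and thresholds are placeholders:

#!/usr/bin/env bash
set -euo pipefail
aws cloudwatch put-metric-alarm \
  --alarm-name dev-web-high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts   # notify the on-call SNS topic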
 
Environment: Subversion, Ant, Jenkins, Nexus, XML, InstallShield, Shell, Perl, WebLogic servers.
 
Unisys Global Services, Hyderabad, India                                                 March 2015 – Nov 2016                                   
Build/Release Engineer
 
Roles and responsibilities:
 
·      Automated the build process using SVN and Ant.
·      Managed builds and wrote build scripts for Java and WebSphere-based applications.
·      Maintained the source code repository in SVN.
·      Configured Git with Jenkins and scheduled jobs using the Poll SCM option.
·      Implemented Chef recipes for deploying builds on internal data center servers; reused and modified the same recipes to deploy directly to Amazon EC2 instances.
·      Successfully administered SVN, CruiseControl, and AnthillPro.
·      Installed, configured, and administered ClearCase and SVN, and afterwards migrated source, config, and website code to Git in Windows and Linux environments.
·      Automated deployment of builds to different environments using AnthillPro.
·      Deployed and managed many servers via scripts and Chef, utilizing cloud providers as a direct substrate for implementation.
·      Actively participated in the automation effort; wrote security and web-architecture provisioning scripts for Opscode Chef.
·      Performed system administration and operations tasks using Chef and Nagios.
·      Used the Artifactory repository for maintaining Java-based release packages.
·      Automated cloud deployments using Chef, Python (boto and Fabric), and AWS CloudFormation templates.
·      Implemented continuous integration using Hudson, which tracks source code changes.
·      Created and set up an automated nightly build environment for Java projects using Maven.
·      Deployed and managed many servers utilizing both traditional and cloud providers (for example, Amazon EC2) with the Chef configuration system, from first light through initial technology development and into production and maintenance.
·      Studied the current build and release process and automated it using shell scripts (a brief example follows this list).
·      Performed various builds for the QA and production environments.
·      Experienced in building Java applications using Ant build files and shell scripts.
·      Deployed applications to Tomcat/WebSphere Application Server.
·      Integrated Git and ClearCase with CruiseControl and Jenkins.
·      Resolved merge issues during build and release by conducting meetings with developers and managers.
·      DevOps for load-balanced and multi-regional server environments (AWS regional nodes managed via Chef roles and Ohai attributes).
·      Supported development engineers with configuration management issues and assisted seniors and project leaders with technical issues.
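A hypothetical sketch of the kind of shell-scripted build-and-deploy automation described above; the repository URL, Ant target, artifact name, and Tomcat path are placeholders:

#!/usr/bin/env bash
set -euo pipefail
SVN_URL=https://svn.example.com/repos/app/trunk   # placeholder repository
TOMCAT_HOME=/opt/tomcat                           # placeholder Tomcat install

svn checkout "$SVN_URL" app-src                   # fetch the latest source
cd app-src
ant clean war                                     # assumes build.xml defines a "war" target

cp dist/app.war "$TOMCAT_HOME/webapps/"           # Tomcat hot-deploys WARs dropped into webapps/
tail -n 20 "$TOMCAT_HOME/logs/catalina.out"       # quick check that the deployment started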
 
Environment: SVN (Subversion), AnthillPro, Ant, NAnt, Maven, Chef, TFS, Jenkins, ClearCase, UNIX, Linux, Perl, Jython, Python, Ruby, CruiseControl, AWS, Bamboo, Hudson, Git, JIRA, shell scripting, WebLogic.
 
Virtusa, Hyderabad, Telangana                                                            Feb 2014 – March 2015
Systems Administrator
Roles and Responsibilities:
·      Maintained the company's IT infrastructure, including servers, SAN, IP network, and backups.
·      Developed build and deployment scripts using Maven and Ant as build tools in Hudson to move builds from one environment to another.
·      Published release notes for all releases using Confluence.
·      Ensured that assigned systems were engineered, configured, and optimized for maximum functionality and availability.
·      Implemented solutions that reduced single points of failure and improved system uptime to 99.9% availability.
·      Led enterprise-wide hardware/software installations; oversaw a major server upgrade/expansion project that improved network access protection (NAP), terminal services, and network performance; and integrated new technologies into existing data-center environments.
·      Worked with Git and Subversion to manage source code.
·      Sent uptime and downtime notifications to teams regarding server status, as part of the build engineer role, when deploying EAR and WAR packages through the JBoss 4.2.3 admin console.
·      Built and deployed J2EE applications in JBoss using Python scripts (a brief example follows this list).
·      Involved in setting up Rally as the defect-tracking system and configured various workflows, customizations, and plugins for the Rally bug/issue tracker.
·      Strengthened system/network security and business-continuity planning as a member of the company's security incident response team.
·      Integrated Maven with Subversion to manage and deploy project-related tags.
·      Worked closely with web administrators to understand and participate in the technical architecture for platforms such as Apache, JBoss, WebSphere, and WebLogic, and to deploy J2EE applications to these environments.
·      Installed and administered a repository to deploy the artifacts generated by Maven and Ant and to store the dependent JARs used during the build.
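The deployments above were scripted in Python; as a shell-level sketch of the equivalent JBoss 4.2.x hot-deploy step, where the install path and artifact name are placeholders:

#!/usr/bin/env bash
set -euo pipefail
JBOSS_HOME=/opt/jboss-4.2.3.GA                          # placeholder install path
ARTIFACT=build/app.ear                                  # placeholder artifact from the build

cp "$ARTIFACT" "$JBOSS_HOME/server/default/deploy/"     # JBoss 4.2.x hot-deploys archives copied here
tail -n 20 "$JBOSS_HOME/server/default/log/server.log"  # confirm the deployment was picked up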
Environment: Java/J2EE, Maven, Subversion, Git, GitHub, UNIX, Rally, Shell, Artifactory, Hudson, Python, JBoss, WebSphere, Confluence, Linux, MySQL.
 
Education:
 
Master’s Degree in Computer Technology, Eastern Illinois University, IL                   2017-2018
 
Praveen Raj | Sales Recruiter
W 732.479.5649 | raj@techsmartglobal.com
666 Plainsboro Rd, Suite #1116, Plainsboro, New Jersey 08536