What is TFS 2017?

For more information, see the API reference documentation. You can send release notifications when new releases are created, deployments are started or completed, or approvals are pending or completed, and you can integrate with third-party tools such as Slack to receive these notifications. For more details, see the Azure Classic service endpoint documentation.

In this release, we are migrating test result artifacts to a new, more compact and efficient storage schema. Because test results are one of the top consumers of storage space in TFS databases, we expect this change to reduce the storage footprint of TFS databases.
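As a rough illustration of wiring up such a Slack notification, the sketch below builds a service-hook subscription payload. The publisher, event, and consumer identifiers and the subscriptions route are assumptions for illustration, not confirmed TFS 2017 values; check the Service Hooks API reference for your server.

```python
import json

# Hypothetical service-hook subscription payload for posting release
# notifications to a Slack channel. All ids below are illustrative
# assumptions, not verified TFS 2017 values.
subscription = {
    "publisherId": "rm",                                  # Release Management events (assumed id)
    "eventType": "ms.vss-release.release-created-event",  # "release created" event (assumed id)
    "consumerId": "slack",
    "consumerActionId": "postMessageToChannel",
    "consumerInputs": {
        # Placeholder Slack incoming-webhook URL
        "url": "https://hooks.slack.com/services/EXAMPLE"
    },
}

# The JSON body you would POST to
# <collection-url>/_apis/hooks/subscriptions (route assumed).
body = json.dumps(subscription, indent=2)
print(body)
```

A real integration would send this body with an authenticated POST; the point here is only the shape of the subscription (event on one side, consumer action on the other).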

For customers upgrading from earlier versions of TFS, test results are migrated to the new schema during the TFS upgrade. This migration may result in long upgrade times, depending on how much test result data exists in your databases. It is advisable to configure the test retention policy and wait for it to kick in and reduce the storage used by test results, so that the TFS upgrade is faster. See TFSConfig.

If you do not have the flexibility to configure test retention or clean up test results before upgrading, make sure you plan accordingly for the upgrade window. See Test result data retention with Team Foundation Server for more examples of configuring the test retention policy.

We have brought test configuration management to the web UI by adding a new Configurations tab within the Test hub (Figure). You can now create and manage test configurations and test configuration variables from within the Test hub.

For more information, see Create configurations and configuration variables.

Assigning configurations just got easier. You can assign test configurations to a test plan, test suite, or test cases directly from within the Test hub (Figure). Right-click an item, select Assign configurations to …, and you're off and running.

You can also filter by Configurations in the Test hub (Figure). For more information, see Assign configurations to Test plans and Test suites.

We have added new columns to the Test results pane that show the test plan and test suite in which the test results were executed.

These columns provide much-needed context when drilling into results for your tests (Figure).

You can now order manual tests from within the Test hub (Figure 65), irrespective of the type of suite in which they are included: static, requirement-based, or query-based. Simply drag and drop one or more tests, or use the context menu to reorder them. Once the ordering is completed, you can sort your tests by the Order field and then run them in that order from the Web runner.

You can also order the tests directly on a user story card on the Kanban board (Figure).

Test teams can now order test suites as per their needs. Previously, suites were only ordered alphabetically.

As part of the rollout of new identity picker controls across the different hubs, we have also enabled the option in the Test hub to search for users when assigning testers to one or more tests (Figure).

You can now pick the build you want to test with and then launch the Web runner, using 'Run with options' in the Test hub (Figure). Any bug filed during the run is automatically associated with the selected build.

In addition, the test outcome is published against that specific build. The Microsoft Test Runner launches without opening the entire Microsoft Test Manager shell, and shuts down on completion of the test execution. For more information, see Run tests for desktop apps.

You can now choose your data collectors and launch the Exploratory Runner client quickly from the Test hub, without having to configure them in the Microsoft Test Manager client. Invoke 'Run with options' from the context menu (Figure 72) for a requirement-based suite, and choose Exploratory runner and the data collectors you need.

The Exploratory runner launches in the same way as the Microsoft Test Runner described above.

We have added the ability to configure the behavior of test outcomes for tests shared across different test suites under the same test plan (Figure). You can set the "Configure test outcomes" option for a particular test plan either from the Test hub test plan context menu, or from the Kanban board test page in the common settings configuration dialog.

This option is turned off by default; you must explicitly enable it for it to take effect.

You can now verify a bug by re-running the tests that identified it (Figure). Invoke the Verify option from the bug work item form context menu to launch the relevant test case in the web runner.

Perform your validation using the web runner, and update the bug work item directly from within it.

You can now add, view, and interact with test cases directly from your stories on the Kanban board. Use the new Add Test menu option to create a linked test case, and then monitor status directly from the card as things progress (Figure). With this new capability, you can now perform these actions directly from a card on your board.

If you need advanced test management capabilities, such as assigning testers, assigning configurations, centralized parameters, or exporting test results, you can switch to the Test hub. For more information, see Add, run, and update inline tests.

Clicking on this link (Figure 76) takes you to the Test hub, opens the right test plan, and then selects the specific suite that controls those inline tests. Use the new Tests page in the common settings configuration dialog on the Kanban board to control the test plan in which the inline tests are created (Figure). You can now override the default behavior by configuring an existing test plan of your choice; all the tests are then added to the selected test plan.

Note that this functionality is only enabled if the Test annotation is turned on.

We have enhanced the Web test runner to let you add test step attachments during manual testing (Figure). These step result attachments automatically show up in any bugs you file during the session, and subsequently in the Test results pane.

You can now take screenshots and annotate them inline when you use the Web runner in Chrome (Figure). You can also capture on-demand screen recordings, not just of your web apps, but also of your desktop apps. These screenshots and screen recordings are automatically added to the current test step. You need to specify the browser window on which to capture your actions; all actions on that window (any existing or new tabs you open in that window, or any new child browser windows you launch) are automatically captured and correlated against the test steps being tested in the Web runner.

These screenshots, screen recordings, and image action logs are then added to any bugs you file during the run and attached to the current test result. Similarly, system information data is automatically captured and included as part of any bugs you file from the Web runner. For more information, see Collect diagnostic data during tests.

When running tests in the Web runner, launched either from a card on the board or from a requirement-based suite in the Test hub, any new bugs filed are now automatically created as children of that user story. Similarly, if you are exploring a user story from the exploratory testing extension, any new bugs you file are also created as children of that user story.

This new behavior allows for simpler traceability across stories and bugs. It applies only if the "Working with bugs" setting in the Common Settings Configuration page is set to "Bugs do not appear on backlogs or board" or "Bugs appear on the backlogs and boards with tasks". For all other "Working with bugs" settings, and in certain other scenarios (such as adding to an existing bug that already has a parent defined), a Related link is created instead.
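The link-type rule above can be summarized in a short sketch. This is my simplified reading of the described behavior, not product code; the setting strings and the parent check come from the paragraph above.

```python
# Simplified model of the rule: a Child link to the user story is created
# only for the two listed "Working with bugs" settings, and only when the
# bug does not already have a parent; otherwise a Related link is created.
CHILD_LINK_SETTINGS = {
    "Bugs do not appear on backlogs or board",
    "Bugs appear on the backlogs and boards with tasks",
}

def bug_link_type(working_with_bugs: str, bug_has_parent: bool) -> str:
    """Return the link type the runner would create toward the user story."""
    if working_with_bugs in CHILD_LINK_SETTINGS and not bug_has_parent:
        return "Child"
    return "Related"

print(bug_link_type("Bugs do not appear on backlogs or board", False))  # Child
print(bug_link_type("Bugs do not appear on backlogs or board", True))   # Related
```

In particular, adding to an existing bug that already has a parent always yields a Related link, regardless of the setting.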

In addition to creating new bugs from the Web runner, you can now also update an existing bug (Figure). All the diagnostic data collected, repro steps, and links for traceability from the current session are automatically added to the existing bug.

You can now do exploratory testing for a specific work item (Figure). This lets you associate the selected work item with your ongoing testing session, and view its acceptance criteria and description from within the extension.

It also creates end-to-end traceability between the selected work item and the bugs or tasks that you file against it. You can explore the work item either directly from the work item itself or from within the extension; we have added entry points on all cards, grids, and in the Test hub.

Image Action Log: the extension gives you a new option to automatically add the steps that led you to the bug with just one click. Select the "Include image action log" option (Figure 83) to capture mouse, keyboard, and touch actions, and add the corresponding text and images directly into the bug or task.

Screen recording as video: you can also capture on-demand screen recordings using the extension.

These screen recordings can be captured not just from web apps, but also from your desktop apps. Using the extension's "Options" page, you can configure the extension to automatically stop screen recordings and attach them to a bug being filed.

Page Load Data: we have added a new background capture capability to the extension: capturing of "web page load" data.

Just as the "image action log" captures, in the background, images of the actions you perform on the web app being explored, the "page load" functionality automatically captures details of the web page's load operation.

Once the bug is filed, in addition to the tile view, a detailed report is also attached to the bug, which can help the developer with their initial set of investigations.

When you create test cases during your exploratory session, the test steps, with images, are automatically filled in for you (Figure). Simultaneous test design and test execution is the basis of true exploratory testing, and this new capability makes that a reality.

For more information, see Create test cases based on image action log data.

You can get to this insights page by clicking the "Recent exploratory sessions" link in the Runs hub, within the Test hub group, in web access. This new view helps you derive meaningful insights.

For more information, see Get insights across your exploratory testing sessions. You start by specifying a shared query for the work items you are interested in; the sessions page then shows a list of all the work items from the query, with a breakdown of both explored and unexplored items in the summary section.

In addition, using the "Unexplored Work Item" group-by pivot, you can see the list of items that have not been explored yet. This is extremely useful for tracking down how many stories have not yet been explored or gone through a bug bash.

This opens the Request feedback form, where you can choose the stakeholders you want feedback from and optionally provide a simple set of instructions indicating the areas of the product on which you would like input. This sends individual mails to the selected stakeholders, along with the instructions provided, if any.

Additionally, stakeholders can navigate to the "Feedback requests" page to view, in one place, all feedback requests they have received. From the list, they can select the feedback request they want to respond to, manage their "Pending feedback requests" (Figure 88) by marking them as complete or declining them, and switch between different types of feedback requests by clicking the desired radio button (Figure). In addition to the solicited flow described above, stakeholders can also use the extension to provide voluntary feedback (Figure).

Test result console logs are now captured in the .trx file.

You have the option to preview them in the Tests tab, and no longer need to download the .trx file to view logs.

We have added a new 'Test result trend' widget to the Widget Gallery (Figure). Use this widget to add to the dashboard a test result trend chart covering up to the 30 most recent builds of a build definition.

Widget configuration options can help you customize the chart to include pivots like passed test count, failed test count, total test count, pass percentage, and test duration.

It is a recommended practice to use Release Environments to deploy applications and run tests against them.

With this release, we have integrated the test pass rate of Release Environments into the Environments section of the Release summary page (Figure). As shown in the screenshot, if an environment has failed, you can quickly infer whether the failure is due to failing tests by looking at the Tests column. You can click the pass rate to navigate to the Tests tab and investigate the failing tests for that environment.

It is a common scenario for an individual test to run on multiple branches, environments, and configurations. When such a test fails, it is important to identify whether the failure is contained to development branches like the main branch, or whether it also impacts release branches that deploy to production environments.

You can now visualize the history of a test across the various branches it is testing by looking at the History tab in the Result summary page (Figure). Similarly, you can group by the Environment pivot to visualize the history of a test across the different Release Environments in which it is run.

Users can now track the quality of their Requirements right on their dashboard (Figure). We already have a solution for Requirements quality for our planned-testing users, and we are now bringing it to users who follow continuous testing.

Users can link automated tests directly to Requirements, and then use dashboard widgets to track the quality of the Requirements they are interested in, pulling the quality data from Build or Release.

We have enabled tests from within an assembly to be distributed to remote machines using the Run Functional Tests task (Figure). Previously, you could distribute tests only at the assembly level. This is enabled using a check box in the task.

Users can dynamically set up test machines in the cloud with Azure, or on premises using SCVMM or VMware, and use these machines to run their tests in a distributed manner.

You can now trigger a SonarQube analysis in the Maven and Gradle build tasks by checking 'Run SonarQube Analysis' and providing the endpoint, the SonarQube project name, the project key, and the version (Figure). You will also now get a link to the SonarQube project (Figure). You can request a full analysis to see the quality gate details, and choose to break the build if they are not met.
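For a Maven build, checking 'Run SonarQube Analysis' effectively amounts to running the sonar goal with the analysis properties supplied from the task fields. The sketch below assembles such a command line; the property names are standard SonarQube analysis parameters, and the concrete values are placeholders.

```python
# Sketch of the sonar goal invocation the Maven task effectively triggers.
# Host URL, project key/name, and version are the values you supply in the
# build task; the ones used below are placeholders.
def sonar_maven_args(host_url, project_key, project_name, version):
    return [
        "mvn", "sonar:sonar",
        f"-Dsonar.host.url={host_url}",
        f"-Dsonar.projectKey={project_key}",
        f"-Dsonar.projectName={project_name}",
        f"-Dsonar.projectVersion={version}",
    ]

args = sonar_maven_args(
    "https://sonarqube.example.com", "my-project-key", "My Project", "1.0"
)
print(" ".join(args))
```

In the build task itself you never type this command; the endpoint and the four fields described above supply each of these properties for you.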

For more information, see "The Gradle build task now supports SonarQube analysis".

Project collection administrators can now browse to the Visual Studio Marketplace from a Team Foundation Server and install free extensions in a team project collection.

The extensions are automatically downloaded from the Visual Studio Marketplace, uploaded to the Team Foundation Server, and installed in the selected team project collection (Figure). Project collection administrators can also buy paid extensions from the Visual Studio Marketplace and install them in a selected team project collection (Figure). The administrator can pay for extensions with an Azure subscription and select the number of users to whom these extensions are assigned.

These extensions are likewise automatically downloaded from the Visual Studio Marketplace, uploaded to the Team Foundation Server, and installed in the selected team project collection. For more details, see the Get extensions for Team Foundation Server documentation.

In this release, we removed the NTLM/Kerberos authentication setting from the configuration experience.

If you want to continue using NTLM authentication, you do not need to take any action. If you have been using Kerberos authentication and want to continue doing so, you also do not need to take any action.

With this configuration, Kerberos authentication is used where possible, providing enhanced security. We did extensive testing to ensure that there would not be any impact on existing TFS deployments using NTLM authentication due to this change.

In this release, we are enabling a new and improved top navigation bar. There are two core goals for the new navigation. Since this is a big change for our users, and the feature is still being iterated on, we decided to have the new navigation UX off by default. If you want to try it, you can enable it by going to the Team Foundation Server admin area (Control Panel) and choosing "Turn on new navigation".

Note that this enables it for all users on the server.

The permission controlling which users can rename a team project has changed. Previously, users with the Edit project-level information permission for a team project could rename it. Now users can be granted or denied the ability to rename a team project through the new Rename team project permission.

We have introduced a new "Work" hub in the Admin settings page that combines general settings (Figure), iterations, and areas in a single tab.

With this change, users will see clear differences between project-level settings and team settings. For team settings, users will only see areas and iterations that are relevant to their team.

At a project level, the settings page will enable admins to manage areas and iterations for the entire project. Additionally, for project area paths, a new column called "Teams" has been added to make it convenient for admins to tell quickly and easily which teams have selected a specific area path.

This public API allows users to get the process configuration of a given project. The process configuration contains a number of settings.

Team Foundation Server introduces a new experience to manage groups and group membership.

We then move on to Work Item Tracking, which is where requirements, tasks, bugs, and more are defined and tracked throughout the project.
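The process-configuration API mentioned at the start of this section is a simple GET against a project-scoped route. The sketch below constructs such a URL; the route segment and api-version are assumptions for illustration and should be checked against the REST API reference for your TFS version.

```python
# Illustrative URL construction for a process-configuration GET call.
# Route ("work/processconfiguration") and api-version are assumed values,
# not verified against a specific TFS release.
def process_config_url(collection_url, project, api_version="3.0-preview"):
    return (
        f"{collection_url}/{project}"
        f"/_apis/work/processconfiguration?api-version={api_version}"
    )

url = process_config_url("http://tfs:8080/tfs/DefaultCollection", "Fabrikam")
print(url)
```

An authenticated GET on a URL like this would return the project's process configuration as JSON, which a tool could then inspect for the settings it needs.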

This includes how to branch and merge following best practices, before moving into unit testing and code quality features. We will examine the new build system and how to configure continuous integration (CI), and the final topic in the course looks at the new Package Management features introduced in TFS to allow teams to easily reuse packages across their applications.

In this chapter, you will learn what a task group is and how to create and use one. Using the history and the comments provided by those who update the definitions, you will be able to identify changes made to build or release definitions. You will also learn about grouping build or release definitions using folders, and about the use of tags.

In this chapter, you will learn how to use Team Services builds to build code in GitHub, and how to build Java code with Team Services builds.

Using a similar mechanism, you will be able to build code in other repositories, such as Subversion. This chapter gives you an overview of test automation, as well as of the capabilities of Team Foundation build and release management for running automated tests with build and deployment processes. Hands-on lessons will guide you step by step through unit test integration, functional test integration, and cloud-based load-test execution with TFS and Team Services.

Streamlining Dynamics CRM deployments is always a challenging task because there is limited support from development environments. This chapter can be skipped if you are not familiar with Dynamics CRM development. Release notes, depending on the deployment environment, are important as they identify what is being delivered to the target environment.

This provides visibility and traceability from inception of the requirements through to delivery and then production.
