Tpetra parallel linear algebra
Version of the Day
Namespace for Tpetra implementation details.
Namespaces

  DefaultTypes
      Declarations of values of Tpetra classes' default template parameters.
Classes

  struct AbsMax
      Functor for the ABSMAX CombineMode of Import and Export operations.

  class ContiguousUniformDirectory
      Implementation of Directory for a contiguous, uniformly distributed Map.

  struct CrsIJV
      Struct representing a sparse matrix entry as an i,j,v triplet.

  class Directory
      Computes the local ID and process ID corresponding to given global IDs.

  class DistributedContiguousDirectory
      Implementation of Directory for a distributed contiguous Map.

  class DistributedNoncontiguousDirectory
      Implementation of Directory for a distributed noncontiguous Map.

  class FixedHashTable

  struct GetLapackType
      Return the Teuchos::LAPACK specialization corresponding to the given Scalar type.

  struct Hash
      The hash function for FixedHashTable.

  struct Hash< KeyType, DeviceType, OffsetType, int >
      Specialization for ResultType = int.

  class HashTable

  class InvalidGlobalIndex
      Exception thrown by CrsMatrix on invalid global index.

  class InvalidGlobalRowIndex
      Exception thrown by CrsMatrix on invalid global row index.

  class MapCloner
      Implementation detail of Map::clone().

  struct MultiVectorCloner
      Implementation of Tpetra::MultiVector::clone().

  class MultiVectorFillerData
      Implementation of fill and local assembly for MultiVectorFiller.

  class MultiVectorFillerData2
      Second implementation of fill and local assembly for MultiVectorFiller.

  class OptColMap
      Implementation detail of makeOptimizedColMap and makeOptimizedColMapAndImport.

  struct OrdinalTraits
      Traits class for "invalid" (flag) values of integer types that Tpetra uses as local ordinals or global ordinals.

  struct PackTraits
      Traits class for packing / unpacking data of type T, using Kokkos data structures that live in the given space D.

  class ReplicatedDirectory
      Implementation of Directory for a locally replicated Map.

  class TieBreak
      Interface for breaking ties in ownership.

  class Transfer
      Common base class of Import and Export.
Enumerations
Functions

  void gathervPrint (std::ostream &out, const std::string &s, const Teuchos::Comm< int > &comm)
      On Process 0 in the given communicator, print strings from each process in that communicator, in rank order.

  template<class MapType >
  MapType makeOptimizedColMap (std::ostream &errStream, bool &lclErr, const MapType &domMap, const MapType &colMap)
      Return an optimized reordering of the given column Map.

  template<class MapType >
  std::pair< MapType, Teuchos::RCP< typename OptColMap< MapType >::import_type > > makeOptimizedColMapAndImport (std::ostream &errStream, bool &lclErr, const MapType &domMap, const MapType &colMap, const typename OptColMap< MapType >::import_type *oldImport, const bool makeImport)
      Return an optimized reordering of the given column Map. Optionally, recompute an Import from the input domain Map to the new column Map.

  std::string DistributorSendTypeEnumToString (EDistributorSendType sendType)
      Convert an EDistributorSendType enum value to a string.

  std::string DistributorHowInitializedEnumToString (EDistributorHowInitialized how)
      Convert an EDistributorHowInitialized enum value to a string.

  bool congruent (const Teuchos::Comm< int > &comm1, const Teuchos::Comm< int > &comm2)
      Whether the two communicators are congruent.
Detailed Description

Namespace for Tpetra implementation details.
Status of the graph's or matrix's storage, when not in a fill-complete state.

When a CrsGraph or CrsMatrix is not fill complete, its data live in one of three storage formats:

- "2-D storage": The graph stores column indices as an array of arrays, and the matrix stores values as an array of arrays. The graph must have k_numRowEntries_ allocated. This format only ever exists if the graph was created with DynamicProfile. A matrix with 2-D storage must own its graph, and that graph must have 2-D storage.

- "Unpacked 1-D storage": The graph uses a row offsets array and stores column indices in a single array; the matrix likewise stores values in a single array. "Unpacked" means that there may be extra space in each row: the row offsets array only says how much space is allocated for each row, so the graph must consult k_numRowEntries_ to find out how many entries each row actually holds. A matrix with unpacked 1-D storage must own its graph, and that graph must have unpacked 1-D storage.

- "Packed 1-D storage": Like unpacked 1-D storage, except that the row offsets array gives the exact number of entries in each row, so there is no extra space and no separate per-row entry count is needed.

With respect to the Kokkos refactor version of Tpetra, "2-D storage" should be considered a legacy option.

The phrase "when not in a fill-complete state" is important. When the graph is fill complete, it always uses 1-D "packed" storage. However, if storage is "not optimized," we retain the 1-D unpacked or 2-D format, and thus retain this enum value.
Definition at line 160 of file Tpetra_CrsGraph_decl.hpp.
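For intuition, the following schematic sketch (plain C++; this is not Tpetra's actual internal data structures, and the array names are illustrative only) contrasts unpacked and packed 1-D storage for a three-row graph:

    #include <cstddef>
    #include <vector>

    int main () {
      // Unpacked 1-D storage: rowOffsets gives the space ALLOCATED per row,
      // and a separate per-row count (the graph's k_numRowEntries_ plays
      // this role) gives the number of entries each row actually holds.
      std::vector<std::size_t> rowOffsets    = {0, 4, 8, 12}; // 4 slots per row
      std::vector<std::size_t> numRowEntries = {2, 3, 1};     // actual counts
      std::vector<int> colInds = {0, 2, -1, -1,   // row 0: 2 entries, 2 unused
                                  1, 2, 3, -1,    // row 1: 3 entries, 1 unused
                                  0, -1, -1, -1}; // row 2: 1 entry, 3 unused

      // Packed 1-D storage: the offsets are exact, so row r occupies
      // packedColInds[packedOffsets[r] .. packedOffsets[r+1]-1] and no
      // separate per-row count is needed.
      std::vector<std::size_t> packedOffsets = {0, 2, 5, 6};
      std::vector<int> packedColInds = {0, 2,  1, 2, 3,  0};

      (void) rowOffsets; (void) numRowEntries; (void) colInds;
      (void) packedOffsets; (void) packedColInds;
      return 0;
    }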
The type of MPI send that Distributor should use.
This is an implementation detail of Distributor. Please do not rely on these values in your code.
Definition at line 76 of file Tpetra_Distributor.hpp.
Enum indicating how and whether a Distributor was initialized.
This is an implementation detail of Distributor. Please do not rely on these values in your code.
Definition at line 94 of file Tpetra_Distributor.hpp.
void Tpetra::Details::gathervPrint (std::ostream &out, const std::string &s, const Teuchos::Comm< int > &comm)
On Process 0 in the given communicator, print strings from each process in that communicator, in rank order.
Each process in the given communicator comm sends its string s to Process 0 in that communicator; Process 0 prints the strings in rank order.

This is a collective over the given communicator comm. Process 0 promises not to store all the strings in its memory at once. This function's total memory usage on any process is proportional to the calling process' string length, plus the maximum string length over all processes; it does NOT depend on the number of processes in the communicator. Thus, we call this a "memory-scalable" operation. While the function's name suggests MPI_Gatherv, the implementation may NOT use MPI_Gather or MPI_Gatherv, because neither of those is memory scalable.
Process 0 prints nothing other than what is in the string. It does not add an endline after each string, nor does it identify each string with its owning process' rank. If you want either of those in the string, you have to put it there yourself.
Parameters:
  out   [out] The output stream to which to write. Only Process 0 in the given communicator will write to this stream, so it need only be valid on Process 0.
  s     [in] The string to write. Each process in the given communicator has its own string; strings may differ across processes, and zero-length strings are OK.
  comm  [in] The communicator over which this operation is a collective.
Definition at line 52 of file Tpetra_Details_gathervPrint.cpp.
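A minimal usage sketch follows. It assumes the function is declared in Tpetra_Details_gathervPrint.hpp (matching the definition file noted above) and uses Teuchos to set up MPI; adapt the setup to your application:

    #include <iostream>
    #include <sstream>
    #include "Teuchos_GlobalMPISession.hpp"
    #include "Teuchos_DefaultComm.hpp"
    #include "Tpetra_Details_gathervPrint.hpp"

    int main (int argc, char* argv[]) {
      Teuchos::GlobalMPISession mpiSession (&argc, &argv);
      auto comm = Teuchos::DefaultComm<int>::getComm ();

      // Each process composes its own message; strings may differ per
      // process.  gathervPrint adds no endlines, so include them yourself.
      std::ostringstream os;
      os << "Hello from Process " << comm->getRank () << "\n";

      // Process 0 prints every process' message in rank order.  Only
      // Process 0 needs a valid output stream.
      Tpetra::Details::gathervPrint (std::cout, os.str (), *comm);
      return 0;
    }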
template<class MapType >
MapType Tpetra::Details::makeOptimizedColMap (std::ostream &errStream, bool &lclErr, const MapType &domMap, const MapType &colMap)
Return an optimized reordering of the given column Map.
Template Parameters:
  MapType  A specialization of Map.

Parameters:
  errStream  [out] Output stream for human-readable error reporting. This is local to the calling process and may differ on different processes.
  lclErr     [out] On output: true if anything went wrong on the calling process. This value is local to the calling process and may differ on different processes.
  domMap     [in] Domain Map of a CrsGraph or CrsMatrix.
  colMap     [in] Original column Map of the same CrsGraph or CrsMatrix as domMap.

Returns: The new column Map, newColMap.

This function is a convenience wrapper for makeOptimizedColMapAndImport() (see that function's documentation below). It does everything that function does, except that it does not compute a new Import.
Definition at line 310 of file Tpetra_Details_makeOptimizedColMap.hpp.
template<class MapType >
std::pair< MapType, Teuchos::RCP< typename OptColMap< MapType >::import_type > > Tpetra::Details::makeOptimizedColMapAndImport (std::ostream &errStream, bool &lclErr, const MapType &domMap, const MapType &colMap, const typename OptColMap< MapType >::import_type *oldImport, const bool makeImport)
Return an optimized reordering of the given column Map. Optionally, recompute an Import from the input domain Map to the new column Map.
Template Parameters:
  MapType  A specialization of Map.

This function takes a domain Map and a column Map of a distributed graph (Tpetra::CrsGraph) or matrix (e.g., Tpetra::CrsMatrix). It then creates a new column Map, which optimizes the performance of an Import operation from the domain Map to the new column Map. This function also optionally creates that Import. Creating the new column Map and its Import at the same time saves some communication, since making the Import requires some of the same information that optimizing the column Map does.

Parameters:
  errStream   [out] Output stream for human-readable error reporting. This is local to the calling process and may differ on different processes.
  lclErr      [out] On output: true if anything went wrong on the calling process. This value is local to the calling process and may differ on different processes.
  domMap      [in] Domain Map of a CrsGraph or CrsMatrix.
  colMap      [in] Original column Map of the same CrsGraph or CrsMatrix as domMap.
  makeImport  [in] Whether to make and return an Import from the input domain Map to the new column Map.

Returns: The new column Map, newColMap, and the corresponding Import from domMap to newColMap. The latter is nonnull if and only if makeImport is true.

Preconditions: domMap and colMap must have the same or congruent communicators, and the indices in colMap must be a subset of the indices in domMap.

The returned column Map's global indices (GIDs) have the following order on all calling processes:

- GIDs that occur in both colMap and domMap (on the calling process) go first.
- GIDs in colMap on the calling process, but not in the domain Map on the calling process, follow. They are ordered first contiguously by their owning process rank (in the domain Map), then in increasing order within that.

This ordering imitates that of AztecOO and Epetra. Storing indices owned by the same process (in the domain Map) contiguously permits the use of contiguous send and receive buffers in Distributor, which an Import operation uses.
Definition at line 383 of file Tpetra_Details_makeOptimizedColMap.hpp.
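A hedged usage sketch, assuming the function is declared in Tpetra_Details_makeOptimizedColMap.hpp (matching the definition file noted above), that domMap and colMap come from an existing CrsGraph or CrsMatrix, and that passing NULL for oldImport is acceptable when no previous Import is available:

    #include <iostream>
    #include <sstream>
    #include "Tpetra_Map.hpp"
    #include "Tpetra_Details_makeOptimizedColMap.hpp"

    using map_type = Tpetra::Map<>;

    // Build the optimized column Map and its matching Import in one call,
    // given the domain Map and original column Map of a graph or matrix.
    void optimizeColMap (const map_type& domMap, const map_type& colMap) {
      std::ostringstream errStream; // per-process human-readable errors
      bool lclErr = false;

      auto result = Tpetra::Details::makeOptimizedColMapAndImport (
        errStream, lclErr, domMap, colMap,
        NULL,   // oldImport: none available to reuse (see lead-in caveat)
        true);  // makeImport: also build the Import from domMap to newColMap

      if (lclErr) {
        std::cerr << errStream.str (); // error state is local to this process
        return;
      }

      const map_type& newColMap = result.first; // the reordered column Map
      auto newImport = result.second;           // nonnull since makeImport == true
      (void) newColMap; (void) newImport;
    }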
std::string Tpetra::Details::DistributorSendTypeEnumToString (EDistributorSendType sendType)
Convert an EDistributorSendType enum value to a string.
This is an implementation detail of Distributor. Please do not rely on this function in your code.
Definition at line 50 of file Tpetra_Distributor.cpp.
std::string Tpetra::Details::DistributorHowInitializedEnumToString (EDistributorHowInitialized how)
Convert an EDistributorHowInitialized enum value to a string.
This is an implementation detail of Distributor. Please do not rely on this function in your code.
Definition at line 71 of file Tpetra_Distributor.cpp.
bool Tpetra::Details::congruent (const Teuchos::Comm< int > &comm1, const Teuchos::Comm< int > &comm2)
Whether the two communicators are congruent.
Two communicators are congruent when they have the same number of processes, and those processes occur in the same rank order.

If both communicators are Teuchos::MpiComm instances, this function returns true exactly when MPI_Comm_compare returns MPI_IDENT (the communicators are handles for the same object) or MPI_CONGRUENT on their MPI_Comm handles. Any two Teuchos::SerialComm instances are always congruent. An MpiComm instance is congruent to a SerialComm instance if and only if the MpiComm has one process. This function is symmetric in its arguments.

If either Teuchos::Comm instance is neither an MpiComm nor a SerialComm, this method cannot do any better than to compare their process counts.
Definition at line 65 of file Tpetra_Util.cpp.
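A small sketch, assuming congruent is declared in Tpetra_Util.hpp (matching the definition file noted above):

    #include <iostream>
    #include "Teuchos_GlobalMPISession.hpp"
    #include "Teuchos_DefaultComm.hpp"
    #include "Tpetra_Util.hpp"

    int main (int argc, char* argv[]) {
      Teuchos::GlobalMPISession mpiSession (&argc, &argv);
      auto comm = Teuchos::DefaultComm<int>::getComm ();

      // A duplicate has the same processes in the same rank order, so it
      // is congruent to the original (MPI_Comm_compare yields MPI_CONGRUENT).
      auto commCopy = comm->duplicate ();
      const bool same = Tpetra::Details::congruent (*comm, *commCopy);

      if (comm->getRank () == 0) {
        std::cout << "Congruent? " << (same ? "yes" : "no") << std::endl;
      }
      return 0;
    }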