[GISel] Add KnownFPClass Analysis to GISelValueTrackingPass #134611
base: main
Conversation
@llvm/pr-subscribers-llvm-adt @llvm/pr-subscribers-llvm-selectiondag

Author: Tim Gymnich (tgymnich)

Changes: add KnownFPClass analysis to GISelValueTrackingPass

Patch is 107.08 KiB, truncated to 20.00 KiB below; full version: https://github.com/llvm/llvm-project/pull/134611.diff

10 Files Affected:
diff --git a/llvm/include/llvm/CodeGen/GlobalISel/GISelValueTracking.h b/llvm/include/llvm/CodeGen/GlobalISel/GISelValueTracking.h
index aa99bf321d2b1..1ae3b173d95ce 100644
--- a/llvm/include/llvm/CodeGen/GlobalISel/GISelValueTracking.h
+++ b/llvm/include/llvm/CodeGen/GlobalISel/GISelValueTracking.h
@@ -14,12 +14,15 @@
#ifndef LLVM_CODEGEN_GLOBALISEL_GISELVALUETRACKING_H
#define LLVM_CODEGEN_GLOBALISEL_GISELVALUETRACKING_H
+#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/DenseMap.h"
#include "llvm/CodeGen/GlobalISel/GISelChangeObserver.h"
#include "llvm/CodeGen/MachineFunctionPass.h"
#include "llvm/CodeGen/Register.h"
+#include "llvm/IR/InstrTypes.h"
#include "llvm/InitializePasses.h"
#include "llvm/Support/KnownBits.h"
+#include "llvm/Support/KnownFPClass.h"
namespace llvm {
@@ -41,6 +44,64 @@ class GISelValueTracking : public GISelChangeObserver {
unsigned computeNumSignBitsMin(Register Src0, Register Src1,
const APInt &DemandedElts, unsigned Depth = 0);
+ /// Returns a pair of values, which if passed to llvm.is.fpclass, returns the
+ /// same result as an fcmp with the given operands.
+ ///
+ /// If \p LookThroughSrc is true, consider the input value when computing the
+ /// mask.
+ ///
+ /// If \p LookThroughSrc is false, ignore the source value (i.e. the first
+ /// pair element will always be LHS).
+ std::pair<Register, FPClassTest> fcmpToClassTest(CmpInst::Predicate Pred,
+ const MachineFunction &MF,
+ Register LHS, Value *RHS,
+ bool LookThroughSrc = true);
+ std::pair<Register, FPClassTest> fcmpToClassTest(CmpInst::Predicate Pred,
+ const MachineFunction &MF,
+ Register LHS,
+ const APFloat *ConstRHS,
+ bool LookThroughSrc = true);
+
+ /// Compute the possible floating-point classes that \p LHS could be based on
+ /// fcmp \p Pred \p LHS, \p RHS.
+ ///
+ /// \returns { TestedValue, ClassesIfTrue, ClassesIfFalse }
+ ///
+ /// If the compare returns an exact class test, ClassesIfTrue ==
+ /// ~ClassesIfFalse
+ ///
+ /// This is a less exact version of fcmpToClassTest (e.g. fcmpToClassTest will
+ /// only succeed for a test of x > 0 implies positive, but not x > 1).
+ ///
+ /// If \p LookThroughSrc is true, consider the input value when computing the
+ /// mask. This may look through sign bit operations.
+ ///
+ /// If \p LookThroughSrc is false, ignore the source value (i.e. the first
+ /// pair element will always be LHS).
+ ///
+ std::tuple<Register, FPClassTest, FPClassTest>
+ fcmpImpliesClass(CmpInst::Predicate Pred, const MachineFunction &MF,
+ Register LHS, Register RHS, bool LookThroughSrc = true);
+ std::tuple<Register, FPClassTest, FPClassTest>
+ fcmpImpliesClass(CmpInst::Predicate Pred, const MachineFunction &MF,
+ Register LHS, FPClassTest RHS, bool LookThroughSrc = true);
+ std::tuple<Register, FPClassTest, FPClassTest>
+ fcmpImpliesClass(CmpInst::Predicate Pred, const MachineFunction &MF,
+ Register LHS, const APFloat &RHS,
+ bool LookThroughSrc = true);
+
+ void computeKnownFPClass(Register R, KnownFPClass &Known,
+ FPClassTest InterestedClasses, unsigned Depth);
+
+ void computeKnownFPClassForFPTrunc(const MachineInstr &MI,
+ const APInt &DemandedElts,
+ FPClassTest InterestedClasses,
+ KnownFPClass &Known, unsigned Depth);
+
+ void computeKnownFPClass(Register R, const APInt &DemandedElts,
+ FPClassTest InterestedClasses, KnownFPClass &Known,
+ unsigned Depth);
+
public:
GISelValueTracking(MachineFunction &MF, unsigned MaxDepth = 6);
virtual ~GISelValueTracking() = default;
@@ -86,6 +147,34 @@ class GISelValueTracking : public GISelChangeObserver {
/// \return The known alignment for the pointer-like value \p R.
Align computeKnownAlignment(Register R, unsigned Depth = 0);
+ /// Determine which floating-point classes are valid for \p R, and return them
+ /// in KnownFPClass bit sets.
+ ///
+ /// This function is defined on values with floating-point type, vectors of
+ /// floating-point type, and arrays of floating-point type.
+ ///
+ /// \p InterestedClasses is a compile time optimization hint for which
+ /// floating point classes should be queried. Queries not specified in \p
+ /// InterestedClasses should be reliable if they are determined during the
+ /// query.
+ KnownFPClass computeKnownFPClass(Register R, const APInt &DemandedElts,
+ FPClassTest InterestedClasses,
+ unsigned Depth);
+
+ KnownFPClass computeKnownFPClass(Register R,
+ FPClassTest InterestedClasses = fcAllFlags,
+ unsigned Depth = 0);
+
+ /// Wrapper to account for known fast math flags at the use instruction.
+ KnownFPClass computeKnownFPClass(Register R, const APInt &DemandedElts,
+ uint32_t Flags,
+ FPClassTest InterestedClasses,
+ unsigned Depth);
+
+ KnownFPClass computeKnownFPClass(Register R, uint32_t Flags,
+ FPClassTest InterestedClasses,
+ unsigned Depth);
+
// Observer API. No-op for non-caching implementation.
void erasingInstr(MachineInstr &MI) override {}
void createdInstr(MachineInstr &MI) override {}
diff --git a/llvm/include/llvm/CodeGen/GlobalISel/MIPatternMatch.h b/llvm/include/llvm/CodeGen/GlobalISel/MIPatternMatch.h
index 72483fbea5805..ccd898f0bfc39 100644
--- a/llvm/include/llvm/CodeGen/GlobalISel/MIPatternMatch.h
+++ b/llvm/include/llvm/CodeGen/GlobalISel/MIPatternMatch.h
@@ -14,9 +14,13 @@
#define LLVM_CODEGEN_GLOBALISEL_MIPATTERNMATCH_H
#include "llvm/ADT/APInt.h"
+#include "llvm/ADT/FloatingPointMode.h"
+#include "llvm/CodeGen/GlobalISel/GenericMachineInstrs.h"
#include "llvm/CodeGen/GlobalISel/Utils.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/TargetOpcodes.h"
#include "llvm/IR/InstrTypes.h"
+#include <optional>
namespace llvm {
namespace MIPatternMatch {
@@ -84,6 +88,12 @@ inline std::optional<int64_t> matchConstant(Register Reg,
return getIConstantVRegSExtVal(Reg, MRI);
}
+template <>
+inline std::optional<uint64_t> matchConstant(Register Reg,
+ const MachineRegisterInfo &MRI) {
+ return getIConstantVRegZExtVal(Reg, MRI);
+}
+
template <typename ConstT> struct ConstantMatch {
ConstT &CR;
ConstantMatch(ConstT &C) : CR(C) {}
@@ -103,6 +113,10 @@ inline ConstantMatch<int64_t> m_ICst(int64_t &Cst) {
return ConstantMatch<int64_t>(Cst);
}
+inline ConstantMatch<uint64_t> m_ICst(uint64_t &Cst) {
+ return ConstantMatch<uint64_t>(Cst);
+}
+
template <typename ConstT>
inline std::optional<ConstT> matchConstantSplat(Register,
const MachineRegisterInfo &);
@@ -119,6 +133,12 @@ matchConstantSplat(Register Reg, const MachineRegisterInfo &MRI) {
return getIConstantSplatSExtVal(Reg, MRI);
}
+template <>
+inline std::optional<uint64_t>
+matchConstantSplat(Register Reg, const MachineRegisterInfo &MRI) {
+ return getIConstantSplatZExtVal(Reg, MRI);
+}
+
template <typename ConstT> struct ICstOrSplatMatch {
ConstT &CR;
ICstOrSplatMatch(ConstT &C) : CR(C) {}
@@ -145,6 +165,10 @@ inline ICstOrSplatMatch<int64_t> m_ICstOrSplat(int64_t &Cst) {
return ICstOrSplatMatch<int64_t>(Cst);
}
+inline ICstOrSplatMatch<uint64_t> m_ICstOrSplat(uint64_t &Cst) {
+ return ICstOrSplatMatch<uint64_t>(Cst);
+}
+
struct GCstAndRegMatch {
std::optional<ValueAndVReg> &ValReg;
GCstAndRegMatch(std::optional<ValueAndVReg> &ValReg) : ValReg(ValReg) {}
@@ -393,6 +417,7 @@ inline bind_ty<const MachineInstr *> m_MInstr(const MachineInstr *&MI) {
inline bind_ty<LLT> m_Type(LLT &Ty) { return Ty; }
inline bind_ty<CmpInst::Predicate> m_Pred(CmpInst::Predicate &P) { return P; }
inline operand_type_match m_Pred() { return operand_type_match(); }
+inline bind_ty<FPClassTest> m_FPClassTest(FPClassTest &T) { return T; }
template <typename BindTy> struct deferred_helper {
static bool match(const MachineRegisterInfo &MRI, BindTy &VR, BindTy &V) {
@@ -762,6 +787,32 @@ struct CompareOp_match {
}
};
+template <typename LHS_P, typename Test_P, unsigned Opcode>
+struct ClassifyOp_match {
+ LHS_P L;
+ Test_P T;
+
+ ClassifyOp_match(const LHS_P &LHS, const Test_P &Tst) : L(LHS), T(Tst) {}
+
+ template <typename OpTy>
+ bool match(const MachineRegisterInfo &MRI, OpTy &&Op) {
+ MachineInstr *TmpMI;
+ if (!mi_match(Op, MRI, m_MInstr(TmpMI)) || TmpMI->getOpcode() != Opcode)
+ return false;
+
+ Register LHS = TmpMI->getOperand(1).getReg();
+ if (!L.match(MRI, LHS))
+ return false;
+
+ FPClassTest TmpClass =
+ static_cast<FPClassTest>(TmpMI->getOperand(2).getImm());
+ if (T.match(MRI, TmpClass))
+ return true;
+
+ return false;
+ }
+};
+
template <typename Pred, typename LHS, typename RHS>
inline CompareOp_match<Pred, LHS, RHS, TargetOpcode::G_ICMP>
m_GICmp(const Pred &P, const LHS &L, const RHS &R) {
@@ -804,6 +855,14 @@ m_c_GFCmp(const Pred &P, const LHS &L, const RHS &R) {
return CompareOp_match<Pred, LHS, RHS, TargetOpcode::G_FCMP, true>(P, L, R);
}
+/// Matches a G_IS_FPCLASS instruction with the given source operand and
+/// FPClassTest mask.
+template <typename LHS, typename Test>
+inline ClassifyOp_match<LHS, Test, TargetOpcode::G_IS_FPCLASS>
+m_GIsFPClass(const LHS &L, const Test &T) {
+ return ClassifyOp_match<LHS, Test, TargetOpcode::G_IS_FPCLASS>(L, T);
+}
+
// Helper for checking if a Reg is of specific type.
struct CheckType {
LLT Ty;
@@ -868,6 +927,176 @@ m_Not(const SrcTy &&Src) {
return m_GXor(Src, m_AllOnesInt());
}
+/// Matching combinators
+template <typename LTy, typename RTy> struct match_combine_or {
+ LTy L;
+ RTy R;
+
+ match_combine_or(const LTy &Left, const RTy &Right) : L(Left), R(Right) {}
+
+ template <typename OpTy>
+ bool match(const MachineRegisterInfo &MRI, OpTy &&Op) {
+ if (L.match(MRI, Op))
+ return true;
+ if (R.match(MRI, Op))
+ return true;
+ return false;
+ }
+};
+
+template <typename LTy, typename RTy> struct match_combine_and {
+ LTy L;
+ RTy R;
+
+ match_combine_and(const LTy &Left, const RTy &Right) : L(Left), R(Right) {}
+
+ template <typename OpTy>
+ bool match(const MachineRegisterInfo &MRI, OpTy &&Op) {
+ if (L.match(MRI, Op))
+ if (R.match(MRI, Op))
+ return true;
+ return false;
+ }
+};
+
+/// Combine two pattern matchers matching L || R
+template <typename LTy, typename RTy>
+inline match_combine_or<LTy, RTy> m_CombineOr(const LTy &L, const RTy &R) {
+ return match_combine_or<LTy, RTy>(L, R);
+}
+
+/// Combine two pattern matchers matching L && R
+template <typename LTy, typename RTy>
+inline match_combine_and<LTy, RTy> m_CombineAnd(const LTy &L, const RTy &R) {
+ return match_combine_and<LTy, RTy>(L, R);
+}
+
+template <typename Opnd_t> struct Argument_match {
+ unsigned OpI;
+ Opnd_t Val;
+
+ Argument_match(unsigned OpIdx, const Opnd_t &V) : OpI(OpIdx), Val(V) {}
+
+ template <typename OpTy>
+ bool match(const MachineRegisterInfo &MRI, OpTy &&Op) {
+ MachineInstr *TmpMI;
+ if (mi_match(Op, MRI, m_MInstr(TmpMI)))
+ return Val.match(
+ MRI, TmpMI->getOperand(TmpMI->getNumDefs() + 1 + OpI).getReg());
+ return false;
+ }
+};
+
+/// Match an argument.
+template <unsigned OpI, typename Opnd_t>
+inline Argument_match<Opnd_t> m_Argument(const Opnd_t &Op) {
+ return Argument_match<Opnd_t>(OpI, Op);
+}
+
+/// Intrinsic matchers.
+struct IntrinsicID_match {
+ unsigned ID;
+
+ IntrinsicID_match(Intrinsic::ID IntrID) : ID(IntrID) {}
+
+ template <typename OpTy>
+ bool match(const MachineRegisterInfo &MRI, OpTy &&Op) {
+ MachineInstr *TmpMI;
+ if (mi_match(Op, MRI, m_MInstr(TmpMI)))
+ if (auto *Intr = dyn_cast<GIntrinsic>(TmpMI))
+ return Intr->getIntrinsicID() == ID;
+ return false;
+ }
+};
+
+/// Intrinsic matches are combinations of ID matchers, and argument
+/// matchers. Higher arity matcher are defined recursively in terms of and-ing
+/// them with lower arity matchers. Here's some convenient typedefs for up to
+/// several arguments, and more can be added as needed
+template <typename T0 = void, typename T1 = void, typename T2 = void,
+ typename T3 = void, typename T4 = void, typename T5 = void,
+ typename T6 = void, typename T7 = void, typename T8 = void,
+ typename T9 = void, typename T10 = void>
+struct m_Intrinsic_Ty;
+template <typename T0> struct m_Intrinsic_Ty<T0> {
+ using Ty = match_combine_and<IntrinsicID_match, Argument_match<T0>>;
+};
+template <typename T0, typename T1> struct m_Intrinsic_Ty<T0, T1> {
+ using Ty =
+ match_combine_and<typename m_Intrinsic_Ty<T0>::Ty, Argument_match<T1>>;
+};
+template <typename T0, typename T1, typename T2>
+struct m_Intrinsic_Ty<T0, T1, T2> {
+ using Ty = match_combine_and<typename m_Intrinsic_Ty<T0, T1>::Ty,
+ Argument_match<T2>>;
+};
+template <typename T0, typename T1, typename T2, typename T3>
+struct m_Intrinsic_Ty<T0, T1, T2, T3> {
+ using Ty = match_combine_and<typename m_Intrinsic_Ty<T0, T1, T2>::Ty,
+ Argument_match<T3>>;
+};
+
+template <typename T0, typename T1, typename T2, typename T3, typename T4>
+struct m_Intrinsic_Ty<T0, T1, T2, T3, T4> {
+ using Ty = match_combine_and<typename m_Intrinsic_Ty<T0, T1, T2, T3>::Ty,
+ Argument_match<T4>>;
+};
+
+template <typename T0, typename T1, typename T2, typename T3, typename T4,
+ typename T5>
+struct m_Intrinsic_Ty<T0, T1, T2, T3, T4, T5> {
+ using Ty = match_combine_and<typename m_Intrinsic_Ty<T0, T1, T2, T3, T4>::Ty,
+ Argument_match<T5>>;
+};
+
+/// Match intrinsic calls like this:
+/// m_Intrinsic<Intrinsic::fabs>(m_Value(X))
+template <Intrinsic::ID IntrID> inline IntrinsicID_match m_GIntrinsic() {
+ return IntrinsicID_match(IntrID);
+}
+
+template <Intrinsic::ID IntrID, typename T0>
+inline typename m_Intrinsic_Ty<T0>::Ty m_GIntrinsic(const T0 &Op0) {
+ return m_CombineAnd(m_GIntrinsic<IntrID>(), m_Argument<0>(Op0));
+}
+
+template <Intrinsic::ID IntrID, typename T0, typename T1>
+inline typename m_Intrinsic_Ty<T0, T1>::Ty m_GIntrinsic(const T0 &Op0,
+ const T1 &Op1) {
+ return m_CombineAnd(m_GIntrinsic<IntrID>(Op0), m_Argument<1>(Op1));
+}
+
+template <Intrinsic::ID IntrID, typename T0, typename T1, typename T2>
+inline typename m_Intrinsic_Ty<T0, T1, T2>::Ty
+m_GIntrinsic(const T0 &Op0, const T1 &Op1, const T2 &Op2) {
+ return m_CombineAnd(m_GIntrinsic<IntrID>(Op0, Op1), m_Argument<2>(Op2));
+}
+
+template <Intrinsic::ID IntrID, typename T0, typename T1, typename T2,
+ typename T3>
+inline typename m_Intrinsic_Ty<T0, T1, T2, T3>::Ty
+m_GIntrinsic(const T0 &Op0, const T1 &Op1, const T2 &Op2, const T3 &Op3) {
+ return m_CombineAnd(m_GIntrinsic<IntrID>(Op0, Op1, Op2), m_Argument<3>(Op3));
+}
+
+template <Intrinsic::ID IntrID, typename T0, typename T1, typename T2,
+ typename T3, typename T4>
+inline typename m_Intrinsic_Ty<T0, T1, T2, T3, T4>::Ty
+m_GIntrinsic(const T0 &Op0, const T1 &Op1, const T2 &Op2, const T3 &Op3,
+ const T4 &Op4) {
+ return m_CombineAnd(m_GIntrinsic<IntrID>(Op0, Op1, Op2, Op3),
+ m_Argument<4>(Op4));
+}
+
+template <Intrinsic::ID IntrID, typename T0, typename T1, typename T2,
+ typename T3, typename T4, typename T5>
+inline typename m_Intrinsic_Ty<T0, T1, T2, T3, T4, T5>::Ty
+m_GIntrinsic(const T0 &Op0, const T1 &Op1, const T2 &Op2, const T3 &Op3,
+ const T4 &Op4, const T5 &Op5) {
+ return m_CombineAnd(m_GIntrinsic<IntrID>(Op0, Op1, Op2, Op3, Op4),
+ m_Argument<5>(Op5));
+}
+
} // namespace MIPatternMatch
} // namespace llvm
diff --git a/llvm/include/llvm/CodeGen/GlobalISel/Utils.h b/llvm/include/llvm/CodeGen/GlobalISel/Utils.h
index 44141844f42f4..f6101d5d589d2 100644
--- a/llvm/include/llvm/CodeGen/GlobalISel/Utils.h
+++ b/llvm/include/llvm/CodeGen/GlobalISel/Utils.h
@@ -183,6 +183,10 @@ std::optional<APInt> getIConstantVRegVal(Register VReg,
std::optional<int64_t> getIConstantVRegSExtVal(Register VReg,
const MachineRegisterInfo &MRI);
+/// If \p VReg is defined by a G_CONSTANT whose value fits in a uint64_t,
+/// returns it.
+std::optional<uint64_t> getIConstantVRegZExtVal(Register VReg,
+ const MachineRegisterInfo &MRI);
+
/// \p VReg is defined by a G_CONSTANT, return the corresponding value.
const APInt &getIConstantFromReg(Register VReg, const MachineRegisterInfo &MRI);
@@ -438,6 +442,17 @@ std::optional<int64_t> getIConstantSplatSExtVal(const Register Reg,
std::optional<int64_t> getIConstantSplatSExtVal(const MachineInstr &MI,
const MachineRegisterInfo &MRI);
+/// \returns the scalar zero extended integral splat value of \p Reg if
+/// possible.
+std::optional<uint64_t>
+getIConstantSplatZExtVal(const Register Reg, const MachineRegisterInfo &MRI);
+
+/// \returns the scalar zero extended integral splat value defined by \p MI if
+/// possible.
+std::optional<uint64_t>
+getIConstantSplatZExtVal(const MachineInstr &MI,
+ const MachineRegisterInfo &MRI);
+
/// Returns a floating point scalar constant of a build vector splat if it
/// exists. When \p AllowUndef == true some elements can be undef but not all.
std::optional<FPValueAndVReg> getFConstantSplat(Register VReg,
@@ -654,6 +669,9 @@ class GIConstant {
/// }
/// provides low-level access.
class GFConstant {
+ using VecTy = SmallVector<APFloat>;
+ using const_iterator = VecTy::const_iterator;
+
public:
enum class GFConstantKind { Scalar, FixedVector, ScalableVector };
@@ -671,6 +689,23 @@ class GFConstant {
/// Returns the kind of this constant, e.g., Scalar.
GFConstantKind getKind() const { return Kind; }
+ const_iterator begin() const {
+ assert(Kind != GFConstantKind::ScalableVector &&
+ "Expected fixed vector or scalar constant");
+ return Values.begin();
+ }
+
+ const_iterator end() const {
+ assert(Kind != GFConstantKind::ScalableVector &&
+ "Expected fixed vector or scalar constant");
+ return Values.end();
+ }
+
+ size_t size() const {
+ assert(Kind == GFConstantKind::FixedVector && "Expected fixed vector");
+ return Values.size();
+ }
+
/// Returns the value, if this constant is a scalar.
APFloat getScalarValue() const;
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index 16066226f1896..f339344704f34 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -51,6 +51,7 @@
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/KnownFPClass.h"
#include <algorithm>
#include <cassert>
#include <climits>
@@ -4165,6 +4166,13 @@ class TargetLowering : public TargetLoweringBase {
const MachineRegisterInfo &MRI,
unsigned Depth = 0) const;
+ virtual void computeKnownFPClassForTargetInstr(GISelValueTracking &Analysis,
+ Register R,
+ KnownFPClass &Known,
+ const APInt &DemandedElts,
+ const MachineRegisterInfo &MRI,
+ unsigned Depth = 0) const;
+
/// Determine the known alig...
[truncated]
✅ With the latest revision this PR passed the undef deprecator.
Force-pushed d5e75b2 to 05278bc (compare)
}

std::tuple<Register, FPClassTest, FPClassTest>
GISelValueTracking::fcmpImpliesClass(CmpInst::Predicate Pred,
This is more duplication from the IR version than I would hope. Can we cut out the Value / Register part of the first field and share the rest? Or should this be templatized?
Here is my attempt at templating this: https://github.com/tgymnich/llvm-project/blob/tim/gisel-value-tracking/llvm/include/llvm/ADT/FloatingPointModeUtils.h
In the end I wasn't quite happy with how I handled the LookThrough:
- Just passing a lambda is not extensible enough imo. E.g. what happens if we want to look through more than just fabs?
- `if constexpr` is also not too nice, since we'd need to include all the MIR and IR headers at the same time.
@arsenm I finally went ahead with the templating approach. What do you think?
Force-pushed 443db20 to 5b3d474 (compare)
Force-pushed 35a896d to 930a35c (compare)